A Visual Model for the Variance and Standard Deviation
ERIC Educational Resources Information Center
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
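The "average square" picture can be made concrete in a few lines of code (an illustrative sketch, not taken from the paper): each squared deviation is the area of a square, the variance is the mean of those areas, and the standard deviation is the side length of the average square.

```python
# Illustrative sketch of the "average square" view of the variance:
# each deviation from the mean is drawn as a square of side |x - mean|;
# the variance is the mean square area, and the standard deviation is
# the side of a square having that average area.
import math

def average_square_sd(data):
    """Return (areas, variance, sd) under the average-square picture."""
    mean = sum(data) / len(data)
    areas = [(x - mean) ** 2 for x in data]   # one square per observation
    variance = sum(areas) / len(areas)        # average square area (population variance)
    sd = math.sqrt(variance)                  # side length of the average square
    return areas, variance, sd

areas, var, sd = average_square_sd([2, 4, 4, 4, 5, 5, 7, 9])
print(var, sd)  # 4.0 2.0
```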
The linear sizes tolerances and fits system modernization
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.
2018-04-01
The study addresses the pressing problem of ensuring the quality of technical products through the tolerancing of component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, first, to classify as linear sizes the additional linear coordinating sizes that determine the location of detail elements and, second, to justify the basic deviation of the tolerance interval for an element's linear size. The research uses geometrical modeling of real detail elements together with analytical and experimental methods. It is shown that linear coordinates form the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select in the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The basic deviation of this coordinating tolerance is the average zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system are retained for normalizing the accuracy of the elements' linear sizes, with a fundamental change in the basic deviation of all tolerance intervals to the maximum deviation corresponding to the limit of the element material: EI is the lower deviation for the sizes of internal elements (holes), and es is the upper deviation for the sizes of external elements (shafts). It is the maximum-material sizes that participate in the mating of dimensional elements, shafts and holes, and determine the type of fit.
Comparing Standard Deviation Effects across Contexts
ERIC Educational Resources Information Center
Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.
2017-01-01
Studies using tests scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…
Formula for the Number of Replicates Required for a Specified Margin of Relative Error in the Estimate of the Repeatability Standard Deviation
McClure, Foster D; Lee, Jung K
2005-01-01
Sample size formulas are developed to estimate the repeatability and reproducibility standard deviations (Sr and SR) such that the actual errors in Sr and SR relative to their respective true values, σr and σR, are at predefined levels. The statistical consequences associated with the sample size required by AOAC INTERNATIONAL to validate an analytical method are discussed. In addition, formulas to estimate the uncertainties of Sr and SR were derived and are provided as supporting documentation.
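The large-sample relationship behind such sample-size calculations can be sketched as follows. This uses the textbook approximation SE(S) ≈ σ/√(2(n−1)) for normal data, not McClure and Lee's exact derivation; the function name and the 95% default are illustrative.

```python
# Hedged sketch: replicates needed so the sample standard deviation S is
# within a given relative margin of the true sigma, using the textbook
# large-sample approximation SE(S)/sigma ~ 1/sqrt(2(n-1)). Not the exact
# formula from McClure and Lee (2005).
import math
from statistics import NormalDist

def replicates_for_sd_margin(rel_error, confidence=0.95):
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided critical value
    # Solve z / sqrt(2(n - 1)) <= rel_error for n.
    return math.ceil(1 + 0.5 * (z / rel_error) ** 2)

print(replicates_for_sd_margin(0.20))  # 50 replicates for +/-20% at 95%
print(replicates_for_sd_margin(0.10))  # 194 replicates for +/-10% at 95%
```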
Exploring Students' Conceptions of the Standard Deviation
ERIC Educational Resources Information Center
delMas, Robert; Liu, Yan
2005-01-01
This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…
Introducing the Mean Absolute Deviation "Effect" Size
ERIC Educational Resources Information Center
Gorard, Stephen
2015-01-01
This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
Standard Deviation for Small Samples
ERIC Educational Resources Information Center
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
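One such representation (a standard identity, offered here as an illustration; it may not be the exact form in the paper) writes the unbiased sample variance in terms of squared pairwise differences, which is easy to evaluate by hand for integer data with n = 3 or 4.

```python
# Illustrative identity: the unbiased sample variance equals
#     s^2 = (1 / (n(n-1))) * sum over i<j of (x_i - x_j)^2,
# i.e. the sum of all squared pairwise differences divided by n(n-1).
from itertools import combinations
from statistics import variance

def pairwise_variance(data):
    n = len(data)
    return sum((a - b) ** 2 for a, b in combinations(data, 2)) / (n * (n - 1))

data = [3, 5, 10]
print(pairwise_variance(data))  # 13.0 -- pairs give 4 + 49 + 25 = 78; 78/6 = 13
```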
Packing Fraction of a Two-dimensional Eden Model with Random-Sized Particles
NASA Astrophysics Data System (ADS)
Kobayashi, Naoki; Yamazaki, Hiroshi
2018-01-01
We have performed a numerical simulation of a two-dimensional Eden model with random-size particles. In the present model, the particle radii are generated from a Gaussian distribution with mean μ and standard deviation σ. First, we have examined the bulk packing fraction for the Eden cluster and investigated the effects of the standard deviation and the total number of particles NT. We show that the bulk packing fraction depends on the number of particles and the standard deviation. In particular, for the dependence on the standard deviation, we have determined the asymptotic value of the bulk packing fraction in the limit of the dimensionless standard deviation. This value is larger than the packing fraction obtained in a previous study of the Eden model with uniform-size particles. Secondly, we have investigated the packing fraction of the entire Eden cluster including the effect of the interface fluctuation. We find that the entire packing fraction depends on the number of particles while it is independent of the standard deviation, in contrast to the bulk packing fraction. In a similar way to the bulk packing fraction, we have obtained the asymptotic value of the entire packing fraction in the limit NT → ∞. The obtained value of the entire packing fraction is smaller than that of the bulk value. This fact suggests that the interface fluctuation of the Eden cluster influences the packing fraction.
Estimation of Tooth Size Discrepancies among Different Malocclusion Groups.
Hasija, Narender; Bala, Madhu; Goyal, Virender
2014-05-01
Regards and Tribute: Late Dr Narender Hasija was a mentor and visionary in the light of knowledge and experience. We pay our regards with deepest gratitude to the departed soul, may it rest in peace. Bolton's ratios help in estimating overbite and overjet relationships, the effects of contemplated extractions on posterior occlusion, incisor relationships, and the identification of occlusal misfit produced by tooth size discrepancies. The aim was to determine any difference in tooth size discrepancy, in the anterior as well as the overall ratio, in different malocclusions and to compare the results with Bolton's study. After measuring the teeth of all 100 patients, Bolton's analysis was performed. Results were compared with Bolton's means and standard deviations and were also subjected to statistical analysis. Results show that the means and standard deviations of ideal occlusion cases are comparable with those of Bolton; however, when the means and standard deviations of the malocclusion groups are compared with Bolton's, the standard deviations are higher, though the means are comparable. How to cite this article: Hasija N, Bala M, Goyal V. Estimation of Tooth Size Discrepancies among Different Malocclusion Groups. Int J Clin Pediatr Dent 2014;7(2):82-85.
[Effect strength variation in the single group pre-post study design: a critical review].
Maier-Riehle, B; Zwingmann, C
2000-08-01
In Germany, studies in rehabilitation research--in particular evaluation studies and examinations of quality of outcome--have so far mostly been executed according to the uncontrolled one-group pre-post design. Assessment of outcome is usually made by comparing the pre- and post-treatment means of the outcome variables. The pre-post differences are checked, and in case of significance, the results are increasingly presented in form of effect sizes. For this reason, this contribution presents different effect size indices used for the one-group pre-post design--in spite of fundamental doubts which exist in relation to that design due to its limited internal validity. The numerator concerning all effect size indices of the one-group pre-post design is defined as difference between the pre- and post-treatment means, whereas there are different possibilities and recommendations with regard to the denominator and hence the standard deviation that serves as the basis for standardizing the difference of the means. Used above all are standardization oriented towards the standard deviation of the pre-treatment scores, standardization oriented towards the pooled standard deviation of the pre- and post-treatment scores, and standardization oriented towards the standard deviation of the pre-post differences. Two examples are given to demonstrate that the different modes of calculating effect size indices in the one-group pre-post design may lead to very different outcome patterns. Additionally, it is pointed out that effect sizes from the uncontrolled one-group pre-post design generally tend to be higher than effect sizes from studies conducted with control groups. Finally, the pros and cons of the different effect size indices are discussed and recommendations are given.
Weigel, Stefan; Peters, Ruud; Loeschner, Katrin; Grombe, Ringo; Linsinger, Thomas P J
2017-08-01
Single-particle inductively coupled plasma mass spectrometry (sp-ICP-MS) promises fast and selective determination of nanoparticle size and number concentrations. While several studies on practical applications have been published, data on formal, especially interlaboratory validation of sp-ICP-MS, is sparse. An international interlaboratory study was organized to determine repeatability and reproducibility of the determination of the median particle size and particle number concentration of Ag nanoparticles (AgNPs) in chicken meat. Ten laboratories from the European Union, the USA, and Canada determined particle size and particle number concentration of two chicken meat homogenates spiked with polyvinylpyrrolidone (PVP)-stabilized AgNPs. For the determination of the median particle diameter, repeatability standard deviations of 2 and 5% were determined, and reproducibility standard deviations were 15 and 25%, respectively. The equivalent median diameter itself was approximately 60% larger than the diameter of the particles in the spiking solution. Determination of the particle number concentration was significantly less precise, with repeatability standard deviations of 7 and 18% and reproducibility standard deviations of 70 and 90%.
DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M
2010-03-29
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image as a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means.
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. 
We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
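The quartile-based estimation described above can be sketched as follows, using the mean and standard-deviation approximations commonly cited from Wan et al. (2014) for the scenario where the first quartile, median, third quartile, and sample size are reported; the exact constants are an assumption to verify against the paper.

```python
# Hedged sketch of quartile-based estimation for meta-analysis: given q1,
# median m, q3, and sample size n of a trial, approximate the sample mean
# and standard deviation. The sample-size correction inside inv_cdf follows
# the form commonly cited from Wan et al. (2014).
from statistics import NormalDist

def mean_sd_from_quartiles(q1, m, q3, n):
    est_mean = (q1 + m + q3) / 3
    # Expected half-width of the IQR (in SD units) under normality,
    # with a finite-sample correction incorporating n.
    z = NormalDist().inv_cdf((0.75 * n - 0.125) / (n + 0.25))
    est_sd = (q3 - q1) / (2 * z)
    return est_mean, est_sd

m_hat, sd_hat = mean_sd_from_quartiles(10.0, 12.0, 14.0, 50)
print(round(m_hat, 2), round(sd_hat, 2))  # 12.0 and roughly IQR/1.35
```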
Performance of digital RGB reflectance color extraction for plaque lesion
NASA Astrophysics Data System (ADS)
Hashim, Hadzli; Taib, Mohd Nasir; Jailani, Rozita; Sulaiman, Saadiah; Baba, Roshidah
2005-01-01
Several clinical psoriasis lesion groups have been studied for digital RGB color feature extraction. Previous work used sample sizes that included all outliers lying beyond the standard deviation factors from the peak histograms. This paper describes the statistical performance of the RGB model with and without these outliers removed. Plaque lesions are compared with other types of psoriasis. The statistical tests are compared across three sample sizes: the original 90 samples, a first reduction removing outliers beyond two standard deviations (2SD), and a second reduction removing outliers beyond one standard deviation (1SD). Quantification of image data through both the normal/direct and the differential variants of the conventional reflectance method is considered. Performance is assessed from error plots with 95% confidence intervals and from inference T-tests. The test outcomes show that the B component of the conventional differential method can be used to distinctively classify plaque from the other psoriasis groups, consistent with the error-plot findings, with an improvement in p-value greater than 0.5.
Feingold, Alan
2009-01-01
The use of growth-modeling analysis (GMA)--including Hierarchical Linear Models, Latent Growth Models, and General Estimating Equations--to evaluate interventions in psychology, psychiatry, and prevention science has grown rapidly over the last decade. However, an effect size associated with the difference between the trajectories of the intervention and control groups that captures the treatment effect is rarely reported. This article first reviews two classes of formulas for effect sizes associated with classical repeated-measures designs that use the standard deviation of either change scores or raw scores for the denominator. It then broadens the scope to subsume GMA, and demonstrates that the independent groups, within-subjects, pretest-posttest control-group, and GMA designs all estimate the same effect size when the standard deviation of raw scores is uniformly used. Finally, it is shown that the correct effect size for treatment efficacy in GMA--the difference between the estimated means of the two groups at end of study (determined from the coefficient for the slope difference and length of study) divided by the baseline standard deviation--is not reported in clinical trials. PMID:19271847
Static Scene Statistical Non-Uniformity Correction
2015-03-01
Acronyms from the report: NUC, Non-Uniformity Correction; RMSE, Root Mean Squared Error; RSD, Relative Standard Deviation; S3NUC, Static Scene Statistical Non-Uniformity Correction. The Relative Standard Deviation (RSD) normalizes the standard deviation, σ, to the mean estimated value, µ, using the equation RSD = (σ/µ) × 100. The RSD plot of the gain estimates is shown in Figure 4.1(b); it shows that after a sample size of approximately 10, the different photocount values and the inclusion
Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.
2008-05-01
We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
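The scaling relation σ(R) ∝ S^(−β) described above can be estimated by ordinary least squares on logarithms. The sketch below uses synthetic data generated from an exact power law (an assumption for illustration; the paper's exponents β ≈ 0.14 and β ≈ −0.08 come from real export, wage, and payroll data).

```python
# Minimal sketch: estimate beta in sigma(R) ~ S**(-beta) by OLS on
# log sigma versus log S. Synthetic data, for illustration only.
import math

def fit_power_law(sizes, sigmas):
    """Return (beta, intercept) for sigma = c * S**(-beta)."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(v) for v in sigmas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope, my - slope * mx   # beta = -slope

# Exact power law sigma = 2 * S**-0.14; the fit should recover beta = 0.14.
sizes = [10.0, 100.0, 1000.0, 10000.0]
sigmas = [2 * s ** -0.14 for s in sizes]
beta, _ = fit_power_law(sizes, sigmas)
print(round(beta, 3))  # 0.14
```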
NASA Astrophysics Data System (ADS)
Huang, Dong; Campos, Edwin; Liu, Yangang
2014-09-01
Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the lognormal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.
A log-normal distribution model for the molecular weight of aquatic fulvic acids
Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.
2000-01-01
The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a log-normal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured Mn and Mw and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.
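The connection between the measured averages and the log-normal parameters can be sketched as follows. If the number distribution of molecular weight M is lognormal with natural-log parameters (μ, σ), then Mn = exp(μ + σ²/2) and Mw = exp(μ + 3σ²/2), so σ² = ln(Mw/Mn). The conversion to log10 units and the example Mn, Mw values are assumptions chosen to match the magnitudes quoted in the abstract.

```python
# Hedged sketch: recover log10-scale mean and standard deviation of a
# lognormal molecular-weight distribution from Mn and Mw, using
#   Mn = exp(mu + sigma^2/2),  Mw = exp(mu + 3*sigma^2/2)
# for a lognormal number distribution. Example inputs are illustrative.
import math

def lognormal_params_log10(Mn, Mw):
    sigma2 = math.log(Mw / Mn)        # natural-log variance: ln(Mw/Mn)
    mu = math.log(Mn) - sigma2 / 2    # invert Mn = exp(mu + sigma^2/2)
    ln10 = math.log(10)
    return mu / ln10, math.sqrt(sigma2) / ln10   # report in log10 units

mean10, sd10 = lognormal_params_log10(Mn=800.0, Mw=1600.0)
print(round(mean10, 2), round(sd10, 2))  # 2.75 0.36, inside the quoted ranges
```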
Closed-form confidence intervals for functions of the normal mean and standard deviation.
Donner, Allan; Zou, G Y
2012-08-01
Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
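The general idea of recovering variance estimates from separately computed confidence limits (the MOVER approach) can be sketched for one of the listed functions, a normal percentile μ + z_p·σ. The construction below is a standard MOVER form and uses a normal critical value in place of t and a Wilson-Hilferty chi-square approximation; it is not necessarily identical to the paper's formulas.

```python
# Hedged MOVER-style sketch: closed-form CI for the normal percentile
# mu + z_p * sigma, combining a z-based CI for the mean with a
# chi-square-based CI for the standard deviation.
import math
from statistics import NormalDist, mean, stdev

def chi2_inv(p, df):
    """Wilson-Hilferty approximation to the chi-square quantile."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * math.sqrt(2 / (9 * df))) ** 3

def percentile_ci(data, p=0.95, conf=0.95):
    n = len(data)
    xbar, s = mean(data), stdev(data)
    zp = NormalDist().inv_cdf(p)
    zc = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # normal in place of t
    # Separate confidence limits for the mean ...
    l1, u1 = xbar - zc * s / math.sqrt(n), xbar + zc * s / math.sqrt(n)
    # ... and for the standard deviation (chi-square based).
    l2 = s * math.sqrt((n - 1) / chi2_inv(1 - (1 - conf) / 2, n - 1))
    u2 = s * math.sqrt((n - 1) / chi2_inv((1 - conf) / 2, n - 1))
    theta = xbar + zp * s
    lower = theta - math.sqrt((xbar - l1) ** 2 + (zp * (s - l2)) ** 2)
    upper = theta + math.sqrt((u1 - xbar) ** 2 + (zp * (u2 - s)) ** 2)
    return lower, upper

lo, hi = percentile_ci([10, 12, 9, 11, 13, 10, 12, 11, 10, 12])
print(lo, hi)
```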
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
The repeatability of mean defect with size III and size V standard automated perimetry.
Wall, Michael; Doyle, Carrie K; Zamba, K D; Artes, Paul; Johnson, Chris A
2013-02-15
The mean defect (MD) of the visual field is a global statistical index used to monitor overall visual field change over time. Our goal was to investigate the relationship of MD and its variability for two clinically used strategies (Swedish Interactive Threshold Algorithm [SITA] standard size III and full threshold size V) in glaucoma patients and controls. We tested one eye, at random, for 46 glaucoma patients and 28 ocularly healthy subjects with Humphrey program 24-2 SITA standard for size III and full threshold for size V each five times over a 5-week period. The standard deviation of MD was regressed against the MD for the five repeated tests, and quantile regression was used to show the relationship of variability and MD. A Wilcoxon test was used to compare the standard deviations of the two testing methods following quantile regression. Both types of regression analysis showed increasing variability with increasing visual field damage. Quantile regression showed modestly smaller MD confidence limits. There was a 15% decrease in SD with size V in glaucoma patients (P = 0.10) and a 12% decrease in ocularly healthy subjects (P = 0.08). The repeatability of size V MD appears to be slightly better than size III SITA testing. When using MD to determine visual field progression, a change of 1.5 to 4 decibels (dB) is needed to be outside the normal 95% confidence limits, depending on the size of the stimulus and the amount of visual field damage.
ERIC Educational Resources Information Center
Algina, James; Keselman, H. J.; Penfield, Randall D.
2005-01-01
The authors argue that a robust version of Cohen's effect size constructed by replacing population means with 20% trimmed means and the population standard deviation with the square root of a 20% Winsorized variance is a better measure of population separation than is Cohen's effect size. The authors investigated coverage probability for…
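A sketch of the robust effect size described above, on invented data (not the authors' code). The 0.642 rescaling constant commonly used with 20% trimming, which restores equality with Cohen's d under normality, is assumed here:

```python
import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

def robust_d(x, y, trim=0.20):
    # 20% trimmed means replace the group means
    tm1, tm2 = stats.trim_mean(x, trim), stats.trim_mean(y, trim)
    # Pooled 20% Winsorized variance replaces the pooled variance
    wx, wy = winsorize(x, (trim, trim)), winsorize(y, (trim, trim))
    n1, n2 = len(x), len(y)
    s2 = ((n1 - 1) * wx.var(ddof=1) + (n2 - 1) * wy.var(ddof=1)) / (n1 + n2 - 2)
    # 0.642 rescales so the estimate matches Cohen's d for normal data
    return 0.642 * (tm1 - tm2) / np.sqrt(s2)

rng = np.random.default_rng(2)
a = rng.normal(0.5, 1.0, 200)
b = rng.normal(0.0, 1.0, 200)
print(robust_d(a, b))   # close to the population d of 0.5
```

Unlike Cohen's d, this estimator is insensitive to a few extreme scores in either group.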
NASA Technical Reports Server (NTRS)
Smith, Wayne Farrior
1973-01-01
The effect of finite source size on the power statistics in a reverberant room for pure tone excitation was investigated. Theoretical results indicate that the standard deviation of low frequency, pure tone finite sources is always less than that predicted by point source theory and considerably less when the source dimension approaches one-half an acoustic wavelength or greater. A supporting experimental study was conducted utilizing an eight inch loudspeaker and a 30 inch loudspeaker at eleven source positions. The resulting standard deviation of sound power output of the smaller speaker is in excellent agreement with both the derived finite source theory and existing point source theory, if the theoretical data is adjusted to account for experimental incomplete spatial averaging. However, the standard deviation of sound power output of the larger speaker is measurably lower than point source theory indicates, but is in good agreement with the finite source theory.
The geometry of proliferating dicot cells.
Korn, R W
2001-02-01
The distributions of cell size and cell cycle duration were studied in two-dimensional expanding plant tissues. Plastic imprints of the leaf epidermis of three dicot plants, jade (Crassula argentae), impatiens (Impatiens wallerana), and the common begonia (Begonia semperflorens), were made and cell outlines analysed. The average, standard deviation, and coefficient of variation (CV = 100 × standard deviation/average) of cell size were determined; the CV of mother cells was less than that of daughter cells, and both were less than the CV for all cells. An equation was devised as a simple description of the probability distribution of sizes for all cells of a tissue. Cell cycle durations, measured in arbitrary time units, were determined by reconstructing the initial and final sizes of cells, and they collectively give the expected asymmetric bell-shaped probability distribution. Given the features of unequal cell division (an average difference of 11.6% in the size of daughter cells) and the size variation of dividing cells, it appears that the range of cell size is more critically regulated than the size of a cell at any particular time.
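The coefficient of variation defined in the abstract is straightforward to compute; the cell-size values below are invented solely to illustrate the reported ordering (mother cells less variable than daughter cells):

```python
import numpy as np

def cv(sizes):
    """Coefficient of variation, in percent: 100 * SD / mean."""
    sizes = np.asarray(sizes, dtype=float)
    return 100.0 * sizes.std(ddof=1) / sizes.mean()

# Illustrative values (not the paper's measurements), in arbitrary area units.
mother = [100, 110, 95, 105, 102]
daughter = [45, 60, 52, 38, 65]
print(f"mother CV = {cv(mother):.1f}%, daughter CV = {cv(daughter):.1f}%")
```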
Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A
2015-01-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. 
Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
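The lognormal reference model preferred in the study can be fitted with standard tools. Here is a minimal sketch on synthetic diameters; the 27.5 nm median and the geometric standard deviation are assumed values for illustration, not the RM8012 data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic stand-in for nanoparticle data: area-equivalent diameters drawn
# from a lognormal with median ~27.5 nm and a narrow geometric SD.
diam = rng.lognormal(mean=np.log(27.5), sigma=0.09, size=500)

# Fit the lognormal reference model (location fixed at 0, as for sizes).
shape, loc, scale = stats.lognorm.fit(diam, floc=0)
print(f"fitted median = {scale:.1f} nm, geometric SD = {np.exp(shape):.3f}")
```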
The truly remarkable universality of half a standard deviation: confirmation through another look.
Norman, Geoffrey R; Sloan, Jeff A; Wyrwich, Kathleen W
2004-10-01
In this issue of Expert Review of Pharmacoeconomics and Outcomes Research, Farivar, Liu, and Hays present their findings in 'Another look at the half standard deviation estimate of the minimally important difference in health-related quality of life scores' (hereafter referred to as 'Another look'). These researchers have re-examined the May 2003 Medical Care article 'Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation' (hereafter referred to as 'Remarkable') in the hope of supporting their hypothesis that the minimally important difference in health-related quality of life measures is undoubtedly closer to 0.3 standard deviations than to 0.5. Nonetheless, despite their extensive wranglings (the exclusion of many articles that we included in our review, the inclusion of articles that we did not, and the recalculation of effect sizes using the absolute value of the mean differences), in our opinion the results of the 'Another look' article confirm the findings of the 'Remarkable' paper.
A meta-analysis of instructional systems applied in science teaching
NASA Astrophysics Data System (ADS)
Willett, John B.; Yamashita, June J. M.; Anderson, Ronald D.
This article is a report of a meta-analysis on the question: What are the effects of different instructional systems used in science teaching? The studies utilized in this meta-analysis were identified by a process that included a systematic screening of all dissertations completed in the field of science education since 1950, an ERIC search of the literature, a systematic screening of selected research journals, and the standard procedure of identifying potentially relevant studies through examination of the bibliographies of the studies reviewed. In all, the 130 studies coded gave rise to 341 effect sizes. The mean effect size produced over all systems was 0.10 with a standard deviation of 0.41, indicating that, on the average, an innovative teaching system in this sample produced one-tenth of a standard deviation better performance than traditional science teaching. Particular kinds of teaching systems, however, produced results that varied from this overall result. Mean effect sizes were also computed by year of publication, form of publication, grade level, and subject matter.
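The effect sizes pooled in such a meta-analysis are standardized mean differences; a minimal sketch of the per-study computation, with hypothetical study numbers chosen so the result matches the reported overall mean of 0.10:

```python
import numpy as np

def effect_size(mean_exp, mean_ctrl, sd_exp, sd_ctrl, n_exp, n_ctrl):
    """Standardized mean difference, pooled-SD form (Cohen's d),
    as typically computed per study before pooling in a meta-analysis."""
    sp = np.sqrt(((n_exp - 1) * sd_exp**2 + (n_ctrl - 1) * sd_ctrl**2)
                 / (n_exp + n_ctrl - 2))
    return (mean_exp - mean_ctrl) / sp

# Hypothetical study: innovative system scores 52, traditional 50, SD 20.
print(effect_size(52, 50, 20, 20, 50, 50))  # one-tenth of a standard deviation
```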
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool for studying them, by simulating a large number of copies of the system which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limits of these estimators, which, as shown for the contact process, provides a significant improvement over the standard large deviation function estimators.
NASA Technical Reports Server (NTRS)
Yuter, Sandra E.; Kingsmill, David E.; Nance, Louisa B.; Loeffler-Mang, Martin
2006-01-01
Ground-based measurements of particle size and fall speed distributions using a Particle Size and Velocity (PARSIVEL) disdrometer are compared among samples obtained in mixed precipitation (rain and wet snow) and rain in the Oregon Cascade Mountains and in dry snow in the Rocky Mountains of Colorado. Coexisting rain and snow particles are distinguished using a classification method based on their size and fall speed properties. The bimodal distribution of the particles' joint fall speed-size characteristics at air temperatures from 0.5 to 0 C suggests that wet-snow particles quickly make a transition to rain once melting has progressed sufficiently. As air temperatures increase to 1.5 C, the reduction in the number of very large aggregates with a diameter > 10 mm coincides with the appearance of rain particles larger than 6 mm. In this setting, very large raindrops appear to be the result of aggregates melting with minimal breakup rather than formation by coalescence. In contrast to dry snow and rain, the fall speed for wet snow has a much weaker correlation between increasing size and increasing fall speed. Wet snow has a larger standard deviation of fall speed (120%-230% relative to dry snow) for a given particle size. The average fall speed for observed wet-snow particles with a diameter greater than or equal to 2.4 mm is 2 m/s with a standard deviation of 0.8 m/s. The large standard deviation is likely related to the coexistence of particles of similar physical size with different percentages of melting. These results suggest that different particle sizes are not required for aggregation, since wet-snow particles of the same size can have different fall speeds. Given the large standard deviation of fall speeds in wet snow, the collision efficiency for wet snow is likely larger than that of dry snow.
For particle sizes between 1 and 10 mm in diameter within mixed precipitation, rain constituted 1% of the particles by volume within the isothermal layer at 0 C and 4% of the particles by volume for the region just below the isothermal layer, where air temperatures rise from 0 to 0.5 C. As air temperatures increased above 0.5 C, the relative proportions of rain versus snow particles shift dramatically and raindrops become dominant. The value of 0.5 C for the sharp transition in volume fraction from snow to rain is slightly lower than the range from 1.1 to 1.7 C often used in hydrological models.
[Quantitative study of diesel/CNG buses exhaust particulate size distribution in a road tunnel].
Zhu, Chun; Zhang, Xu
2010-10-01
Vehicle emission is one of the main sources of fine/ultra-fine particles in many cities. This study first presents daily mean particle size distributions of a mixed diesel/CNG bus traffic flow, from four days of consecutive real-world measurement in an Australian road tunnel. Emission factors (EFs) and particle size distributions of diesel buses and CNG buses are obtained by multiple linear regression (MLR) methods; the particle distributions of diesel buses and CNG buses appear as a single accumulation mode and a single nuclei mode, respectively. Particle size distributions of the mixed traffic flow are decomposed into two log-normal fitting curves for each 30-min interval mean scan; the goodness of fit between the combined fitting curves and the corresponding in-situ scans, for a total of 90 fitted scans, ranges from 0.972 to 0.998. Finally, the particle size distributions of diesel buses and CNG buses are quantified with statistical whisker-box charts. For the log-normal particle size distribution of diesel buses, accumulation mode diameters are 74.5-86.5 nm and geometric standard deviations are 1.88-2.05. For the log-normal particle size distribution of CNG buses, nuclei-mode diameters are 19.9-22.9 nm and geometric standard deviations are 1.27-1.30.
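The decomposition step can be sketched as fitting a sum of two lognormal modes to a size spectrum. The data below are synthetic, with mode parameters merely chosen inside the ranges reported above:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: decompose a mixed traffic-flow size spectrum into two lognormal modes
# (nuclei mode ~ CNG buses, accumulation mode ~ diesel buses). Synthetic data.
def lognormal_mode(d, n_total, d_g, sigma_g):
    """dN/dlogD for one lognormal mode: geometric mean d_g, geometric SD sigma_g."""
    return (n_total / (np.sqrt(2 * np.pi) * np.log(sigma_g))
            * np.exp(-(np.log(d / d_g))**2 / (2 * np.log(sigma_g)**2)))

def two_modes(d, n1, d1, s1, n2, d2, s2):
    return lognormal_mode(d, n1, d1, s1) + lognormal_mode(d, n2, d2, s2)

d = np.logspace(1, 2.6, 60)                       # 10 to ~400 nm
spectrum = two_modes(d, 5e3, 21.0, 1.28, 2e3, 80.0, 1.95)   # assumed "truth"
p0 = [4e3, 20, 1.3, 1e3, 70, 1.8]                 # starting guesses
popt, _ = curve_fit(two_modes, d, spectrum, p0=p0)
print(f"nuclei mode: {popt[1]:.1f} nm (GSD {popt[2]:.2f}); "
      f"accumulation mode: {popt[4]:.1f} nm (GSD {popt[5]:.2f})")
```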
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calderon, E; Siergiej, D
2014-06-01
Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to show large deviations as field sizes decrease, and no standard exists to resolve this difference in measurement. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source to surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the published Monte Carlo correction factors as a function of field size was derived for each detector. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we decreased the deviation between the two detectors from up to 14.8% to within 3.4% agreement.
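The correction step itself is simple arithmetic. The sketch below uses hypothetical output factors and hypothetical correction factors (not the paper's measurements or the published Monte Carlo values) to show how multiplying each detector's readings by its factors shrinks the inter-detector deviation:

```python
import numpy as np

# Hypothetical small-field data: diode (EDGE-like) over-responds, ion chamber
# (A16-like) under-responds at the smallest cones. All values are invented.
cone_mm = np.array([5, 10, 15, 20])
of_edge = np.array([0.68, 0.82, 0.88, 0.91])
of_a16 = np.array([0.59, 0.79, 0.87, 0.905])
k_edge = np.array([0.94, 0.98, 0.995, 1.0])    # assumed correction factors
k_a16 = np.array([1.07, 1.01, 1.0, 1.0])

dev_before = 100 * np.abs(of_edge - of_a16) / of_a16
dev_after = 100 * np.abs(of_edge * k_edge - of_a16 * k_a16) / (of_a16 * k_a16)
print("deviation before (%):", np.round(dev_before, 1))
print("deviation after  (%):", np.round(dev_after, 1))
```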
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Y.; Cheng, T. -L.; Wen, Y. H.
2017-07-05
Microstructure evolution driven by thermal coarsening is an important factor in the loss of oxygen reduction reaction rates in SOFC cathodes. In this work, the effect of an initial microstructure on the microstructure evolution in SOFC cathodes is investigated using a recently developed phase field model. Specifically, we tune the phase fraction, the average grain size, the standard deviation of the grain size and the grain shape in the initial microstructure, and explore their effect on the evolution of the grain size, the density of triple phase boundary (TPB), the specific surface area (SSA) and the effective conductivity in LSM-YSZ cathodes. It is found that the degradation rate of TPB density and SSA of LSM is lower with a smaller LSM phase fraction (with constant porosity assumed) and a greater average grain size, while the degradation rate of effective conductivity can also be tuned by adjusting the standard deviation of the grain size distribution and the grain aspect ratio. The implication of this study for the design of an optimal initial microstructure of SOFC cathodes is discussed.
Investigations of internal noise levels for different target sizes, contrasts, and noise structures
NASA Astrophysics Data System (ADS)
Han, Minah; Choi, Shinkook; Baek, Jongduk
2014-03-01
To describe internal noise levels for different target sizes, contrasts, and noise structures, Gaussian targets with four different sizes (i.e., standard deviations of 2, 4, 6, and 8) and three different noise structures (i.e., white, low-pass, and high-pass) were generated. The generated noise images were scaled to have a standard deviation of 0.15. For each noise type, target contrasts were adjusted to yield the same detectability for the NPW observer, and the detectability of the CHO was calculated accordingly. For the human observer study, three trained observers performed 2AFC detection tasks, and the proportion correct, Pc, was calculated for each task. By adding an appropriate internal noise level to each numerical observer (i.e., NPW and CHO), the detectability of the human observer was matched with that of the numerical observers. Even though target contrasts were adjusted to equate detectability for the NPW observer, the detectability of the human observer decreased as target size increased. The internal noise level varies with target size, contrast, and noise structure, demonstrating that different internal noise levels should be used in numerical observers to predict the detection performance of human observers.
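The observer-matching step can be sketched as follows, under a common simplifying assumption (not necessarily the authors' exact procedure): in 2AFC, Pc = Phi(d'/sqrt(2)), and internal noise of relative standard deviation k scales the model observer's d' by 1/sqrt(1 + k^2), so k can be solved for from the human Pc. All numbers below are hypothetical.

```python
import numpy as np
from scipy import stats

def pc_from_dprime(dprime):
    """2AFC proportion correct from detectability d'."""
    return stats.norm.cdf(dprime / np.sqrt(2))

def dprime_with_internal_noise(dprime_model, k):
    """d' of a model observer after adding internal noise of relative SD k."""
    return dprime_model / np.sqrt(1 + k**2)

d_model = 2.0                    # model observer without internal noise (assumed)
pc_human = 0.80                  # measured human proportion correct (assumed)
# Invert: d'_human = sqrt(2) * z(Pc), then solve d_model/sqrt(1+k^2) = d'_human.
d_human = np.sqrt(2) * stats.norm.ppf(pc_human)
k = np.sqrt((d_model / d_human)**2 - 1)
print(f"required internal noise level k = {k:.2f}")
```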
QED is not endangered by the proton's size
NASA Astrophysics Data System (ADS)
De Rújula, A.
2010-10-01
Pohl et al. have reported a very precise measurement of the Lamb shift in muonic hydrogen (Pohl et al., 2010) [1], from which they infer the radius characterizing the proton's charge distribution. The result is 5 standard deviations away from that of the CODATA compilation of physical constants. This has been interpreted (Pohl et al., 2010) [1] as possibly requiring a 4.9 standard-deviation modification of the Rydberg constant, to a new value that would be precise to 3.3 parts in 10^13, as well as putative evidence for physics beyond the standard model (Flowers, 2010) [2]. I demonstrate that these options are unsubstantiated.
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating the mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC method performs well; however, the Wan et al. method is best for estimating the standard deviation under the normal distribution. In the estimation of the mean, our ABC method is best regardless of the assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can also be applied using other reported summary statistics, such as the posterior mean and 95% credible interval when Bayesian analysis has been employed.
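A minimal ABC rejection sketch of the idea (not the authors' implementation): propose (mean, SD) pairs from a prior, simulate data of the reported sample size, and keep the proposals whose simulated median, minimum, and maximum come closest to the published values. Normal data and the prior ranges below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def abc_estimate(median, minimum, maximum, n, n_sims=20_000, keep=0.01):
    """Estimate (mean, SD) from reported median/min/max via ABC rejection,
    assuming normally distributed data of known sample size n."""
    mus = rng.uniform(minimum, maximum, n_sims)            # prior on mu
    sigmas = rng.uniform(1e-3, maximum - minimum, n_sims)  # prior on sigma
    dist = np.empty(n_sims)
    for i in range(n_sims):
        sim = rng.normal(mus[i], sigmas[i], n)
        s = np.array([np.median(sim), sim.min(), sim.max()])
        dist[i] = np.abs(s - [median, minimum, maximum]).sum()
    accept = dist <= np.quantile(dist, keep)               # keep the closest 1%
    return mus[accept].mean(), sigmas[accept].mean()

mu_hat, sd_hat = abc_estimate(median=50, minimum=30, maximum=70, n=50)
print(f"estimated mean ~ {mu_hat:.1f}, estimated SD ~ {sd_hat:.1f}")
```

For n = 50 normal observations, the range spans roughly 4.5 standard deviations, so an SD estimate near (70 - 30)/4.5 is the expected behaviour.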
First among Others? Cohen's "d" vs. Alternative Standardized Mean Group Difference Measures
ERIC Educational Resources Information Center
Cahan, Sorel; Gamliel, Eyal
2011-01-01
Standardized effect size measures typically employed in behavioral and social sciences research in the multi-group case (e.g., [eta][superscript 2], f[superscript 2]) evaluate between-group variability in terms of either total or within-group variability, such as variance or standard deviation--that is, measures of dispersion about the mean. In…
NASA Astrophysics Data System (ADS)
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf
2015-05-01
All surveying instruments and their measurements suffer from errors. To refine the results, it is necessary to use procedures that restrict the influence of instrument errors on the measured values, or to apply numerical corrections. In precise engineering surveying applications, the accuracy of distances, usually realized over relatively short ranges, is a key parameter limiting the resulting accuracy of the determined values (coordinates, etc.). To determine the size of the systematic and random errors of the measured distances, tests were carried out with the idea of suppressing the random error by averaging repeated measurements, and reducing the influence of systematic errors by identifying their absolute size on an absolute baseline realized in the geodetic laboratory of the Faculty of Civil Engineering, CTU in Prague. Sixteen concrete pillars with forced centerings were set up, and the absolute distances between the points were determined with a standard deviation of 0.02 mm using a Leica Absolute Tracker AT401. For any distance measured by the calibrated instruments (up to the length of the testing baseline, i.e., 38.6 m), the error correction of the distance meter can now be determined in two ways: first, by interpolation on the raw data; second, using a correction function derived by FFT. The quality of this calibration and correction procedure was tested experimentally on three instruments (Trimble S6 HP, Topcon GPT-7501, Trimble M3) using the Leica Absolute Tracker AT401. The correction procedure reduced the standard deviation of the measured distances significantly, to less than 0.6 mm.
For the Topcon GPT-7501, the nominal standard deviation is 2 mm; 2.8 mm was achieved without corrections and 0.55 mm after corrections. For the Trimble M3, the nominal standard deviation is 3 mm; 1.1 mm was achieved without corrections and 0.58 mm after corrections. Finally, for the Trimble S6, the nominal standard deviation is 1 mm; 1.2 mm was achieved without corrections and 0.51 mm after corrections. The proposed calibration and correction procedure is, in our opinion, well suited to increasing the accuracy of electronic distance measurement, and it allows common surveying instruments to achieve uncommonly high precision.
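The first correction approach (interpolation on the raw calibration data) can be sketched as follows. The baseline distances and meter errors below are hypothetical, not the laboratory's values:

```python
import numpy as np

# Hypothetical calibration results: reference distances (m) on the baseline
# and the distance meter's error (measured minus reference, in mm) at each.
baseline = np.array([2.4, 7.1, 12.8, 19.5, 26.3, 33.0, 38.6])
error_mm = np.array([1.8, -0.4, 2.1, -1.2, 0.9, -1.6, 1.1])

def corrected(distance_m):
    """Correct a raw distance reading by subtracting the linearly
    interpolated calibration error at that distance."""
    err = np.interp(distance_m, baseline, error_mm)   # mm
    return distance_m - err / 1000.0

print(f"corrected reading: {corrected(15.0):.6f} m for a raw 15.000000 m")
```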
Measuring Effect Sizes: The Effect of Measurement Error. Working Paper 19
ERIC Educational Resources Information Center
Boyd, Donald; Grossman, Pamela; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James
2008-01-01
Value-added models in education research allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. Researchers typically quantify the impacts of such interventions in terms of "effect sizes", i.e., the estimated effect of a one standard deviation change in the…
Useful Effect Size Interpretations for Single Case Research
ERIC Educational Resources Information Center
Parker, Richard I.; Hagan-Burke, Shanna
2007-01-01
An obstacle to broader acceptability of effect sizes in single case research is their lack of intuitive and useful interpretations. Interpreting Cohen's d as "standard deviation units difference" and R[superscript 2] as "percent of variance accounted for" do not resound with most visual analysts. In fact, the only comparative analysis widely…
Improved Data Reporting in "RQES": From Volumes 49, 59, to 84
ERIC Educational Resources Information Center
Thomas, Jerry R.
2014-01-01
This commentary provides a review of changes in data reporting in "Research Quarterly for Exercise and Sport" from Volumes 49 and 59 to 84. Improvements were noted in that all articles reported means, standard deviations, and sample sizes, while most (87%) articles reported an estimate of effect size ("ES"). Additional…
DOE Office of Scientific and Technical Information (OSTI.GOV)
LIU, B; Zhu, T
Purpose: The dose in the buildup region of a photon beam is usually determined by the transport of secondary electrons set in motion by the primary beam and of contaminating electrons from the accelerator head. This can be quantified by the electron disequilibrium factor, E, defined as the ratio between total dose and equilibrium dose (proportional to total kerma); E = 1 in regions beyond the buildup region. E can differ among accelerators of different models and/or manufacturers of the same machine. This study compares E in photon beams from different machine models. Methods: Photon beam data such as fractional depth dose curves (FDD) and phantom scatter factors as a function of field size and phantom depth were measured for different linac machines. E was extrapolated from these fractional depth dose data while taking into account the inverse-square law. The ranges of secondary electrons were chosen as 3 and 6 cm for 6 and 15 MV photon beams, respectively. The field sizes range from 2×2 to 40×40 cm². Results: The comparison indicates that the standard deviations of electron contamination among different machines are about 2.4-3.3% at 5 mm depth for 6 MV and 1.2-3.9% at 1 cm depth for 15 MV for the same field size. The corresponding maximum deviations are 3.0-4.6% and 2-4% for 6 and 15 MV, respectively. Both standard and maximum deviations are independent of field size in the buildup region for 6 MV photons, and decrease slightly with increasing field size at depths up to 1 cm for 15 MV photons. Conclusion: The deviations of the electron disequilibrium factor for all studied linacs are less than 3% beyond the depth of 0.5 cm for the full range of field sizes (2-40 cm), so long as the machines are from the same manufacturer.
Schillaci, Michael A; Schillaci, Mario E
2009-02-01
The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean when using small (n < 10) or very small (n <= 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
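The quantity in question has a simple closed form under normal sampling (a sketch in the spirit of the abstract, not the authors' exact method): since the standard error of the mean is sigma/sqrt(n), the probability that the sample mean falls within f standard deviations of the true mean is 2*Phi(f*sqrt(n)) - 1.

```python
import numpy as np
from scipy import stats

def prob_within(f, n):
    """P(|sample mean - true mean| < f * sigma) for sample size n,
    assuming normal sampling: 2 * Phi(f * sqrt(n)) - 1."""
    return 2 * stats.norm.cdf(f * np.sqrt(n)) - 1

for n in (2, 5, 10):
    print(f"n = {n:2d}: P(within 0.5 SD of true mean) = {prob_within(0.5, n):.3f}")
```

Even n = 5 gives a sample mean within half a standard deviation of the true mean roughly three times out of four, which is consistent with the abstract's conclusion that small samples can be meaningful.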
Norris, Darren; Fortin, Marie-Josée; Magnusson, William E.
2014-01-01
Background Ecological monitoring and sampling optima are context and location specific. Novel applications (e.g. biodiversity monitoring for environmental service payments) call for renewed efforts to establish reliable and robust monitoring in biodiversity-rich areas. As there is little information on the distribution of biodiversity across the Amazon basin, we used altitude as a proxy for biological variables to test whether meso-scale variation can be adequately represented by different sample sizes in a standardized, regular-coverage sampling arrangement. Methodology/Principal Findings We used Shuttle Radar Topography Mission digital elevation values to evaluate whether the regular sampling arrangement in standard RAPELD (rapid assessments (“RAP”) over the long term (LTER [“PELD” in Portuguese])) grids captured patterns in meso-scale spatial variation. The adequacy of different sample sizes (n = 4 to 120) was examined within 32,325 km2/3,232,500 ha (1293 sample areas of 25 km2 each) distributed across the legal Brazilian Amazon. Kolmogorov-Smirnov tests, correlation, and root-mean-square error were used to measure sample representativeness, similarity, and accuracy, respectively. Trends and thresholds of these responses in relation to sample size and standard deviation were modeled using generalized additive models and conditional inference trees, respectively. We found that a regular arrangement of 30 samples captured the distribution of altitude values within these areas. Sample size was more important than sample standard deviation for representativeness and similarity. In contrast, accuracy was more strongly influenced by sample standard deviation. Additionally, analysis of spatially interpolated data showed that spatial patterns in altitude were also recovered within areas using a regular arrangement of 30 samples.
Conclusions/Significance Our findings show that the logistically feasible sample used in the RAPELD system successfully recovers meso-scale altitudinal patterns. This suggests that the sample size and regular arrangement may also be generally appropriate for quantifying spatial patterns in biodiversity at similar scales across at least 90% (≈5 million km2) of the Brazilian Amazon. PMID:25170894
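The representativeness question can be sketched with a two-sample Kolmogorov-Smirnov comparison on synthetic "altitudes" (not the SRTM data): the average KS distance between a size-n sample and the full population shrinks as n grows, and is already small by n = 30.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic skewed, terrain-like population of altitude values.
population = rng.gamma(shape=4, scale=50, size=20_000)

def mean_ks_distance(n, trials=100):
    """Average two-sample KS distance between a random size-n sample and the
    full population: smaller means the sample is more representative."""
    return float(np.mean([
        stats.ks_2samp(rng.choice(population, n, replace=False),
                       population).statistic
        for _ in range(trials)]))

ks_by_n = {n: mean_ks_distance(n) for n in (4, 30, 120)}
for n, d in ks_by_n.items():
    print(f"n = {n:3d}: mean KS distance = {d:.3f}")
```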
The production of calibration specimens for impact testing of subsize Charpy specimens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, D.J.; Corwin, W.R.; Owings, T.D.
1994-09-01
Calibration specimens have been manufactured for checking the performance of a pendulum impact testing machine configured for testing subsize specimens, both half-size (5.0 × 5.0 × 25.4 mm) and third-size (3.33 × 3.33 × 25.4 mm). Specimens were fabricated from quenched-and-tempered 4340 steel heat treated to produce different microstructures that would result in either high or low absorbed energy levels on testing. A large group of both half- and third-size specimens was tested at -40°C. The results of the tests were analyzed for average value and standard deviation, and these values were used to establish calibration limits for the Charpy impact machine when testing subsize specimens. The average values plus or minus two standard deviations were set as the acceptable limits for the average of five tests for calibration of the impact testing machine.
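The mean-plus-or-minus-two-standard-deviations rule can be sketched directly; the absorbed-energy values below are invented, not the report's data:

```python
import numpy as np

# Hypothetical qualification data (absorbed energy, J) for one specimen lot.
qualification = np.array([9.1, 8.7, 9.4, 8.9, 9.0, 9.3, 8.6, 9.2])
mean, sd = qualification.mean(), qualification.std(ddof=1)
low, high = mean - 2 * sd, mean + 2 * sd
print(f"acceptable range for the 5-test average: {low:.2f} to {high:.2f} J")

# A verification set of five tests: the machine passes if its average
# falls inside the calibration limits.
verification = np.array([9.0, 9.2, 8.8, 9.1, 8.9])
print("machine in calibration:", low <= verification.mean() <= high)
```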
A passive autofocus system by using standard deviation of the image on a liquid lens
NASA Astrophysics Data System (ADS)
Rasti, Pejman; Kesküla, Arko; Haus, Henry; Schlaak, Helmut F.; Anbarjafari, Gholamreza; Aabloo, Alvo; Kiefer, Rudolf
2015-04-01
Today most devices, such as cell phones, tablets, and medical instruments, include a small camera, and a micro lens is required to reduce the size of these devices. In this paper an autofocus system is used to find the best position of a liquid lens without any active components such as ultrasonic or infrared sensors. Specifically, a passive autofocus system is proposed that uses the standard deviation of the image formed by a liquid lens consisting of a Dielectric Elastomer Actuator (DEA) membrane between oil and water.
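The principle behind such a passive system can be sketched in a few lines: sweep the lens through its positions and pick the one where the image's standard deviation, used as a sharpness proxy, peaks. Defocus is emulated here by a crude box blur; everything below is illustrative, not the paper's hardware or algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)

def blur(img, width):
    """Crude box blur of half-width `width` (a stand-in for defocus)."""
    if width == 0:
        return img
    k = 2 * width + 1
    pad = np.pad(img, width, mode="edge")
    out = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(k) for j in range(k)) / k**2
    return out

scene = rng.random((64, 64))                    # high-contrast test scene
positions = [4, 3, 2, 1, 0, 1, 2, 3]            # defocus at each lens position
sharpness = [blur(scene, w).std() for w in positions]
best = int(np.argmax(sharpness))                # focus where the SD peaks
print(f"best focus at lens position {best} (defocus {positions[best]})")
```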
Family structure and childhood anthropometry in Saint Paul, Minnesota in 1918
Warren, John Robert
2017-01-01
Concern with childhood nutrition prompted numerous surveys of children’s growth in the United States after 1870. The Children’s Bureau’s 1918 “Weighing and Measuring Test” measured two million children to produce the first official American growth norms. Individual data for 14,000 children survive from the Saint Paul, Minnesota survey, whose stature closely approximated national norms. As well as anthropometry, the survey recorded exact ages, street address, and full name. These variables allow linkage to the 1920 census to obtain demographic and socioeconomic information. We matched 72% of children to census families, creating a sample of nearly 10,000 children. Children in the entire survey (linked set) averaged 0.74 (0.72) standard deviations below modern WHO height-for-age standards, and 0.48 (0.46) standard deviations below modern weight-for-age norms. Sibship size strongly influenced height-for-age, and had weaker influence on weight-for-age. Each additional child aged six or under reduced height-for-age scores by 0.07 standard deviations (95% CI: −0.03, 0.11). Teenage siblings had little effect on height-for-age. Social class effects were substantial: children of laborers averaged half a standard deviation shorter than children of professionals. Family structure and socio-economic status had compounding impacts on children’s stature. PMID:28943749
Zieliński, Tomasz G
2015-04-01
This paper proposes and discusses an approach for the design and quality inspection of morphology dedicated to sound absorbing foams, using a relatively simple technique for random generation of periodic microstructures representative of open-cell foams with spherical pores. The design is controlled by a few parameters, namely the total open porosity, the average pore size, and the standard deviation of pore size. These design parameters are set exactly and independently; however, setting the standard deviation of pore sizes requires a certain number of pores in the representative volume element (RVE), and this number is a procedure parameter. Another pore-structure parameter, the average size of the windows linking the pores, may be indirectly affected: it is only weakly controlled by the maximal pore-penetration factor and additionally depends on the porosity and pore size. The proposed methodology for testing microstructure designs of sound absorbing porous media applies multi-scale modeling, in which the transport parameters responsible for sound propagation in a porous medium are calculated from the generated RVE, in order to estimate the sound velocity and absorption of the designed material.
Quantitative assessment of joint position sense recovery in subacute stroke patients: a pilot study.
Kattenstroth, Jan-Christoph; Kalisch, Tobias; Kowalewski, Rebecca; Tegenthoff, Martin; Dinse, Hubert R
2013-11-01
To assess joint position sense performance in subacute stroke patients using a novel quantitative assessment. Proof-of-principle pilot study with a group of subacute stroke patients. Assessment at baseline and after 2 weeks of intervention. Additional data for a healthy age-matched control group. Ten subacute stroke patients (aged 65.41 years (standard deviation 2.5); 4 females; 2.3 weeks (standard deviation 0.2) post-stroke) received in-patient standard rehabilitation and repetitive electrical stimulation of the affected hand. Joint position sense was assessed based on the ability to correctly perceive the opening angles of the finger joints. Patients had to report size differences between polystyrene balls of various sizes while the balls were enclosed simultaneously by the affected and the non-affected hands. A total of 21 pairwise size comparisons was used to quantify joint position performance. After 2 weeks of therapeutic intervention a significant improvement in joint position sense performance was observed; however, the performance level was still below that of the healthy control group. The results indicate high feasibility and sensitivity of the joint position test in subacute stroke patients. Testing allowed quantification of both the deficit and the rehabilitation outcome.
Frequency of Bolton tooth-size discrepancies among orthodontic patients.
Freeman, J E; Maskeroni, A J; Lorton, L
1996-07-01
The purpose of this study was to determine the percentage of orthodontic patients who present with an interarch tooth-size discrepancy likely to affect treatment planning or results. The Bolton tooth-size discrepancies of 157 patients accepted for treatment in an orthodontic residency program were evaluated for the frequency and magnitude of deviation from Bolton's mean. Discrepancies outside 2 SD were considered potentially significant with regard to treatment planning and treatment results. Although the mean of the sample was nearly identical to Bolton's, the range and standard deviation varied considerably, with a large percentage of the orthodontic patients having discrepancies outside Bolton's 2 SD. With such a high frequency of significant discrepancies, it would seem prudent to routinely perform a tooth-size analysis and incorporate the findings into orthodontic treatment planning.
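The 2 SD screen described above can be sketched for the Bolton anterior ratio (summed mesiodistal widths of the six mandibular anterior teeth over the six maxillary anterior teeth, times 100). The norm values used here are the commonly cited ones (mean 77.2, SD 1.65), and the tooth widths are hypothetical, not the study's data:

```python
# Commonly cited Bolton anterior-ratio norms (percent); illustrative only
BOLTON_ANTERIOR_MEAN = 77.2
BOLTON_ANTERIOR_SD = 1.65

def anterior_ratio(mandibular_mm, maxillary_mm):
    """Bolton anterior ratio: mandibular sum / maxillary sum x 100."""
    return 100.0 * sum(mandibular_mm) / sum(maxillary_mm)

def outside_two_sd(ratio, mean=BOLTON_ANTERIOR_MEAN, sd=BOLTON_ANTERIOR_SD):
    """Flag a discrepancy that falls more than 2 SD from the norm mean."""
    return abs(ratio - mean) > 2 * sd

# Hypothetical canine-to-canine mesiodistal widths (mm)
mand = [5.3, 5.9, 6.9, 6.9, 5.9, 5.3]
maxi = [8.6, 6.6, 7.6, 7.6, 6.6, 8.6]
r = anterior_ratio(mand, maxi)
significant = outside_two_sd(r)
```

A patient flagged by `outside_two_sd` would be the kind of case the study argues should be caught by routine tooth-size analysis.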
Visual field changes after cataract extraction: the AGIS experience.
Koucheki, Behrooz; Nouri-Mahdavi, Kouros; Patel, Gitane; Gaasterland, Douglas; Caprioli, Joseph
2004-12-01
To test the hypothesis that cataract extraction in glaucomatous eyes improves overall sensitivity of visual function without affecting the size or depth of glaucomatous scotomas. Experimental study with no control group. One hundred fifty-eight eyes (of 140 patients) from the Advanced Glaucoma Intervention Study with at least two reliable visual fields within a year both before and after cataract surgery were included. Average mean deviation (MD), pattern standard deviation (PSD), and corrected pattern standard deviation (CPSD) were compared before and after cataract extraction. To evaluate changes in scotoma size, the number of abnormal points (P < .05) on the pattern deviation plot was compared before and after surgery. We described an index ("scotoma depth index") to investigate changes of scotoma depth after surgery. Mean values for MD, PSD, and CPSD were -13.2, 6.4, and 5.9 dB before and -11.9, 6.8, and 6.2 dB after cataract surgery (P ≤ .001 for all comparisons). The mean (± SD) number of abnormal points on the pattern deviation plot was 26.7 ± 9.4 before and 27.5 ± 9.0 after cataract surgery (P = .02). The scotoma depth index did not change after cataract extraction (-19.3 vs -19.2 dB, P = .90). Cataract extraction caused generalized improvement of the visual field, which was most marked in eyes with less advanced glaucomatous damage. Although the enlargement of scotomas was statistically significant, it was not clinically meaningful. No improvement of sensitivity was observed in the deepest part of the scotomas.
Artmann, L; Larsen, H J; Sørensen, H B; Christensen, I J; Kjaer, I
2010-06-01
To analyze the interrelationship between incisor width, deviations in the dentition, and available space in the dental arch in palatally and labially located maxillary ectopic canine cases. Size: on dental casts from 69 patients (mean age 13 years 6 months), the mesiodistal widths of each premolar, canine, and incisor were measured and compared with normal standards. Dental deviations: based on panoramic radiographs from the same patients, the dentitions were grouped as follows: Group I: normal morphology; Group IIa: deviations in the dentition within the maxillary incisors only; Group IIb: deviations in the dentition in general. Descriptive statistics for the tooth sizes and dental deviations were presented by the mean, the 95% confidence limits for the mean, and the p-value for the t-statistic. Space: space was expressed by subtracting the total tooth sizes of incisors, canines, and premolars from the length of the arch segments. Size of lateral maxillary incisor: the widths of the lateral incisors were significantly different in groups I, IIa, and IIb (p=0.016), and in cases with labially located ectopic canines they were on average 0.65 (95% CI: 0.25-1.05, p=0.0019) broader than lateral incisors in cases with palatally located ectopic canines. Space: least available space was observed in cases with labially located canines. The linear model did show a difference between palatally and labially located ectopic canines (p=0.03). Space related to deviations in the dentition: when space in the dental arch was related to dental deviations (groups I, IIa, and IIb), the cases in group IIb with palatally located canines had significantly more space compared with groups I and IIa. Two subgroups of palatally located ectopic maxillary canine cases were identified based on registration of space, incisor width, and deviations in the morphology of the dentition.
Exploring local regularities for 3D object recognition
NASA Astrophysics Data System (ADS)
Tian, Huaiwen; Qin, Shengfeng
2016-11-01
In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that combining two local regularities, L-MSDA and L-MSDSM, produces better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.
Lin, Lawrence; Pan, Yi; Hedayat, A S; Barnhart, Huiman X; Haber, Michael
2016-01-01
Total deviation index (TDI) captures a prespecified quantile of the absolute deviation of paired observations from raters, observers, methods, assays, instruments, etc. We compare the performance of TDI using nonparametric quantile regression to the TDI assuming normality (Lin, 2000). This simulation study considers three distributions (normal, Poisson, and uniform) at quantile levels of 0.8 and 0.9 for cases with and without contamination. Study endpoints include the bias of TDI estimates (compared with their respective theoretical values), the standard error of TDI estimates (compared with their true simulated standard errors), test size (compared with 0.05), and power. Nonparametric TDI using quantile regression, although it slightly underestimates and delivers slightly less power for data without contamination, works satisfactorily under all simulated cases even for moderate (say, ≥40) sample sizes. The performance of the TDI based on a quantile of 0.8 is in general superior to that of 0.9. The performances of the nonparametric and parametric TDI methods are compared with a real data example. Nonparametric TDI can be very useful when the underlying distribution of the difference is not normal, especially when it has a heavy tail.
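The two estimators being compared can be sketched minimally. The nonparametric TDI is the empirical p-quantile of the absolute paired differences; the normal-based version here solves P(|D| ≤ t) = p exactly by bisection for D ~ N(μ, σ) (Lin's 2000 paper uses a closed-form approximation instead; this is an illustration, not the authors' code):

```python
import math
import statistics

def tdi_nonparametric(diffs, p=0.9):
    """Empirical p-quantile of the absolute paired differences."""
    q = sorted(abs(d) for d in diffs)
    k = max(0, math.ceil(p * len(q)) - 1)  # simple empirical quantile index
    return q[k]

def tdi_normal(mu, sigma, p=0.9):
    """Exact p-quantile of |D| for D ~ N(mu, sigma), by bisection."""
    Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    cdf = lambda t: Phi((t - mu) / sigma) - Phi((-t - mu) / sigma)
    lo, hi = 0.0, abs(mu) + 10 * sigma
    for _ in range(100):  # bisect the folded-normal CDF
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if cdf(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

diffs = [0.1, -0.4, 0.3, 0.2, -0.1, 0.5, -0.2, 0.0, 0.4, -0.3]  # hypothetical pairs
t_np = tdi_nonparametric(diffs, p=0.8)
t_norm = tdi_normal(statistics.mean(diffs), statistics.stdev(diffs), p=0.8)
```

When the differences really are normal the two agree closely; with heavy-tailed differences the nonparametric estimate is the safer choice, as the abstract concludes.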
NASA Astrophysics Data System (ADS)
Jeknić-Dugić, Jasmina; Petrović, Igor; Arsenijević, Momir; Dugić, Miroljub
2018-05-01
We investigate the dynamical stability of a single propeller-shaped molecular cogwheel modelled as a fixed-axis rigid rotator. In realistic situations, rotation of the finite-size cogwheel is subject to an environmentally induced Brownian-motion effect, which we describe using the quantum Caldeira-Leggett master equation. Assuming initially narrow (classical-like) standard deviations for the angle and the angular momentum of the rotator, we investigate the dynamics of the first and second moments depending on the size, i.e. on the number of blades, of both the free rotator and the rotator in an external harmonic field. The larger the standard deviations, the less stable (i.e. less predictable) the rotation. We detect the absence of simple and straightforward rules for assessing the rotator’s stability. Instead, a number of size-related criteria appear whose combinations may provide optimal rules for the rotator’s dynamical stability and possibly its control. In realistic situations, the quantum-mechanical corrections, albeit individually small, may effectively prove non-negligible; they also reveal the subtlety of the transition from quantum to classical dynamics of the rotator. Regarding the latter, we detect a strong size-dependence of the transition to classical dynamics beyond the quantum decoherence process.
Characterizations of particle size distribution of the droplets exhaled by sneeze
Han, Z. Y.; Weng, W. G.; Huang, Q. Y.
2013-01-01
This work focuses on the size distribution of sneeze droplets exhaled immediately at the mouth. Twenty healthy subjects participated in the experiment, and 44 sneezes were measured using a laser particle size analyser. Two types of distributions are observed: unimodal and bimodal. For each sneeze, the droplets exhaled at different times in the sneeze duration have the same distribution characteristics, with good time stability. The volume-based size distributions of sneeze droplets can be represented by a lognormal distribution function, and the relationship between the distribution parameters and the physiological characteristics of the subjects is studied using linear regression analysis. The geometric mean of the droplet size of all the subjects is 360.1 µm for the unimodal distribution and 74.4 µm for the bimodal distribution, with geometric standard deviations of 1.5 and 1.7, respectively. For the two peaks of the bimodal distribution, the geometric mean (geometric standard deviation) is 386.2 µm (1.8) for peak 1 and 72.0 µm (1.5) for peak 2. The influences of the measurement method, the limitations of the instrument, the evaporation effects of the droplets, and the differences in biological dynamic mechanism and characteristics between sneezing and other respiratory activities are also discussed. PMID:24026469
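The geometric mean and geometric standard deviation quoted above are the standard lognormal summary statistics: exponentials of the mean and SD of the log-diameters. A minimal sketch with hypothetical droplet diameters (µm):

```python
import math
import statistics

def lognormal_summary(diameters_um):
    """Geometric mean and geometric standard deviation of a size sample."""
    logs = [math.log(d) for d in diameters_um]
    gm = math.exp(statistics.mean(logs))    # geometric mean
    gsd = math.exp(statistics.stdev(logs))  # geometric standard deviation (>= 1)
    return gm, gsd

# Hypothetical droplet diameters, nothing to do with the measured data
droplets = [52.0, 68.0, 71.0, 80.0, 95.0, 120.0, 60.0, 74.0]
gm, gsd = lognormal_summary(droplets)
```

A GSD near 1 indicates a narrow, nearly monodisperse spray; the 1.5 to 1.8 values reported above indicate moderately broad distributions.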
NASA Astrophysics Data System (ADS)
Wu, Zhisheng; Tao, Ou; Cheng, Wei; Yu, Lu; Shi, Xinyuan; Qiao, Yanjiang
2012-02-01
This study demonstrated that near-infrared chemical imaging (NIR-CI) is a promising technology for visualizing the spatial distribution and homogeneity of Compound Liquorice Tablets. The starch distribution (and, indirectly, the plant extract) could be spatially determined using the basic analysis of correlation between analytes (BACRA) method. The correlation coefficients between the starch spectrum and the spectrum of each sample were greater than 0.95. Building on the accurate determination of starch distribution, a method to assess homogeneity of distribution was proposed using histogram graphs. The results demonstrated that the starch distribution in sample 3 was relatively heterogeneous according to four statistical parameters. Furthermore, the agglomerate domains in each tablet were detected using score image layers from principal component analysis (PCA). Finally, a novel method named Standard Deviation of Macropixel Texture (SDMT) was introduced to detect agglomerates and heterogeneity based on binary images. Each binary image was divided into macropixels of different side lengths, and the number of zero values in each macropixel was counted to calculate a standard deviation. A curve was then fitted to the relationship between the standard deviation and the macropixel side length. The results demonstrated inter-tablet heterogeneity of both the starch and total-compound distributions; at the same time, the similarity of the starch distribution and the inconsistency of the total-compound distribution within tablets were indicated by the slope and intercept parameters of the fitted curves.
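The SDMT idea can be sketched directly: tile a binary image into square macropixels of a given side length, count the zero-valued pixels in each tile, and take the standard deviation of those counts as a heterogeneity measure. Tiling and edge handling here are assumptions, not the authors' implementation:

```python
import statistics

def sdmt(binary_image, side):
    """Standard deviation of zero-counts over side x side macropixels.

    binary_image: list of equal-length rows of 0/1 values.
    Only full tiles are used; partial edge tiles are ignored (an assumption).
    """
    h, w = len(binary_image), len(binary_image[0])
    counts = []
    for r in range(0, h - side + 1, side):
        for c in range(0, w - side + 1, side):
            zeros = sum(1 for i in range(r, r + side)
                          for j in range(c, c + side)
                          if binary_image[i][j] == 0)
            counts.append(zeros)
    return statistics.pstdev(counts)

# Toy 4x4 binary image: one dense "agglomerate" of zeros in the top-left
img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 1, 1],
    [1, 1, 0, 1],
]
hetero = sdmt(img, 2)  # four 2x2 macropixels with zero-counts 4, 0, 0, 1
```

A perfectly homogeneous image gives identical counts in every macropixel and an SDMT of zero; clustered zeros drive the value up, which is how agglomerates are flagged.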
Ciarleglio, Maria M; Arendt, Christopher D; Peduzzi, Peter N
2016-06-01
When designing studies that have a continuous outcome as the primary endpoint, the hypothesized effect size (ES), that is, the hypothesized difference in means (Δ) relative to the assumed variability of the endpoint (σ), plays an important role in sample size and power calculations. Point estimates for Δ and σ are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed. This article presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of Δ and σ into the study's power calculation. Conditional expected power, which averages the traditional power curve using the prior distributions of Δ and σ as the averaging weight, is used, and the value of ES is found that equates the prespecified frequentist power (1 − β) and the conditional expected power of the trial. This hypothesized effect size is then used in traditional sample size calculations when determining sample size for the study. The value of ES found using this method may be expressed as a function of the prior means of Δ and σ and their prior standard deviations. We show that the "naïve" estimate of the effect size, that is, the ratio of prior means, should be down-weighted to account for the variability in the parameters. An example is presented for designing a placebo-controlled clinical trial testing the antidepressant effect of alprazolam as monotherapy for major depression. 
Through this method, we are able to formally integrate prior information on the uncertainty and variability of both the treatment effect and the common standard deviation into the design of the study while maintaining a frequentist framework for the final analysis. Solving for the effect size which the study has a high probability of correctly detecting, based on the available prior information on the difference Δ and the standard deviation σ, provides a valuable, substantiated estimate that can form the basis for discussion about the study's feasibility during the design phase. © The Author(s) 2016.
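The averaging step can be sketched by Monte Carlo integration: draw the mean difference and standard deviation from their priors, evaluate the classical power curve at each draw, and average. The normal-approximation power formula, the normal priors, and all numerical values below are illustrative assumptions, not the article's closed-form derivation:

```python
import math
import random

def classical_power(delta, sigma, n_per_arm, alpha_z=1.959963984540054):
    """Normal-approximation power of a two-sided two-sample test (alpha = 0.05)."""
    Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    ncp = abs(delta) / (sigma * math.sqrt(2.0 / n_per_arm))
    return Phi(ncp - alpha_z)

def conditional_expected_power(n_per_arm, mu_d, sd_d, mu_s, sd_s, draws=20000):
    """Average the classical power curve over normal priors on delta and sigma."""
    rng = random.Random(1)
    total = 0.0
    for _ in range(draws):
        delta = rng.gauss(mu_d, sd_d)
        sigma = max(1e-6, rng.gauss(mu_s, sd_s))  # crude truncation at zero
        total += classical_power(delta, sigma, n_per_arm)
    return total / draws

naive = classical_power(5.0, 10.0, 64)                  # plug-in prior means
cep = conditional_expected_power(64, 5.0, 2.0, 10.0, 2.0)
```

In the high-power region the power curve is concave, so the averaged (conditional expected) power falls below the naive plug-in power, which illustrates why the naive effect size should be down-weighted.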
Walzer, Andreas; Schausberger, Peter
2014-04-01
The adaptive canalization hypothesis predicts that highly fitness-relevant traits are canalized via past selection, resulting in low phenotypic plasticity and high robustness to environmental stress. Accordingly, we hypothesized that the level of phenotypic plasticity of male body size of the predatory mites Phytoseiulus persimilis (low plasticity) and Neoseiulus californicus (high plasticity) reflects the effects of body size variation on fitness, especially male lifetime reproductive success (LRS). We first generated small and standard-sized males of P. persimilis and N. californicus by rearing them to adulthood under limited and ample prey supply, respectively. Then, adult small and standard-sized males were provided with surplus virgin females throughout life to assess their mating and reproductive traits. Small male body size did not affect male longevity or the number of fertilized females but reduced male LRS in P. persimilis but not in N. californicus. Proximately, the lower LRS of small than standard-sized P. persimilis males correlated with shorter mating durations, probably decreasing the amount of transferred sperm. Ultimately, we suggest that male body size is more strongly canalized in P. persimilis than in N. californicus because deviation from standard body size has larger detrimental fitness effects in P. persimilis than in N. californicus.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, D; Meier, J; Mawlawi, O
Purpose: Use a NEMA-IEC PET phantom to assess the robustness of FDG-PET-based radiomics features to changes in reconstruction parameters across different scanners. Methods: We scanned a NEMA-IEC PET phantom on 3 different scanners (GE Discovery VCT, GE Discovery 710, and Siemens mCT) using a FDG source-to-background ratio of 10:1. Images were retrospectively reconstructed using different iterations (2–3), subsets (21–24), Gaussian filter widths (2, 4, 6 mm), and matrix sizes (128, 192, 256). The 710 and mCT used time-of-flight and point-spread-functions in reconstruction. The axial image through the center of the 6 active spheres was used for analysis. A region-of-interest containing all spheres was able to simulate a heterogeneous lesion due to partial volume effects. Maximum voxel deviations from all retrospectively reconstructed images (18 per scanner) were compared to our standard clinical protocol. PET images from 195 non-small cell lung cancer patients were used to compare feature variation. The ratio of a feature's standard deviation from the patient cohort versus the phantom images was calculated to assess feature robustness. Results: Across all images, the percentage of voxels differing by <1 SUV and <2 SUV ranged from 61–92% and 88–99%, respectively. Voxel-voxel similarity decreased when using higher resolution image matrices (192/256 versus 128) and was comparable across scanners. Taking the ratio of patient and phantom feature standard deviations was able to identify features that were not robust to changes in reconstruction parameters (e.g. co-occurrence correlation). Reasonably robust metrics (standard deviation ratios > 3) included routinely used SUV metrics (e.g. SUVmean and SUVmax) as well as some radiomics features (e.g. co-occurrence contrast, co-occurrence energy, standard deviation, and uniformity). Similar standard deviation ratios were observed across scanners. 
Conclusions: Our method enabled a comparison of feature variability across scanners and was able to identify features that were not robust to changes in reconstruction parameters.
Donegan, Thomas M.
2018-01-01
Abstract Existing models for assigning species, subspecies, or no taxonomic rank to populations which are geographically separated from one another were analyzed. This was done by subjecting over 3,000 pairwise comparisons of vocal or biometric data based on birds to a variety of statistical tests that have been proposed as measures of differentiation. One current model which aims to test diagnosability (Isler et al. 1998) is highly conservative, applying a hard cut-off that excludes from consideration differentiation below diagnosis. It also includes non-overlap as a requirement, a measure which penalizes increases in sample size. The “species scoring” model of Tobias et al. (2010) involves less drastic cut-offs but, unlike Isler et al. (1998), does not control adequately for sample size and in many cases attributes scores to differentiation which is not statistically significant. Four different models of assessing effect sizes were analyzed: using both pooled and unpooled standard deviations, and either controlling for sample size using t-distributions or omitting to do so. Pooled standard deviations produced more conservative effect sizes when uncontrolled for sample size but less conservative effect sizes when so controlled. Pooled models require assumptions to be made that are typically elusive or unsupported for taxonomic studies. Modifications to improve these frameworks are proposed, including: (i) introducing statistical significance as a gateway to attributing any weighting to findings of differentiation; (ii) abandoning non-overlap as a test; (iii) recalibrating Tobias et al. (2010) scores based on effect sizes controlled for sample size using t-distributions. A new universal method is proposed for measuring differentiation in taxonomy using continuous variables, and a formula is proposed for ranking allopatric populations. 
This is based first on calculating effect sizes using unpooled standard deviations, controlled for sample size using t-distributions, for a series of different variables. All non-significant results are excluded by scoring them as zero. The distance between any two populations is calculated by Euclidean summation of the non-zeroed effect size scores. If the score of an allopatric pair exceeds that of a related sympatric pair, then the allopatric population can be ranked as a species; if not, then at most subspecies rank should be assigned. A spreadsheet has been programmed and is being made available which allows this and the other tests of differentiation and rank studied in this paper to be analyzed rapidly. PMID:29780266
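The ranking recipe above can be sketched per variable: compute an unpooled effect size, zero it when a Welch t-test is non-significant, then combine the surviving scores by Euclidean summation. The fixed critical value of 2.0 stands in for the appropriate t-quantile, and the data are hypothetical; this is an illustration of the scheme, not the paper's spreadsheet:

```python
import math
import statistics

def welch_t(a, b):
    """Welch t-statistic for two samples with unpooled variances."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def scored_effect(a, b, t_crit=2.0):
    """Unpooled effect size, zeroed when |t| < t_crit (hypothetical cut-off)."""
    if abs(welch_t(a, b)) < t_crit:
        return 0.0
    sd_unpooled = math.sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
    return abs(statistics.mean(a) - statistics.mean(b)) / sd_unpooled

def euclidean_distance(pairs, t_crit=2.0):
    """Euclidean summation of the non-zeroed per-variable effect sizes."""
    return math.sqrt(sum(scored_effect(a, b, t_crit) ** 2 for a, b in pairs))

# Two hypothetical variables for a pair of populations: one divergent, one not
wing = ([61.0, 62.5, 60.8, 63.1, 61.9], [66.2, 67.0, 65.8, 66.9, 67.4])
song = ([2.1, 2.3, 2.2, 2.0, 2.4], [2.2, 2.1, 2.3, 2.2, 2.0])
dist = euclidean_distance([wing, song])
```

The non-significant song variable contributes exactly zero, so it cannot inflate the distance, which is the point of the significance gateway.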
Consideration of Kaolinite Interference Correction for Quartz Measurements in Coal Mine Dust
Lee, Taekhee; Chisholm, William P.; Kashon, Michael; Key-Schwartz, Rosa J.; Harper, Martin
2015-01-01
Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed “deviation,” not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. 
Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected. PMID:23767881
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
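The blinded re-estimation step can be sketched as follows: the interim variance is estimated from the pooled sample as if it were one group (so treatment labels stay blinded), then plugged into the standard normal-approximation sample-size formula for a two-sample comparison. The constants and interim data are illustrative assumptions, not the article's setup:

```python
import math
import statistics

def blinded_sample_size(pooled_obs, delta, z_alpha=1.959963984540054,
                        z_power=1.2815515655446004):
    """Per-arm n from the blinded (lumped one-sample) variance estimator.

    z_alpha: two-sided 0.05 quantile; z_power: quantile for 90% power.
    """
    s2 = statistics.variance(pooled_obs)  # one-sample variance, arms mixed
    return math.ceil(2 * s2 * (z_alpha + z_power) ** 2 / delta ** 2)

# Hypothetical interim observations with both arms pooled (blinded)
interim = [4.8, 6.1, 5.2, 7.0, 5.9, 6.4, 4.5, 6.8, 5.5, 6.2]
n_per_arm = blinded_sample_size(interim, delta=1.0)
```

Because the lumped variance absorbs part of the between-arm mean difference, it tends to overestimate the within-arm variance slightly, which is one source of the type I error behavior the article analyzes.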
The Rydberg constant and proton size from atomic hydrogen
NASA Astrophysics Data System (ADS)
Beyer, Axel; Maisenbacher, Lothar; Matveev, Arthur; Pohl, Randolf; Khabarova, Ksenia; Grinin, Alexey; Lamour, Tobias; Yost, Dylan C.; Hänsch, Theodor W.; Kolachevsky, Nikolai; Udem, Thomas
2017-10-01
At the core of the “proton radius puzzle” is a four-standard-deviation discrepancy between the proton root-mean-square charge radii (rp) determined from regular hydrogen (H) and from muonic hydrogen (µp). Using a cryogenic beam of H atoms, we measured the 2S-4P transition frequency in H, yielding the values of the Rydberg constant R∞ = 10973731.568076(96) per meter and rp = 0.8335(95) femtometer. Our rp value is 3.3 combined standard deviations smaller than the previous H world data, but in good agreement with the µp value. We motivate an asymmetric fit function, which eliminates line shifts from quantum interference of neighboring atomic resonances.
Are greenhouse gas emissions and cognitive skills related? Cross-country evidence.
Omanbayev, Bekhzod; Salahodjaev, Raufhon; Lynn, Richard
2018-01-01
Are greenhouse gas emissions (GHG) and cognitive skills (CS) related? We attempt to answer this question by exploring this relationship, using cross-country data for 150 countries, for the period 1997-2012. After controlling for the level of economic development, quality of political regimes, population size and a number of other controls, we document that CS robustly predict GHG. In particular, when CS at a national level increase by one standard deviation, the average annual rate of air pollution changes by nearly 1.7% (slightly less than one half of a standard deviation). This significance holds for a number of robustness checks. Copyright © 2017 Elsevier Inc. All rights reserved.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimizing probability of detection (POD) demonstration experiments that use the point estimate method. The optimization provides an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures; it uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size detected with minimum 90% probability at 95% confidence; this flaw size is denoted α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaws and α90. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, the 29-flaw set can be optimized to meet requirements on minimum PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
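The 29-sample, zero-miss demonstration follows directly from the binomial model. A minimal sketch of that arithmetic (the sample size and POD values are the standard ones cited above, not the paper's optimized flaw sets):

```python
def prob_pass_demo(pod: float, n: int = 29) -> float:
    """Probability of passing a zero-miss demonstration: all n flaws must
    be detected, each independently with probability `pod`."""
    return pod ** n

def confidence_of_zero_miss(pod_lower: float, n: int = 29) -> float:
    """Confidence that the true POD exceeds `pod_lower` after n/n hits:
    one minus the chance that a technique that weak passes anyway."""
    return 1.0 - pod_lower ** n

# A technique whose true POD is exactly 0.90 passes the 29/29 demonstration
# with probability 0.9^29, which is why 29 hits out of 29 certify 90% POD
# at about 95% confidence.
print(round(prob_pass_demo(0.90), 3))           # prints 0.047
print(round(confidence_of_zero_miss(0.90), 3))  # prints 0.953
```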
Vasudevamurthy, G.; Byun, T. S.; Pappano, Pete; ...
2015-03-13
Here we present a comparison of the measured baseline mechanical and physical properties of with-grain (WG) and against-grain (AG) non-ASTM size NBG-18 graphite. The objectives of the experiments were twofold: (1) assess the variation in properties with grain orientation; (2) establish a correlation between specimen tensile strength and size. The tensile strength of the smallest (4 mm diameter) specimens was about 5% higher than that of the standard (12 mm diameter) specimens but still within one standard deviation of the ASTM specimen size, indicating no significant dependence of strength on specimen size. The thermal expansion coefficient and elastic constants did not show significant dependence on specimen size. Lastly, experimental data indicated that the variation of the thermal expansion coefficient and elastic constants was still within 5% between the different grain orientations, confirming the isotropic nature of NBG-18 graphite in physical properties.
Osei, Ernest; Barnett, Rob
2015-01-01
The aim of this study is to provide guidelines for the selection of external-beam radiation therapy target margins to compensate for target motion in the lung during treatment planning. A convolution model was employed to predict the effect of target motion on the delivered dose distribution. The accuracy of the model was confirmed with radiochromic film measurements in both static and dynamic phantom modes. A total of 502 unique patient breathing traces were recorded during 4D CT and used to simulate the effect of target motion on a dose distribution. A 1D probability density function (PDF) representing the position of the target throughout the breathing cycle was generated from each breathing trace. Changes in the target D95 (the minimum dose received by 95% of the treatment target) due to target motion were analyzed and shown to correlate with the standard deviation of the PDF, as was the amount of target D95 recovered per millimeter of increased field width. The sensitivity of changes in dose coverage with respect to target size was also determined. Margin selection recommendations that can be used to compensate for loss of target D95 were generated from the simulation results and are discussed in the context of clinical plans. We conclude that, for PDF standard deviations less than 0.4 cm with target sizes greater than 5 cm, little or no additional margin is required. Targets smaller than 5 cm with PDF standard deviations larger than 0.4 cm are most susceptible to loss of coverage. The largest additional margin required in this study was 8 mm. PACS numbers: 87.53.Bn, 87.53.Kn, 87.55.D‐, 87.55.Gh
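The convolution step can be sketched numerically: blur an idealized static dose profile with a 1D motion PDF and watch the minimum target dose erode. The field size, motion amplitude, and grid below are illustrative choices, not the study's parameters:

```python
import numpy as np

x = np.arange(-60, 61)                        # position grid in mm
dose = np.where(np.abs(x) <= 25, 100.0, 0.0)  # idealized flat 5 cm field

sigma = 4.0                                   # motion PDF std dev: 4 mm (0.4 cm)
pdf = np.exp(-0.5 * (x / sigma) ** 2)
pdf /= pdf.sum()                              # normalize so blurring conserves dose

# Motion-averaged dose = static dose profile convolved with the position PDF.
blurred = np.convolve(dose, pdf, mode="same")

# Minimum dose inside a 4 cm target: motion erodes coverage near the edges,
# which is what wider margins are meant to recover.
target = np.abs(x) <= 20
print(dose[target].min(), round(blurred[target].min(), 1))
```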
Measuring lip force by oral screens. Part 1: Importance of screen size and individual variability.
Wertsén, Madeleine; Stenberg, Manne
2017-06-01
To reduce drooling and facilitate food transport in the rehabilitation of patients with oral motor dysfunction, lip force can be trained using an oral screen. Longitudinal studies evaluating the effect of training require objective methods. The aims of this study were to evaluate a method for measuring lip strength; to investigate normal values and the fluctuation of lip force in healthy adults on one occasion and over time; to study how the size of the screen affects the force; to evaluate the most appropriate measure of reliability; and to compare the force produced between genders. Three different sizes of oral screens were used to measure the lip force of 24 healthy adults on 3 different occasions over a period of 6 months, using an apparatus based on a strain gauge. The maximum lip force evaluated with this method depends on the area of the screen. By calculating the projected area of the screen, the lip force could be normalized to an oral screen pressure expressed in kPa, which can be used to compare measurements from screens of different sizes. Both the mean value and the standard deviation were shown to vary between individuals. The study showed no differences regarding gender and only small variation with age. Normal variation over time (months) may be up to 3 times greater than the standard error of measurement on a given occasion. The lip force increases in relation to the projected area of the screen. No general standard deviation can be assigned to the method, and all measurements should be analyzed individually on the basis of oral screen pressure to compensate for different screen sizes.
ERIC Educational Resources Information Center
Arendasy, Martin E.; Sommer, Markus
2010-01-01
In complex three-dimensional mental rotation tasks males have been reported to score up to one standard deviation higher than females. However, this effect size estimate could be compromised by the presence of gender bias at the item level, which calls the validity of purely quantitative performance comparisons into question. We hypothesized that…
[Online endpoint detection algorithm for blending process of Chinese materia medica].
Lin, Zhao-Zhou; Yang, Chan; Xu, Bing; Shi, Xin-Yuan; Zhang, Zhi-Qiang; Fu, Jing; Qiao, Yan-Jiang
2017-03-01
The blending process, an essential part of pharmaceutical preparation, has a direct influence on the homogeneity and stability of solid dosage forms. Since the official release of the Guidance for Industry PAT, online process analysis techniques have been increasingly reported for blending processes, but research on endpoint detection algorithms is still at an early stage. By progressively increasing the window size of the moving block standard deviation (MBSD), a novel endpoint detection algorithm was proposed to extend the plain MBSD from the off-line to the online scenario and was used to determine the endpoint of the blending process of Chinese medicine dispensing granules. By tuning the window size online, status changes of the materials during blending were reflected in the calculation of the standard deviation in real time. The proposed method was tested separately on the blending processes of dextrin and three extracts of traditional Chinese medicine. All of the results showed that, compared with the traditional MBSD method, the proposed MBSD method with a progressively increasing window size more clearly reflected the status changes of the materials in the blending process, making it suitable for online application. Copyright© by the Chinese Pharmaceutical Association.
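The difference between a fixed and a growing window can be shown on synthetic data. This is a mechanical sketch of the window-size idea only; the signal and the paper's exact window-update rule are not reproduced:

```python
import numpy as np

def mbsd_fixed(signal, window):
    """Plain moving block standard deviation with a fixed window size."""
    return np.array([np.std(signal[i - window:i], ddof=1)
                     for i in range(window, len(signal) + 1)])

def mbsd_growing(signal, start=3):
    """One reading of a progressively increasing window: block i covers all
    points observed so far, so the statistic carries the full history."""
    return np.array([np.std(signal[:i], ddof=1)
                     for i in range(start, len(signal) + 1)])

# Synthetic blending record: heterogeneous start, then well-mixed material.
rng = np.random.default_rng(0)
sig = np.concatenate([rng.normal(0, 1.0, 30), rng.normal(0, 0.05, 30)])

# The fixed window forgets the early variability; the growing window retains
# it, so its final value still reflects the whole status change.
print(round(mbsd_fixed(sig, 5)[-1], 3), round(mbsd_growing(sig)[-1], 3))
```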
Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A
2013-07-01
Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. The PSD values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0·f^β, was estimated using a reference phantom method. The PSD was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter-estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter-estimation regions, as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the region; here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound.
CORRELATION OF FLORIDA SOIL-GAS PERMEABILITIES WITH GRAIN SIZE, MOISTURE, AND POROSITY
The report describes a new correlation for predicting gas permeabilities of undisturbed or recompacted soils from their average grain diameter (d), moisture saturation factor (m), and porosity (p). The correlation exhibits a geometric standard deviation (GSD) of only 1.27 between m...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedrich, Jon M.; Rivers, Mark L.; Perlowitz, Michael A.
We show that synchrotron x-ray microtomography (µCT) followed by digital data extraction can be used to examine the size distribution and particle morphologies of the polydisperse (750 to 2450 µm diameter) particle size standard NIST 1019b. Our size distribution results are within errors of certified values with data collected at 19.5 µm/voxel. One of the advantages of using µCT to investigate the particles examined here is that the morphology of the glass beads can be directly examined. We use the shape metrics aspect ratio and sphericity to examine individual standard bead morphologies as a function of spherical equivalent diameter. We find that the majority of standard beads possess near-spherical aspect ratios and sphericities, but deviations are present at the lower end of the size range. The majority (> 98%) of particles also possess an equant form when examined using a common measure of equidimensionality. Although the NIST 1019b standard consists of loose particles, we point out that an advantage of µCT is that coherent materials comprised of particles can be examined without disaggregation.
Williams, Rachel E; Arabi, Mazdak; Loftis, Jim; Elmund, G Keith
2014-09-01
Implementation of numeric nutrient standards in Colorado has prompted a need for greater understanding of human impacts on ambient nutrient levels. This study explored the variability of annual nutrient concentrations due to upstream anthropogenic influences and developed a mathematical expression for the number of samples required to estimate median concentrations for standard compliance. A procedure grounded in statistical hypothesis testing was developed to estimate the number of annual samples required at monitoring locations while taking into account the difference between the median concentrations and the water quality standard for a lognormal population. For the Cache La Poudre River in northern Colorado, the relationship between the median and standard deviation of total N (TN) and total P (TP) concentrations and the upstream point and nonpoint concentrations and general hydrologic descriptors was explored using multiple linear regression models. Very strong relationships were evident between the upstream anthropogenic influences and annual medians for TN and TP (R² > 0.85, p < 0.001) and corresponding standard deviations (R² > 0.7, p < 0.001). Sample sizes required to demonstrate (non)compliance with the standard depend on the measured water quality conditions. When the median concentration differs from the standard by >20%, few samples are needed to reach a 95% confidence level. When the median is within 20% of the corresponding water quality standard, however, the required sample size increases rapidly, and hundreds of samples may be required. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
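On the log scale, testing a lognormal median against a fixed standard becomes a test on the mean, so the familiar normal-approximation sample size formula applies. A sketch under assumed values (the concentrations, log-scale SD, and power below are illustrative, not the paper's fitted quantities):

```python
from math import log
from statistics import NormalDist

def samples_needed(median, standard, sigma_log, alpha=0.05, power=0.8):
    """Approximate number of annual samples for a one-sided test that the
    lognormal median differs from the standard:
    n = ((z_{1-alpha} + z_{power}) * sigma_log / |ln(median/standard)|)^2."""
    z = NormalDist().inv_cdf
    delta = abs(log(median / standard))
    n = ((z(1 - alpha) + z(power)) * sigma_log / delta) ** 2
    return int(n) + 1

# Hypothetical TP medians tested against a 0.10 mg/L standard, sigma_log = 0.8:
print(samples_needed(0.06, 0.10, 0.8))    # well below the standard: few samples
print(samples_needed(0.095, 0.10, 0.8))   # within ~5%: hundreds of samples
```

The blow-up as the median approaches the standard mirrors the abstract's finding that compliance monitoring near the standard may require hundreds of samples.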
Laser Pulse Shaping for Low Emittance Photo-Injector
2012-06-01
It depends on the product of the beam's transverse size and angular divergence, ε = σx·σx′ (I.2), where σ is the standard deviation of the electron … shows the pendulum's phase velocity as a function of the position θp. As the pendulum oscillates back and forth, its phase, or angular, velocity … the angular divergence and size of the optical beam. The radius of the optical beam follows the equation … To guarantee proper transfer
Bayesian Estimation Supersedes the "t" Test
ERIC Educational Resources Information Center
Kruschke, John K.
2013-01-01
Bayesian estimation for 2 groups provides complete distributions of credible values for the effect size, group means and their difference, standard deviations and their difference, and the normality of the data. The method handles outliers. The decision rule can accept the null value (unlike traditional "t" tests) when certainty in the estimate is…
Min and Max Exponential Extreme Interval Values and Statistics
ERIC Educational Resources Information Center
Jance, Marsha; Thomopoulos, Nick
2009-01-01
The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g[subscript a] is defined as a…
Code of Federal Regulations, 2010 CFR
2010-01-01
... (CONTINUED) ASSISTANCE REGULATIONS THE OFFICE OF ENERGY RESEARCH FINANCIAL ASSISTANCE PROGRAM § 605.15 Fee... which is a small business concern as qualified under the criteria and size standards of 13 CFR part 121... be paid to other entities except as a deviation from 10 CFR part 600, nor shall fees be paid under...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Congwu; Zeng, Grace G.; Department of Radiation Oncology, University of Toronto, Toronto, ON
2014-08-15
We investigated the setup variations over the treatment courses of 113 patients with intact prostate treated with 78 Gy in 39 fractions. Institutional standard bladder and bowel preparation and image guidance protocols were used in CT simulation and treatment. The RapidArc treatment plans were optimized in the Varian Eclipse treatment planning system and delivered on Varian 2100X Clinacs equipped with an On-Board Imager to localize the target before beam-on. The setup variations were calculated in terms of the mean and standard deviation of couch shifts. No correlation was observed between the mean shift and standard deviation over the treatment course and patient age, initial prostate volume, or rectum size. The mean shifts in the first and last 5 fractions are highly correlated (P < 10⁻¹⁰), while the correlation of the standard deviations could not be determined. Mann-Kendall tests indicate trends in the mean daily Ant-Post and Sup-Inf shifts of the group. The target is inferior by ∼1 mm to the planned position when treatment starts, moves superiorly to approach the planned position at the 10th fraction, and then gradually moves back inferiorly by ∼1 mm in the remaining fractions. In the Ant-Post direction, the prostate gradually moves posteriorly during the treatment course, from a mean shift of ∼2.5 mm in the first fraction to ∼1 mm in the last fraction, possibly related to a systematic rectum size change as treatment progresses. The biased mean shifts in the Ant-Post and Sup-Inf directions of most patients suggest a systematically larger rectum and smaller bladder during treatment than at CT simulation.
Resistance Training Increases the Variability of Strength Test Scores
2009-06-08
… standard deviations for pretest and posttest strength measurements. This information was recorded for every strength test used in a total of 377 samples … significant if the posttest standard deviation consistently was larger than the pretest standard deviation. This condition could be satisfied even if … the difference in the standard deviations was small. For example, the posttest standard deviation might be 1% larger than the pretest standard …
Quan, Hui; Zhang, Ji
2003-09-15
Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference in the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline in the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
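When the untransformed variable can be assumed lognormal, the standard deviation on the log scale follows in closed form from the arithmetic mean and SD; this identity is the core of such estimates. A sketch with a simulation check (the lognormal assumption only, not the paper's full interval or change-from-baseline machinery):

```python
import numpy as np

def log_sd_from_arithmetic(mean, sd):
    """For a lognormal X with arithmetic mean m and SD s, the SD of ln(X)
    is sigma = sqrt(ln(1 + (s/m)^2))."""
    return np.sqrt(np.log1p((sd / mean) ** 2))

# Simulation check: draw from a lognormal with known log-scale sigma = 0.5,
# then recover it from the sample's arithmetic mean and SD alone.
rng = np.random.default_rng(2)
x = rng.lognormal(mean=1.0, sigma=0.5, size=200_000)
est = log_sd_from_arithmetic(x.mean(), x.std(ddof=1))
print(round(est, 2))   # close to 0.5
```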
When things go pear shaped: contour variations of contacts
NASA Astrophysics Data System (ADS)
Utzny, Clemens
2013-04-01
Traditional control of critical dimensions (CD) on photolithographic masks considers the CD average and a measure of CD variation such as the CD range or the standard deviation. Systematic CD deviations from the mean, such as CD signatures, are also subject to control. These measures are valid for mask quality verification as long as patterns across a mask exhibit only size variations and no shape variation. The issue of shape variations becomes especially important for contact holes on EUV masks, where the CD error budget is much smaller than for standard optical masks. This means that small deviations from the contact shape can impact EUV wafer prints, in the sense that contact shape deformations induce asymmetric bridging phenomena. In this paper we present a detailed study of contact shape variations based on regular product data. Two data sets are analyzed: 1) contacts of varying target size and 2) a regularly spaced field of contacts. The methods of statistical shape analysis are used to analyze CD-SEM generated contour data. We demonstrate that contacts on photolithographic masks show not only size variations but also pronounced, nontrivial shape variations. In our data sets we find shape variations which can be interpreted as asymmetrical shape squeezing and contact rounding. We thus demonstrate the limitations of classic CD measures for describing feature variations on masks. Furthermore, we show how the methods of statistical shape analysis can be used to quantify the contour variations, paving the way to a new understanding of mask linearity and its specification.
Measuring (subglacial) bedform orientation, length, and longitudinal asymmetry - Method assessment.
Jorge, Marco G; Brennand, Tracy A
2017-01-01
Geospatial analysis software provides a range of tools that can be used to measure landform morphometry. Often, a metric can be computed with different techniques that may give different results. This study is an assessment of 5 different methods for measuring longitudinal, or streamlined, subglacial bedform morphometry: orientation, length and longitudinal asymmetry, all of which require defining a longitudinal axis. The methods use the standard deviational ellipse (not previously applied in this context), the longest straight line fitting inside the bedform footprint (2 approaches), the minimum-size footprint-bounding rectangle, and Euler's approximation. We assess how well these methods replicate morphometric data derived from a manually mapped (visually interpreted) longitudinal axis, which, though subjective, is the most typically used reference. A dataset of 100 subglacial bedforms covering the size and shape range of those in the Puget Lowland, Washington, USA is used. For bedforms with elongation > 5, deviations from the reference values are negligible for all methods but Euler's approximation (length). For bedforms with elongation < 5, most methods had small mean absolute error (MAE) and median absolute deviation (MAD) for all morphometrics and thus can be confidently used to characterize the central tendencies of their distributions. However, some methods are better than others. The least precise methods are the ones based on the longest straight line and Euler's approximation; using these for statistical dispersion analysis is discouraged. Because the standard deviational ellipse method is relatively shape invariant and closely replicates the reference values, it is the recommended method. Speculatively, this study may also apply to negative-relief, and fluvial and aeolian bedforms.
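The standard deviational ellipse reduces to an eigen-decomposition of the coordinate covariance of the footprint points: the major eigenvector gives the long-axis orientation, and the eigenvalue ratio gives elongation. A minimal sketch on a synthetic footprint (not the study's data or a GIS implementation):

```python
import numpy as np

def sde_axis(points):
    """Standard deviational ellipse of 2D points: returns the long-axis
    orientation (degrees, 0-180) and the major/minor axis ratio."""
    pts = np.asarray(points, dtype=float)
    vals, vecs = np.linalg.eigh(np.cov(pts.T))   # eigenvalues ascending
    major = vecs[:, np.argmax(vals)]
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    return angle, float(np.sqrt(vals.max() / vals.min()))

# Synthetic drumlin-like outline: a 5:1 ellipse of points rotated by 30 deg.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
outline = np.c_[5 * np.cos(t), np.sin(t)]
rot = np.radians(30)
R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
angle, elongation = sde_axis(outline @ R.T)
print(round(angle, 1), round(elongation, 1))   # recovers 30.0 and 5.0
```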
CT differentiation of 1-2-cm gallbladder polyps: benign vs malignant.
Song, E Rang; Chung, Woo-Suk; Jang, Hye Young; Yoon, Minjae; Cha, Eun Jung
2014-04-01
To evaluate MDCT findings of 1-2-cm gallbladder (GB) polyps for differentiation between benign and malignant polyps. Institutional review board approval was obtained, and informed consent was waived. Portal venous phase CT scans of 1-2-cm GB polyps caused by various pathologic conditions were retrospectively reviewed by two blinded observers. Among the 36 patients identified, 21 had benign polyps and the remaining 15 had malignant polyps. Size, margin, and shape of the GB polyps were evaluated. Attenuation values of the polyps, including the mean attenuation, maximum attenuation, and standard deviation, were recorded, and the degree of polyp enhancement was evaluated by visual inspection. Using these CT findings, each of the two radiologists assessed and recorded individual diagnostic confidence for differentiating benign versus malignant polyps on a 5-point scale. The diagnostic performance of CT was evaluated using receiver operating characteristic curve analysis. There was no significant difference in size between benign and malignant GB polyps. An ill-defined margin and sessile morphology were significantly associated with malignant polyps. There was a significant difference in mean and maximum attenuation values between benign and malignant GB polyps, and the mean standard deviation value of malignant polyps was significantly higher than that of benign polyps. All malignant polyps showed either hyperenhancement or marked hyperenhancement. The Az value for the diagnosis of malignant GB polyps was 0.905. Margin, shape, and enhancement degree are helpful in differentiating between benign and malignant 1-2-cm polyps.
Automation of the anthrone assay for carbohydrate concentration determinations.
Turula, Vincent E; Gore, Thomas; Singh, Suddham; Arumugham, Rasappa G
2010-03-01
Reported is the adaptation of a manual polysaccharide assay applicable to glycoconjugate vaccines such as Prevenar to an automated liquid handling system (LHS) for improved performance. The anthrone assay is used for carbohydrate concentration determinations and was scaled to the microtiter plate format with appropriate mixing, dispensing, and measuring operations. Adaptation and development of the LHS platform was performed with both dextran polysaccharides of various sizes and pneumococcal serotype 6A polysaccharide (PnPs 6A). A standard plate configuration was programmed such that the LHS diluted both calibration standards and a test sample multiple times with six replicate preparations per dilution. This extent of replication minimized the effect of any single deviation or delivery error that might have occurred. Analysis of the dextran polymers ranging in size from 214 kDa to 3.755 MDa showed that, regardless of polymer chain length, the hydrolysis was complete, as evident by uniform concentration measurements. No plate positional absorbance bias was observed; of 12 plates analyzed to examine positional bias, the largest deviation observed was 0.02% relative standard deviation (%RSD). The high-purity dextran also afforded the opportunity to assess LHS accuracy; nine replicate analyses of dextran yielded a mean accuracy of 101% recovery. As for precision, a total of 22 unique analyses were performed on a single lot of PnPs 6A, and the resulting variability was 2.5% RSD. This work demonstrated the capability of an LHS to perform the anthrone assay consistently and a reduced assay cycle time for greater laboratory capacity.
Comparison of estimators of standard deviation for hydrologic time series
Tasker, Gary D.; Gilroy, Edward J.
1982-01-01
Unbiasing factors, as a function of serial correlation ρ and sample size n, for the sample standard deviation of a lag-one autoregressive model were generated by random number simulation. Monte Carlo experiments were used to compare the performance of several alternative methods for estimating the standard deviation σ of a lag-one autoregressive model in terms of bias, root mean square error, probability of underestimation, and expected opportunity design loss. Three methods provided estimates of σ that were much less biased, but had greater mean square errors, than the usual estimate s = [(1/(n − 1)) Σ(xᵢ − x̄)²]^(1/2). The three methods may be briefly characterized as (1) a method using a maximum likelihood estimate of the unbiasing factor, (2) a method using an empirical Bayes estimate of the unbiasing factor, and (3) a robust nonparametric estimate of σ suggested by Quenouille. Because s tends to underestimate σ, its use as an estimate of a model parameter results in a tendency to underdesign. If underdesign losses are considered more serious than overdesign losses, then the choice of one of the less biased methods may be wise.
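The underestimation of σ by s under positive serial correlation is easy to reproduce by simulation. A minimal sketch of the Monte Carlo setup (ρ, n, and the replication count are illustrative choices, not those of the paper):

```python
import numpy as np

def ar1(n, rho, rng):
    """Lag-one autoregressive series with unit marginal variance."""
    x = np.empty(n)
    x[0] = rng.normal()
    for i in range(1, n):
        x[i] = rho * x[i - 1] + rng.normal(scale=np.sqrt(1.0 - rho ** 2))
    return x

rng = np.random.default_rng(3)
n, rho, reps = 20, 0.6, 20_000
s = np.array([np.std(ar1(n, rho, rng), ddof=1) for _ in range(reps)])

# With rho = 0.6 and n = 20, the mean of s falls well below the true sigma = 1,
# illustrating the tendency to underdesign when s is used directly.
print(round(s.mean(), 2))
```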
7 CFR 400.204 - Notification of deviation from standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 6 2010-01-01 2010-01-01 false Notification of deviation from standards. 400.204... Contract-Standards for Approval § 400.204 Notification of deviation from standards. A Contractor shall advise the Corporation immediately if the Contractor deviates from the requirements of these standards...
Long-term reproducibility of relative sensitivity factors obtained with CAMECA Wf
NASA Astrophysics Data System (ADS)
Gui, D.; Xing, Z. X.; Huang, Y. H.; Mo, Z. Q.; Hua, Y. N.; Zhao, S. P.; Cha, L. Z.
2008-12-01
As the wafer size continues to increase and the feature size of integrated circuits (IC) continues to shrink, process control of IC manufacturing becomes ever more important to reduce the cost of failures caused by the drift of processes or equipment. Characterization tools with high precision and reproducibility are required to capture any abnormality of the process. Although secondary ion mass spectrometry (SIMS) has been widely used in dopant profile control, it has been reported that magnetic sector SIMS, compared to quadrupole SIMS, has lower short-term repeatability and long-term reproducibility due to the high extraction field applied between the sample and the extraction lens. In this paper, we demonstrate that the CAMECA Wf can deliver high long-term reproducibility because of its high level of automation and improved immersion lens design. The relative standard deviations (R.S.D.) of the relative sensitivity factors (RSF) of three typical elements, boron (B), phosphorus (P) and nitrogen (N), over 3 years are 3.7%, 5.5% and 4.1%, respectively. These high reproducibility results have the practical implication that deviations can be estimated without testing the standards.
Zhou, Tianyu; Ding, Jie; Wang, Qiang; Xu, Yuan; Wang, Bo; Zhao, Li; Ding, Hong; Chen, Yanhua; Ding, Lan
2018-03-01
Monodisperse superhydrophilic melamine formaldehyde resorcinol resin (MFR) microspheres were prepared in 90 min at 85 °C via a microwave-assisted method with a yield of 60.6%. The obtained MFR microspheres exhibited a narrow size distribution with an average particle size of about 2.5 µm. The MFR microspheres were used as adsorbents to detect triazines in juices, followed by high performance liquid chromatography tandem mass spectrometry. Various factors affecting the extraction efficiency were investigated. Under the optimized conditions, the method exhibited excellent linearity in the range of 1-250 µg L⁻¹ (R² ≥ 0.9994) and low detection limits (0.3-0.65 µg L⁻¹). The relative standard deviations of intra- and inter-day analyses ranged from 3% to 7% and from 2% to 7%, respectively. The method was applied to determine six triazines in three juice samples. At the spiked level of 3 µg L⁻¹, the recoveries were in the range of 90-99% with relative standard deviations ≤ 8%. Copyright © 2017 Elsevier B.V. All rights reserved.
New Evidence on the Relationship Between Climate and Conflict
NASA Astrophysics Data System (ADS)
Burke, M.
2015-12-01
We synthesize a large new body of research on the relationship between climate and conflict. We consider many types of human conflict, ranging from interpersonal conflict -- domestic violence, road rage, assault, murder, and rape -- to intergroup conflict -- riots, coups, ethnic violence, land invasions, gang violence, and civil war. After harmonizing statistical specifications and standardizing estimated effect sizes within each conflict category, we implement a meta-analysis that allows us to estimate the mean effect of climate variation on conflict outcomes as well as quantify the degree of variability in this effect size across studies. Looking across more than 50 studies, we find that deviations from moderate temperatures and precipitation patterns systematically increase the risk of conflict, often substantially, with average effects that are highly statistically significant. We find that contemporaneous temperature has the largest average effect by far, with each 1 standard deviation increase toward warmer temperatures increasing the frequency of contemporaneous interpersonal conflict by 2% and of intergroup conflict by more than 10%. We also quantify substantial heterogeneity in these effect estimates across settings.
Sparse feature learning for instrument identification: Effects of sampling and pooling methods.
Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu
2016-05-01
Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both sampling methods. For summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are explored, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
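The intuition for standard deviation pooling is that it preserves temporal fluctuation that max- and average-pooling discard. A toy sketch with made-up activations (not the learned sparse features):

```python
import numpy as np

def pool(acts, method):
    """Aggregate a (frames x features) activation matrix over time."""
    if method == "max":
        return acts.max(axis=0)
    if method == "mean":
        return acts.mean(axis=0)
    if method == "std":
        return acts.std(axis=0, ddof=1)
    raise ValueError(method)

# Two hypothetical feature activations with the same max and nearly the same
# mean, but very different temporal behaviour.
steady = np.full(100, 0.5); steady[0] = 1.0   # mostly constant
bursty = np.zeros(100); bursty[::2] = 1.0     # alternating on/off
acts = np.c_[steady, bursty]                  # shape (100, 2)

print(pool(acts, "max"))    # identical for both features
print(pool(acts, "mean"))   # nearly identical
print(pool(acts, "std"))    # separates the two clearly
```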
Accounting for body size deviations when reporting bone mineral density variables in children.
Webber, C E; Sala, A; Barr, R D
2009-01-01
In a child, bone mineral density (BMD) may differ from an age-expected normal value, not only because of the presence of disease, but also because of deviations of height or weight from population averages. Appropriate adjustment for body size deviations simplifies interpretation of BMD measurements. For children, a bone mineral density (BMD) measurement is normally expressed as a Z score. Interpretation is complicated when weight or height distinctly differ from age-matched children. We develop a procedure to allow for the influence of body size deviations upon measured BMD. We examined the relation between body size deviation and spine, hip and whole body BMD deviation in 179 normal children (91 girls). Expressions were developed that allowed derivation of an expected BMD based on age, gender and body size deviation. The difference between measured and expected BMD was expressed as a HAW score (Height-, Age-, Weight-adjusted score). In a second independent sample of 26 normal children (14 girls), measured spine, total femur and whole body BMD all fell within the same single normal range after accounting for age, gender and body size deviations. When traditional Z scores and HAW scores were compared in 154 children, 17.5% showed differences of more than 1 unit and such differences were associated with height and weight deviations. For almost 1 in 5 children, body size deviations influence BMD to an extent that could alter clinical management.
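A HAW-style score follows the same arithmetic as a Z score, only measured against a body-size-adjusted expectation rather than an age-only norm. A minimal sketch, where the expected BMD and population SD are hypothetical values that would come from the paper's regression expressions:

```python
def size_adjusted_score(measured_bmd, expected_bmd, population_sd):
    """HAW-style score: deviation of measured BMD from a height-, age- and
    weight-adjusted expectation, in population-SD units.
    expected_bmd is assumed to come from a regression on age, sex and body size."""
    return (measured_bmd - expected_bmd) / population_sd

# Hypothetical numbers: a child measuring 0.85 g/cm2 against an expected 0.80 g/cm2
score = size_adjusted_score(measured_bmd=0.85, expected_bmd=0.80, population_sd=0.05)
```

With these numbers the child sits one population SD above the size-adjusted expectation; a traditional Z score computed against an age-only norm could differ by more than a unit for a child far from average height or weight, which is the discrepancy the paper quantifies.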
The Standard Deviation of Launch Vehicle Environments
NASA Technical Reports Server (NTRS)
Yunis, Isam
2005-01-01
Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3dB is a conservative and reasonable standard deviation for the source environment and the payload environment.
[Biomechanical significance of the acetabular roof and its reaction to mechanical injury].
Domazet, N; Starović, D; Nedeljković, R
1999-01-01
The introduction of morphometry into the quantitative analysis of the bone system and of the functional adaptation of the acetabulum to mechanical damage and injury enabled a relatively simple and acceptable examination of morphological acetabular changes in patients with damaged hip joints. Measurements of the depth and form of the acetabulum can be made by radiological methods, computerized tomography and ultrasound (1-9). The aim of the study was to obtain data, by morphometric analyses, on the behaviour of the acetabular roof, the so-called "eyebrow", under different mechanical injuries. Clinical studies of the effect of different loads on the acetabular roof were carried out in 741 patients. Radiographic findings of 400 men and 341 women were analysed. The control group was composed of 148 patients with normal hip joints. The average age of the patients was 54.7 years and that of control subjects 52.0 years. Data processing was done for all examined patients. On the basis of our measurements, the average size of the female "eyebrow" ranged from 24.8 mm to 31.5 mm with a standard deviation of 0.93, and in men from 29.4 mm to 40.3 mm with a standard deviation of 1.54. The average size in the whole population was 32.1 mm with a standard deviation of 15.61. Statistical analyses revealed a significant correlation between age and "eyebrow" size in men (r = 0.124; p < 0.05); the relationship was statistically significant and inversely proportional (Graph 1). In female patients, however, the correlation coefficient was not statistically significant (r = 0.060; p > 0.05). The examination of the size of the collodiaphysial angle and the length of the "eyebrow" revealed that "eyebrow" length was in inverse proportion to the size of the collodiaphysial angle (r = 0.113; p < 0.05). The average "eyebrow" length in relation to the size of the collodiaphysial angle ranged from 21.3 mm to 35.2 mm with a standard deviation of 1.60.
There was no statistically significant correlation between the "eyebrow" size and Wiberg's angle in male (r = 0.049; p > 0.05) or female (r = 0.005; p > 0.05) patients. The "eyebrow" length was proportionally dependent on the size of the shortened extremity in all examined subjects. This dependence was statistically significant in both female (r = 0.208; p < 0.05) and male (r = 0.193; p < 0.05) patients. The study revealed that the fossa acetabuli was directed forward, downward and laterally. The size, form and cross-section of the acetabulum changed under different loads. Dimensions and morphological changes in the acetabulum showed slight but insignificant differences in comparison with the control group. These findings are graphically presented in Figure 5 and numerically in Tables 1 and 2. The study of spatial orientation among hip joints revealed that the fossa acetabuli was directed forward, downward and laterally; this was in accordance with the results of other authors (1, 7, 9, 15, 18). There was a statistically significant difference in "eyebrow" size between patients and normal subjects (t = 3.88; p < 0.05). The average difference in "eyebrow" size was 6.892 mm. A larger "eyebrow" was found in patients with a normally loaded hip. There was also a significant difference in "eyebrow" size between patients and healthy female subjects (t = 4.605; p < 0.05). A larger "eyebrow", by 8.79 mm, was found in female subjects with a normally loaded hip. On the basis of our study it can be concluded that findings related to changes in the acetabular roof, the so-called "eyebrow", are important in the diagnosis, follow-up and therapy of the pathogenetic processes of these disorders.
Implementation of small field radiotherapy dosimetry for a spinal metastasis case
NASA Astrophysics Data System (ADS)
Rofikoh, Wibowo, W. E.; Pawiro, S. A.
2017-07-01
The main objective of this study was to determine the dose profile of small field radiotherapy in a spinal metastasis case using source-axis distance (SAD) techniques. In addition, we evaluated and compared the dose planning of stereotactic body radiation therapy (SBRT) and conventional techniques against measurements with Exradin A16 and Gafchromic EBT3 film dosimeters. The results showed that EBT3 film had the highest precision and accuracy, with an average standard deviation of ±1.7 and a maximum discrepancy of 2.6 %. In addition, the average value of the Full Width at Half Maximum (FWHM) and its largest deviation for the small field size of 0.8 x 0.8 cm2 were 0.82 cm and 16.3 %, respectively, while they were around 2.36 cm and 3 % for the field size of 2.4 x 2.4 cm2. The ratio of penumbra width to collimation was around 37.1 % for the field size of 0.8 x 0.8 cm2 and 12.4 % for the field size of 2.4 x 2.4 cm2.
Loewe, Axel; Schulze, Walther H W; Jiang, Yuan; Wilhelms, Mathias; Luik, Armin; Dössel, Olaf; Seemann, Gunnar
2015-01-01
In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed if an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject specifically optimized electrode were similar to those of the BSPM: 2-11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased 12-lead detection rate by 7% for a reasonable threshold.
7 CFR 400.174 - Notification of deviation from financial standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 6 2010-01-01 2010-01-01 false Notification of deviation from financial standards... Agreement-Standards for Approval; Regulations for the 1997 and Subsequent Reinsurance Years § 400.174 Notification of deviation from financial standards. An insurer must immediately advise FCIC if it deviates from...
Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks
2016-04-01
Allan deviation will be represented by σ and standard deviation by δ. In practice, when the Allan deviation of a ... the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by ... measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard.
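A minimal, non-overlapping Allan deviation from fractional-frequency samples can be computed as follows. This is only the basic estimator at the base averaging time; the report itself concerns the degrees of freedom of such estimates across multiple clocks.

```python
import math

def allan_deviation(y):
    """Non-overlapping Allan deviation at the basic averaging time,
    from a list of fractional-frequency samples y."""
    # Allan variance: half the mean squared first difference of y
    diffs = [(y[i + 1] - y[i]) ** 2 for i in range(len(y) - 1)]
    avar = sum(diffs) / (2.0 * (len(y) - 1))
    return math.sqrt(avar)

adev = allan_deviation([1.0, 2.0, 3.0, 2.0, 1.0])
```

Unlike the standard deviation about the mean, this first-difference statistic converges for the divergent noise types (e.g. flicker frequency noise) common in frequency standards.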
The Adequacy of Different Robust Statistical Tests in Comparing Two Independent Groups
ERIC Educational Resources Information Center
Pero-Cebollero, Maribel; Guardia-Olmos, Joan
2013-01-01
In the current study, we evaluated various robust statistical methods for comparing two independent groups. Two scenarios for simulation were generated: one of equality and another of population mean differences. In each of the scenarios, 33 experimental conditions were used as a function of sample size, standard deviation and asymmetry. For each…
Walzer, Andreas; Schausberger, Peter
2014-01-01
The adaptive canalization hypothesis predicts that highly fitness-relevant traits are canalized via past selection, resulting in low phenotypic plasticity and high robustness to environmental stress. Accordingly, we hypothesized that the level of phenotypic plasticity of male body size of the predatory mites Phytoseiulus persimilis (low plasticity) and Neoseiulus californicus (high plasticity) reflects the effects of body size variation on fitness, especially male lifetime reproductive success (LRS). We first generated small and standard-sized males of P. persimilis and N. californicus by rearing them to adulthood under limited and ample prey supply, respectively. Then, adult small and standard-sized males were provided with surplus virgin females throughout life to assess their mating and reproductive traits. Small male body size did not affect male longevity or the number of fertilized females but reduced male LRS of P. persimilis but not N. californicus. Proximately, the lower LRS of small than standard-sized P. persimilis males correlated with shorter mating durations, probably decreasing the amount of transferred sperm. Ultimately, we suggest that male body size is more strongly canalized in P. persimilis than N. californicus because deviation from standard body size has larger detrimental fitness effects in P. persimilis than N. californicus. © 2014 The Authors. Biological Journal of the Linnean Society published by John Wiley & Sons Ltd on behalf of The Linnean Society of London, Biological Journal of the Linnean Society, 2014, 111, 889–899. PMID:25132689
1 CFR 21.14 - Deviations from standard organization of the Code of Federal Regulations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 1 General Provisions 1 2010-01-01 2010-01-01 false Deviations from standard organization of the... CODIFICATION General Numbering § 21.14 Deviations from standard organization of the Code of Federal Regulations. (a) Any deviation from standard Code of Federal Regulations designations must be approved in advance...
Statistical considerations for grain-size analyses of tills
Jacobs, A.M.
1971-01-01
Relative percentages of sand, silt, and clay from samples of the same till unit are not identical because of different lithologies in the source areas, sorting in transport, random variation, and experimental error. Random variation and experimental error can be isolated from the other two as follows. For each particle-size class of each till unit, a standard population is determined by using a normally distributed, representative group of data. New measurements are compared with the standard population and, if they compare satisfactorily, the experimental error is not significant and random variation is within the expected range for the population. The outcome of the comparison depends on numerical criteria derived from a graphical method rather than on a more commonly used one-way analysis of variance with two treatments. If the number of samples and the standard deviation of the standard population are substituted in a t-test equation, a family of hyperbolas is generated, each of which corresponds to a specific number of subsamples taken from each new sample. The axes of the graphs of the hyperbolas are the standard deviation of new measurements (horizontal axis) and the difference between the means of the new measurements and the standard population (vertical axis). The area between the two branches of each hyperbola corresponds to a satisfactory comparison between the new measurements and the standard population. Measurements from a new sample can be tested by plotting their standard deviation vs. difference in means on axes containing a hyperbola corresponding to the specific number of subsamples used. If the point lies between the branches of the hyperbola, the measurements are considered reliable. But if the point lies outside this region, the measurements are repeated. 
Because the critical segment of the hyperbola is approximately a straight line parallel to the horizontal axis, the test is simplified to a comparison between the means of the standard population and the means of the subsample. The minimum number of subsamples required to prove significant variation between samples caused by different lithologies in the source areas and sorting in transport can be determined directly from the graphical method. The minimum number of subsamples required is the maximum number to be run for economy of effort. ?? 1971 Plenum Publishing Corporation.
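The simplified comparison of subsample means against the standard population reduces to a one-sample t test. A sketch with hypothetical sand-fraction percentages; the critical value 3.182 is the two-sided 95% point for 3 degrees of freedom:

```python
import math

def t_statistic(sample_mean, sample_sd, n, population_mean):
    """One-sample t statistic comparing n subsample measurements
    with the standard-population mean."""
    return (sample_mean - population_mean) / (sample_sd / math.sqrt(n))

# Hypothetical sand-fraction percentages for 4 subsamples of one till sample
sub = [42.0, 44.5, 43.0, 41.5]
n = len(sub)
mean = sum(sub) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sub) / (n - 1))
t = t_statistic(mean, sd, n, population_mean=43.0)

# Plotting (sd, |mean difference|) inside the hyperbola branches corresponds
# to |t| not exceeding the critical value t_{0.975, n-1} = 3.182 for n = 4.
accepted = abs(t) <= 3.182
```

A point between the hyperbola branches in the paper's graphical method is exactly the `accepted` case here: the new measurements are consistent with the standard population, so random variation and experimental error are within the expected range.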
Estimating Mixed Broadleaves Forest Stand Volume Using Dsm Extracted from Digital Aerial Images
NASA Astrophysics Data System (ADS)
Sohrabi, H.
2012-07-01
In the mixed old-growth broadleaf Hyrcanian forests, it is difficult to estimate stand volume at plot level from remotely sensed data when LiDAR data are absent. In this paper, a new approach is proposed and tested for estimating forest stand volume. The approach is based on the idea that forest volume can be estimated from the variation of tree heights within a plot; in other words, the greater the height variation in a plot, the greater the expected stand volume. To test this idea, 120 circular 0.1 ha sample plots were collected in a systematic random design in the Tonekaon forest, located in the Hyrcanian zone. A digital surface model (DSM) measures the height of the first surface on the ground, including terrain features, trees, buildings, etc., providing a topographic model of the earth's surface. The DSMs were extracted automatically from aerial UltraCamD images, with ground pixel sizes varied from 1 to 10 m in 1 m steps, and were checked manually for probable errors. For the pixels corresponding to each ground sample, the standard deviation and range of DSM heights were calculated. Non-linear regression was used for modeling. The results showed that the standard deviation of plot pixels at 5 m resolution was the most appropriate predictor. The relative bias and RMSE of estimation were 5.8 and 49.8 percent, respectively. Compared to other approaches for estimating stand volume from passive remote sensing data in mixed broadleaf forests, these results are encouraging. One major problem with this method occurs when the tree canopy cover is completely closed: in that situation, the standard deviation of height is low while stand volume is high. In future studies, applying forest stratification could be investigated.
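The proposed relationship (stand volume growing with DSM height variation) can be sketched as a power-law fit on log-transformed data. The plot values below are hypothetical and the paper's actual non-linear model is not specified in the abstract; this is one common choice.

```python
import math

# Hypothetical plot data: std of DSM heights (m) vs measured stand volume (m3/ha)
std_h = [2.0, 4.0, 6.0, 8.0]
volume = [80.0, 220.0, 400.0, 610.0]

# Fit the power model V = a * s^b by ordinary least squares on logs:
# ln V = ln a + b * ln s
xs = [math.log(s) for s in std_h]
ys = [math.log(v) for v in volume]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - b * xbar)
predicted = [a * s ** b for s in std_h]
```

An exponent b greater than 1 encodes the abstract's claim that volume rises faster than linearly with height variation; with real plot data the fit quality (bias, RMSE) would be assessed against held-out plots.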
Effectiveness of various innovative learning methods in health science classrooms: a meta-analysis.
Kalaian, Sema A; Kasim, Rafa M
2017-12-01
This study reports the results of a meta-analysis of the available literature on the effectiveness of various forms of innovative small-group learning methods on student achievement in undergraduate college health science classrooms. The results of the analysis revealed that most of the primary studies supported the effectiveness of the small-group learning methods in improving students' academic achievement, with an overall weighted average effect size of 0.59 in standard deviation units favoring small-group learning methods. The subgroup analysis showed that the various forms of innovative and reform-based small-group learning interventions appeared to be significantly more effective for students in higher levels of college classes (sophomore, junior, and senior levels), students in other countries (non-U.S.) worldwide, students in groups of four or fewer, and students who chose their own group. The random-effects meta-regression results revealed that the effect sizes were influenced significantly by the instructional duration of the primary studies: studies with longer hours of instruction yielded higher effect sizes, and on average, for every 1 h increase in instruction, the predicted increase in effect size was 0.009 standard deviation units, which is considered a small effect. These results may help health science and nursing educators by providing guidance in identifying the conditions under which various forms of innovative small-group learning pedagogies are collectively more effective than traditional lecture-based teaching instruction.
Upgraded FAA Airfield Capacity Model. Volume 1. Supplemental User’s Guide
1981-02-01
SIGMAR (F4.0) cc 1-4 - standard deviation, in seconds, of arrival runway occupancy time (R.O.T.). SIGMAA (F4.0) cc 5-8 - standard deviation, in seconds ... SIGMAC - the standard deviation of the time from departure clearance to start of roll. SIGMAR - the standard deviation of the arrival runway ...
Skewness and kurtosis analysis for non-Gaussian distributions
NASA Astrophysics Data System (ADS)
Celikoglu, Ahmet; Tirnakli, Ugur
2018-06-01
In this paper we address a number of pitfalls regarding the use of kurtosis as a measure of deviations from the Gaussian. We treat kurtosis in both its standard definition and that which arises in q-statistics, namely q-kurtosis. We have recently shown that the relation proposed by Cristelli et al. (2012) between skewness and kurtosis can only be verified for relatively small data sets, independently of the type of statistics chosen; however it fails for sufficiently large data sets, if the fourth moment of the distribution is finite. For infinite fourth moments, kurtosis is not defined as the size of the data set tends to infinity. For distributions with finite fourth moments, the size, N, of the data set for which the standard kurtosis saturates to a fixed value, depends on the deviation of the original distribution from the Gaussian. Nevertheless, using kurtosis as a criterion for deciding which distribution deviates further from the Gaussian can be misleading for small data sets, even for finite fourth moment distributions. Going over to q-statistics, we find that although the value of q-kurtosis is finite in the range of 0 < q < 3, this quantity is not useful for comparing different non-Gaussian distributed data sets, unless the appropriate q value, which truly characterizes the data set of interest, is chosen. Finally, we propose a method to determine the correct q value and thereby to compute the q-kurtosis of q-Gaussian distributed data sets.
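Standard kurtosis, as discussed in the paper, is the fourth central moment divided by the squared variance. A minimal implementation on a deterministic example:

```python
def kurtosis(data):
    """Standard (Pearson) kurtosis: fourth central moment over squared variance.
    Equals 3 for a Gaussian; 1 is the minimum, attained by a symmetric
    two-point distribution."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n  # variance (biased)
    m4 = sum((x - mean) ** 4 for x in data) / n  # fourth central moment
    return m4 / m2 ** 2

k = kurtosis([1, 1, 1, -1, -1, -1])  # symmetric two-point sample
```

The paper's caution applies here: for heavy-tailed data the sample value of this statistic keeps drifting as N grows (and diverges when the fourth moment is infinite), so comparing small-sample kurtosis values across distributions can mislead.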
A scattering methodology for droplet sizing of e-cigarette aerosols.
Pratte, Pascal; Cosandey, Stéphane; Goujon-Ginglinger, Catherine
2016-10-01
Knowledge of the droplet size distribution of inhalable aerosols is important for predicting aerosol deposition yield at various respiratory tract locations in humans. Optical methodologies are usually preferred over the multi-stage cascade impactor for high-throughput measurements of aerosol particle/droplet size distributions. The objective was to evaluate Laser Aerosol Spectrometer technology, with a polystyrene latex sphere (PSL) calibration curve, for the experimental determination of droplet size distributions in the diameter range typical of commercial e-cigarette aerosols (147-1361 nm). This calibration procedure was tested for a TSI Laser Aerosol Spectrometer (LAS) operating at a wavelength of 633 nm and assessed against model di-ethyl-hexyl-sebacat (DEHS) droplets and e-cigarette aerosols. The PSL size response was measured, and intra- and between-day standard deviations were calculated. DEHS droplet sizes were underestimated by 15-20% by the LAS when the PSL calibration curve was used; however, the intra- and between-day relative standard deviations were < 3%. This bias is attributed to the fact that the index of refraction of the PSL calibration particles differs from that of the test aerosols. This 15-20% does not include the droplet evaporation component, which may reduce droplet size before a measurement is performed. Aerosol concentration was measured accurately, with a maximum uncertainty of 20%. Count median diameters and mass median aerodynamic diameters of selected e-cigarette aerosols ranged from 130-191 nm and from 225-293 nm, respectively, similar to published values. The LAS instrument can be used to measure e-cigarette aerosol droplet size distributions with a bias underestimating the expected value by 15-20% when using a precise PSL calibration curve.
Controlled variability of DEHS size measurements can be achieved with the LAS system; however, this method can only be applied to test aerosols having a refractive index close to that of PSL particles used for calibration.
Martin, Jeffrey D.
2002-01-01
Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability for pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
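Pooling replicate variability as described can be sketched as follows. The replicate concentrations are hypothetical, and weighting each set's squared RSD by its degrees of freedom is one common pooling convention.

```python
import math

def pooled_rsd(replicate_sets):
    """Pooled relative standard deviation from replicate sets:
    combine each set's RSD^2 weighted by its degrees of freedom (n - 1)."""
    num, den = 0.0, 0
    for reps in replicate_sets:
        n = len(reps)
        mean = sum(reps) / n
        var = sum((x - mean) ** 2 for x in reps) / (n - 1)
        rsd = math.sqrt(var) / mean          # relative standard deviation
        num += rsd ** 2 * (n - 1)
        den += n - 1
    return math.sqrt(num / den)

# Hypothetical duplicate pesticide concentrations (micrograms per liter)
rsd = pooled_rsd([[0.10, 0.12], [1.00, 0.95], [0.050, 0.056]])
```

Pooling squared deviations before taking the square root is what keeps the estimate unbiased; averaging the individual RSDs directly would understate the population variability, as the report notes.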
Basic life support: evaluation of learning using simulation and immediate feedback devices.
Tobase, Lucia; Peres, Heloisa Helena Ciqueto; Tomazini, Edenir Aparecida Sartorelli; Teodoro, Simone Valentim; Ramos, Meire Bruna; Polastri, Thatiane Facholi
2017-10-30
To evaluate students' learning in an online course on basic life support with immediate feedback devices, during a simulation of care during cardiorespiratory arrest. A quasi-experimental study, using a before-and-after design. An online course on basic life support was developed and administered to participants as an educational intervention. Theoretical learning was evaluated by means of a pre- and post-test, and, to verify practice, simulation with immediate feedback devices was used. There were 62 participants, 87% female, 90% in the first and second year of college, with a mean age of 21.47 years (standard deviation 2.39). With a 95% confidence level, the mean score was 6.4 in the pre-test (standard deviation 1.61) and 9.3 in the post-test (standard deviation 0.82, p < 0.001); in practice, 9.1 (standard deviation 0.95), with performance equivalent to basic cardiopulmonary resuscitation according to the feedback device; mean duration of the compression cycle of 43.7 (standard deviation 26.86) by second of 20.5 (standard deviation 9.47); number of compressions 167.2 (standard deviation 57.06); depth of compressions 48.1 millimeters (standard deviation 10.49); ventilation volume 742.7 (standard deviation 301.12); flow fraction percentage 40.3 (standard deviation 10.03). The online course contributed to learning of basic life support. In view of the need for technological innovations in teaching and in the systematization of cardiopulmonary resuscitation, simulation and feedback devices are resources that favor learning and performance awareness in performing the maneuvers.
Stopping characteristics of boron and indium ions in silicon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veselov, D. S., E-mail: DSVeselov@mephi.ru; Voronov, Yu. A.
2016-12-15
The mean range and its standard deviation are calculated for boron ions implanted into silicon with energies below 10 keV. Similar characteristics are calculated for indium ions with energies below 200 keV. The obtained results are presented in tabular and graphical forms. These results may help in the assessment of conditions of production of integrated circuits with nanometer-sized elements.
A Comparison of Financial Literacy between Native and Immigrant School Students
ERIC Educational Resources Information Center
Gramațki, Iulian
2017-01-01
This paper investigates the gap in Financial Literacy (FL) between native and immigrant 15-year-old school students using data from the 2012 PISA Financial Literacy Assessment. The size of the gap is about 0.15 standard deviations, going up to 0.3 for first-generation immigrants. This is partly because immigrants have poorer economic background,…
Huang, Emily; Chern, Hueylan; O'Sullivan, Patricia; Cook, Brian; McDonald, Erik; Palmer, Barnard; Liu, Terrence; Kim, Edward
2014-10-01
Knot tying is a fundamental and crucial surgical skill. We developed a kinesthetic pedagogical approach that increases precision and economy of motion by explicitly teaching suture-handling maneuvers and studied its effects on novice performance. Seventy-four first-year medical students were randomized to learn knot tying via either the traditional or the novel "kinesthetic" method. After 1 week of independent practice, students were videotaped performing 4 tying tasks. Three raters scored deidentified videos using a validated visual analog scale. The groups were compared using analysis of covariance with practice knots as a covariate and visual analog scale score (range, 0 to 100) as the dependent variable. Partial eta-square was calculated to indicate effect size. Overall rater reliability was .92. The kinesthetic group scored significantly higher than the traditional group for individual tasks and overall, controlling for practice (all P < .004). The kinesthetic overall mean was 64.15 (standard deviation = 16.72) vs traditional 46.31 (standard deviation = 16.20; P < .001; effect size = .28). For novices, emphasizing kinesthetic suture handling substantively improved performance on knot tying. We believe this effect can be extrapolated to more complex surgical skills. Copyright © 2014 Elsevier Inc. All rights reserved.
Topological inflation with graceful exit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marunović, Anja; Prokopec, Tomislav, E-mail: a.marunovic@uu.nl, E-mail: t.prokopec@uu.nl
We investigate a class of models of topological inflation in which a super-Hubble-sized global monopole seeds inflation. These models are attractive since inflation starts from rather generic initial conditions, but their less attractive feature is that, unless symmetry is again restored, inflation never ends. In this work we show that, in the presence of another nonminimally coupled scalar field, both quadratically and quartically coupled to the Ricci scalar, inflation naturally ends, representing an elegant solution to the graceful exit problem of topological inflation. While the monopole core grows during inflation, the growth stops after inflation, such that the monopole eventually enters the Hubble radius and shrinks to its Minkowski-space size, rendering it immaterial for the subsequent dynamics of the Universe. Furthermore, we find that our model can produce cosmological perturbations that source CMB temperature fluctuations and seed large scale structure statistically consistent (within one standard deviation) with all available data. In particular, for small and (in our convention) negative nonminimal couplings, the scalar spectral index can be as large as n_s ≅ 0.955, which is about one standard deviation lower than the central value quoted by the most recent Planck Collaboration.
SU-E-J-188: Theoretical Estimation of Margin Necessary for Markerless Motion Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, R; Block, A; Harkenrider, M
2015-06-15
Purpose: To estimate the margin necessary to adequately cover the target using markerless motion tracking (MMT) of lung lesions, given the uncertainty in tracking and the size of the target. Methods: Simulations were developed in Matlab to determine the effect of tumor size and tracking uncertainty on the margin necessary to achieve adequate coverage of the target. For simplicity, the lung tumor was approximated by a circle on a 2D radiograph. The tumor diameter was varied from 0.1 to 30 mm in increments of 0.1 mm. From our previous studies using dual-energy markerless motion tracking, we estimated tracking uncertainties in x and y to have a standard deviation of 2 mm. A Gaussian was used to simulate the deviation between the tracked location and the true target location. For each tumor size, 100,000 deviations were randomly generated, and the margin necessary to achieve at least 95% coverage 95% of the time was recorded. Additional simulations were run for varying uncertainties to demonstrate the effect of the tracking accuracy on the margin size. Results: The simulations showed an inverse relationship between tumor size and the margin necessary to achieve 95% coverage 95% of the time using the MMT technique. The margin decreased exponentially with target size. An increase in tracking accuracy expectedly showed a decrease in margin size as well. Conclusion: In our clinic a 5 mm expansion of the internal target volume (ITV) is used to define the planning target volume (PTV). These simulations show that for tracking accuracies in x and y better than 2 mm, the required margin is less than 5 mm. This simple simulation can provide physicians with a guideline estimate of the margin necessary for clinical use of MMT, based on the accuracy of their tracking and the size of the tumor.
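The Monte Carlo procedure described can be sketched in a few lines. Note this sketch replaces the abstract's 95%-area-coverage criterion with a simpler radial-offset criterion (the margin within which the tracked-vs-true offset falls 95% of the time), so the numbers are illustrative only.

```python
import math
import random

def margin_for_coverage(sigma_xy=2.0, trials=100_000, quantile=0.95, seed=1):
    """Monte Carlo sketch: margin (mm) such that the radial tracking error
    stays within the margin in `quantile` of trials, for independent Gaussian
    errors of standard deviation sigma_xy in x and y."""
    rng = random.Random(seed)
    radii = sorted(
        math.hypot(rng.gauss(0, sigma_xy), rng.gauss(0, sigma_xy))
        for _ in range(trials)
    )
    return radii[int(quantile * trials)]

m = margin_for_coverage()  # ~ sigma * sqrt(-2 ln 0.05), about 4.9 mm for sigma = 2
```

For independent 2 mm Gaussian errors the radial error is Rayleigh-distributed, so the simulated margin approaches the analytic 95th percentile; the full simulation additionally credits the tumor's own radius toward coverage, which is why larger tumors need smaller margins.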
A comparative appraisal of two equivalence tests for multiple standardized effects.
Shieh, Gwowen
2016-04-01
Equivalence testing is recommended as a better alternative to the traditional difference-based methods for demonstrating the comparability of two or more treatment effects. Although equivalence tests of two groups are widely discussed, the natural extensions for assessing equivalence between several groups have not been well examined. This article provides a detailed and schematic comparison of the ANOVA F and the studentized range tests for evaluating the comparability of several standardized effects. Power and sample size appraisals of the two grossly distinct approaches are conducted in terms of a constraint on the range of the standardized means when the standard deviation of the standardized means is fixed. Although neither method is uniformly more powerful, the studentized range test has a clear advantage in the sample size required to achieve a given power when the underlying effect configurations are close to the a priori minimum difference for determining equivalence. For actual application of equivalence tests and advance planning of equivalence studies, both SAS and R computer codes are available as supplementary files to implement the calculations of critical values, p-values, power levels, and sample sizes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Properties of Smoke from Overheated Materials in Low-Gravity
NASA Technical Reports Server (NTRS)
Urban, David L.; Ruff, Gary A.; Sheredy, William; Cleary, Thomas; Yang, Jiann; Mulholland, George; Yuan, Zeng-Guang
2009-01-01
Smoke particle size measurements were obtained under low-gravity conditions by overheating several materials typical of those found in spacecraft. The measurements included integral measurements of the smoke particles and physical samples of the particles for transmission electron microscope analysis. The integral moments were combined to obtain geometric mean particle sizes and geometric standard deviations. These results are presented with the details of the instrument calibrations. The experimental results show that, for the materials tested, a substantial portion of the smoke particles are below 500 nm in diameter.
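The geometric mean and geometric standard deviation referenced above are simply the exponentiated mean and standard deviation of the log-transformed sizes. A minimal sketch, assuming an illustrative lognormal size distribution (200 nm median, GSD 1.8 — our numbers, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical smoke-particle diameters (nm); lognormal shape assumed.
diam = rng.lognormal(mean=np.log(200.0), sigma=np.log(1.8), size=50_000)

# Geometric mean diameter and geometric standard deviation come from the
# mean and standard deviation of the log-transformed sizes.
log_d = np.log(diam)
gmd = float(np.exp(log_d.mean()))
gsd = float(np.exp(log_d.std()))

frac_submicron = float((diam < 500.0).mean())  # portion below 500 nm
```

For a lognormal population, the GSD is dimensionless and multiplicative: roughly 68% of particles lie between gmd/gsd and gmd*gsd.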
A Model Independent General Search for new physics in ATLAS
NASA Astrophysics Data System (ADS)
Amoroso, S.; ATLAS Collaboration
2016-04-01
We present results of a model-independent general search for new phenomena in proton-proton collisions at a centre-of-mass energy of 8 TeV with the ATLAS detector at the LHC. The data set corresponds to a total integrated luminosity of 20.3 fb⁻¹. Event topologies involving isolated electrons, photons, and muons, as well as jets, including those identified as originating from b-quarks (b-jets), and missing transverse momentum are investigated. The events are subdivided according to their final states into exclusive event classes. For the 697 classes with a Standard Model expectation greater than 0.1 events, a search algorithm tests the compatibility of data against the Monte Carlo simulated background in three kinematic variables sensitive to new physics effects. No significant deviation is found in data. The number and size of the observed deviations follow the Standard Model expectation obtained from simulated pseudo-experiments.
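The logic of scanning many event classes and calibrating the smallest deviation with pseudo-experiments can be sketched with a toy Poisson counting model. Everything here is an assumption for illustration: the expectations and observed counts are invented, systematic uncertainties are ignored, and only excesses (not deficits) are scored, unlike the full ATLAS procedure.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def poisson_tail(n_obs, b):
    """Local p-value of an excess: P(N >= n_obs) for N ~ Poisson(b)."""
    cdf = sum(math.exp(k * math.log(b) - b - math.lgamma(k + 1))
              for k in range(n_obs))
    return max(1.0 - cdf, 0.0)

# Toy Standard Model expectations and observed counts per event class.
expectations = np.array([0.2, 1.5, 4.0, 12.0, 30.0])
observed = np.array([1, 2, 8, 10, 33])

local_p = np.array([poisson_tail(n, b) for n, b in zip(observed, expectations)])
p_min = float(local_p.min())

# Global significance: how often background-only pseudo-experiments
# produce an equally small minimum p-value anywhere in the scan.
n_pseudo = 5_000
pseudo = rng.poisson(expectations, size=(n_pseudo, len(expectations)))
pseudo_pmin = np.array([
    min(poisson_tail(int(n), b) for n, b in zip(row, expectations))
    for row in pseudo
])
global_p = float((pseudo_pmin <= p_min).mean())
```

The pseudo-experiment step is the look-elsewhere correction: a locally interesting p-value becomes unremarkable once one accounts for having scanned many classes.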
New device for accurate measurement of the x-ray intensity distribution of x-ray tube focal spots.
Doi, K; Fromes, B; Rossmann, K
1975-01-01
A new device has been developed with which the focal spot distribution can be measured accurately. The alignment and localization of the focal spot relative to the device are accomplished by adjustment of three micrometer screws in three orthogonal directions and by comparison of red reference light spots with green fluorescent pinhole images at five locations. The standard deviations for evaluating the reproducibility of the adjustments in the horizontal and vertical directions were 0.2 and 0.5 mm, respectively. Measurements were made of the pinhole images as well as of the line-spread functions (LSFs) and modulation transfer functions (MTFs) for an x-ray tube with focal spots of 1-mm and 50-μm nominal size. The standard deviations for the LSF and MTF of the 1-mm focal spot were 0.017 and 0.010, respectively.
On the Use of a Range Trigger for the Mars Science Laboratory Entry Descent and Landing
NASA Technical Reports Server (NTRS)
Way, David W.
2011-01-01
In 2012, during the Entry, Descent, and Landing (EDL) of the Mars Science Laboratory (MSL) entry vehicle, a 21.5 m Viking-heritage, Disk-Gap-Band, supersonic parachute will be deployed at approximately Mach 2. The baseline algorithm for commanding this parachute deployment is a navigated planet-relative velocity trigger. This paper compares the performance of an alternative range-to-go trigger (sometimes referred to as "Smart Chute"), which can significantly reduce the landing footprint size. Numerical Monte Carlo results, predicted by the POST2 MSL End-to-End EDL simulation, are corroborated and explained by applying propagation of uncertainty methods to develop an analytic estimate for the standard deviation of Mach number. A negative correlation is shown to exist between the standard deviations of wind velocity and the planet-relative velocity at parachute deploy, which mitigates the Mach number rise in the case of the range trigger.
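The propagation-of-uncertainty step can be sketched with the first-order (delta-method) variance formula. This is our own minimal illustration: the sign convention (wind adding to the planet-relative velocity so that negative correlation reduces the airspeed variance), the speed of sound, and all sigma values are assumptions, not MSL numbers.

```python
import numpy as np

# Delta-method propagation for M = (v + w) / a, where v is the
# planet-relative velocity, w the along-track wind, and a the speed of
# sound (all treated as given constants/uncertainties here).
def mach_std(sigma_v, sigma_w, rho, a):
    var = (sigma_v**2 + sigma_w**2 + 2.0 * rho * sigma_v * sigma_w) / a**2
    return float(np.sqrt(var))

a = 240.0                                   # m/s, assumed speed of sound
s_uncorr = mach_std(10.0, 15.0, 0.0, a)     # independent wind and velocity
s_anticorr = mach_std(10.0, 15.0, -0.5, a)  # negatively correlated

# Monte Carlo cross-check of the analytic estimate.
rng = np.random.default_rng(3)
cov = [[10.0**2, -0.5 * 10.0 * 15.0], [-0.5 * 10.0 * 15.0, 15.0**2]]
v, w = rng.multivariate_normal([400.0, 0.0], cov, size=200_000).T
mc = float((v + w).std() / a)
```

The cross term 2*rho*sigma_v*sigma_w is the mechanism the abstract describes: with rho < 0 it subtracts from the variance, so the Mach number spread is smaller than the uncorrelated estimate.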
Experimental comparison of icing cloud instruments
NASA Technical Reports Server (NTRS)
Olsen, W.; Takeuchi, D. M.; Adams, K.
1983-01-01
Icing cloud instruments were tested in the spray cloud of the Icing Research Tunnel (IRT) in order to determine their relative accuracy and their limitations over a broad range of conditions. It was found that the average of the readings from each of the liquid water content (LWC) instruments tested agreed closely with each other and with the IRT calibration, but all have a data scatter (±one standard deviation) of about ±20 percent. The effect of this ±20 percent uncertainty is probably acceptable in aero-penalty and deicer experiments. Existing laser spectrometers proved to be too inaccurate for LWC measurements. The error due to water runoff was the same for all ice accretion LWC instruments. Any given laser spectrometer proved to be highly repeatable in its indications of volume median drop size (DVM), LWC, and drop size distribution. However, there was a significant disagreement between different spectrometers of the same model, even after careful standard calibration and data analysis. The scatter about the mean of the DVM data from five Axial Scattering Spectrometer Probes was ±20 percent (±one standard deviation), and the average was 20 percent higher than the old IRT calibration. The ±20 percent uncertainty in DVM can cause an unacceptable variation in the drag coefficient of an airfoil with ice; however, the variation in a deicer performance test may be acceptable.
Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie
2013-08-01
The method used to determine the choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimation of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine the population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) statement was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than a 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. 
When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
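The core mechanism above — a pilot sample's SD tending to sit below the population SD, which then shrinks the planned sample size — can be demonstrated in a few lines. A minimal sketch, assuming a pilot sample of 10 (our choice) and the standard normal-approximation sample-size formula; the population SD of 44 is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(4)

POP_SD = 44.0      # population SD used in the article's simulations
N_PILOT = 10       # hypothetical pilot-sample size (our assumption)
N_SIMS = 20_000

# How often does a pilot sample's SD underestimate the population SD?
pilot_sds = rng.normal(0.0, POP_SD, (N_SIMS, N_PILOT)).std(axis=1, ddof=1)
frac_under = float((pilot_sds < POP_SD).mean())

# Per-group sample size for detecting Cohen's d = 0.5 with 80% power,
# two-sided alpha = 0.05: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2.
Z_A, Z_B = 1.959964, 0.841621

def n_per_group(sd, diff):
    d = diff / sd
    return int(np.ceil(2.0 * ((Z_A + Z_B) / d) ** 2))

true_n = n_per_group(POP_SD, 0.5 * POP_SD)  # planning with the true SD
planned_n = np.array([n_per_group(s, 0.5 * POP_SD) for s in pilot_sds])
frac_underpowered = float((planned_n < true_n).mean())
```

Because the sampling distribution of the SD is right-skewed, more than half of pilot samples fall below the population SD, so more than half of the resulting plans ask for fewer subjects than the true-SD calculation — the same direction of bias the article quantifies.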
NASA Astrophysics Data System (ADS)
Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.
1992-10-01
The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, in the case when the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates the radiological measurements with atmospheric dispersion modeling. This results in more accurate estimation of the particle-size distribution and particle injection height when compared with measurements of high explosive dispersal of (239)Pu. The estimation model is based on a non-linear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimation of ADPIC model input parameters such as the ADPIC particle size mean aerodynamic diameter, the geometric standard deviation, and the largest size. Additionally, we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model-predicted concentrations of (239)Pu by varying the model input parameters within their respective ranges of uncertainties. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation of the particle size, as well as a slightly weaker particle-to-cloud coupling than previously reported.
Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven
2010-09-01
To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.
Development and validation of age-dependent FE human models of a mid-sized male thorax.
El-Jawahri, Raed E; Laituri, Tony R; Ruan, Jesse S; Rouhana, Stephen W; Barbat, Saeed D
2010-11-01
The increasing number of people over 65 years old (YO) is an important research topic in the area of impact biomechanics, and finite element (FE) modeling can provide valuable support for related research. There were three objectives of this study: (1) Estimation of the representative age of the previously-documented Ford Human Body Model (FHBM) -- an FE model which approximates the geometry and mass of a mid-sized male, (2) Development of FE models representing two additional ages, and (3) Validation of the resulting three models to the extent possible with respect to available physical tests. Specifically, the geometry of the model was compared to published data relating rib angles to age, and the mechanical properties of different simulated tissues were compared to a number of published aging functions. The FHBM was determined to represent a 53-59 YO mid-sized male. The aforementioned aging functions were used to develop FE models representing two additional ages: 35 and 75 YO. The rib model was validated against human rib specimens and whole rib tests, under different loading conditions, with and without modeled fracture. In addition, the resulting three age-dependent models were validated by simulating cadaveric tests of blunt and sled impacts. The responses of the models, in general, were within the cadaveric response corridors. When compared to peak responses from individual cadavers similar in size and age to the age-dependent models, some responses were within one standard deviation of the test data. All other responses but one were within two standard deviations.
Combined plasma gas-phase synthesis and colloidal processing of InP/ZnS core/shell nanocrystals.
Gresback, Ryan; Hue, Ryan; Gladfelter, Wayne L; Kortshagen, Uwe R
2011-01-12
Indium phosphide nanocrystals (InP NCs) with diameters ranging from 2 to 5 nm were synthesized with a scalable, flow-through, nonthermal plasma process at a rate ranging from 10 to 40 mg/h. The NC size is controlled through the plasma operating parameters, with the residence time of the gas in the plasma region strongly influencing the NC size. The NC size distribution is narrow, with the standard deviation being less than 20% of the mean NC size. Zinc sulfide (ZnS) shells were grown around the plasma-synthesized InP NCs in a liquid-phase reaction. Photoluminescence with quantum yields as high as 15% was observed for the InP/ZnS core-shell NCs.
Evaluation of methods for measuring particulate matter emissions from gas turbines.
Petzold, Andreas; Marsh, Richard; Johnson, Mark; Miller, Michael; Sevcenco, Yura; Delhaye, David; Ibrahim, Amir; Williams, Paul; Bauer, Heidi; Crayford, Andrew; Bachalo, William D; Raper, David
2011-04-15
The project SAMPLE evaluated methods for measuring particle properties in the exhaust of aircraft engines with respect to the development of standardized operating procedures for particulate matter measurement in the aviation industry. Filter-based off-line mass methods included gravimetry and chemical analysis of carbonaceous species by combustion methods. Online mass methods were based on light absorption measurement or used size distribution measurements obtained from an electrical mobility analyzer approach. Number concentrations were determined using different condensation particle counters (CPC). Total mass from filter-based methods balanced gravimetric mass within 8% error. Carbonaceous matter accounted for 70% of gravimetric mass, while the remaining 30% was attributed to hydrated sulfate and noncarbonaceous organic matter fractions. Online methods were closely correlated over the entire range of emission levels studied in the tests. Elemental carbon from combustion methods and black carbon from optical methods deviated by at most 5% with respect to mass for low to medium emission levels, whereas for high emission levels a systematic deviation between online methods and filter-based methods was found, which is attributed to sampling effects. CPC-based instruments proved highly reproducible for number concentration measurements, with a maximum interinstrument standard deviation of 7.5%.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 1: January
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of January. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all for 13 levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation for the 13 levels; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Stoliker, Deborah L.; Liu, Chongxuan; Kent, Douglas B.; Zachara, John M.
2013-01-01
Rates of U(VI) release from individual dry-sieved size fractions of a field-aggregated, field-contaminated composite sediment from the seasonally saturated lower vadose zone of the Hanford 300-Area were examined in flow-through reactors to maintain quasi-constant chemical conditions. The principal source of variability in equilibrium U(VI) adsorption properties of the various size fractions was the impact of variable chemistry on adsorption. This source of variability was represented using surface complexation models (SCMs) with different stoichiometric coefficients with respect to hydrogen ion and carbonate concentrations for the different size fractions. A reactive transport model incorporating equilibrium expressions for cation exchange and calcite dissolution, along with rate expressions for aerobic respiration and silica dissolution, described the temporal evolution of solute concentrations observed during the flow-through reactor experiments. Kinetic U(VI) desorption was well described using a multirate SCM with an assumed lognormal distribution for the mass-transfer rate coefficients. The estimated mean and standard deviation of the rate coefficients were the same for all <2 mm size fractions but differed for the 2–8 mm size fraction. Micropore volumes, assessed using t-plots to analyze N2 desorption data, were also the same for all dry-sieved <2 mm size fractions, indicating a link between micropore volumes and mass-transfer rate properties. Pore volumes for dry-sieved size fractions exceeded values for the corresponding wet-sieved fractions. We hypothesize that repeated field wetting and drying cycles lead to the formation of aggregates and/or coatings containing (micro)pore networks which provided an additional mass-transfer resistance over that associated with individual particles. 
The 2–8 mm fraction exhibited a larger average and standard deviation in the distribution of mass-transfer rate coefficients, possibly caused by the abundance of microporous basaltic rock fragments.
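The multirate idea above — sorbed mass spread across sites with a lognormal distribution of first-order rate coefficients — can be illustrated in a few lines. The median rate, the two spreads, and the elapsed time below are illustrative assumptions, not fitted Hanford values.

```python
import numpy as np

# Multirate mass-transfer sketch: the sorbed U(VI) pool is split across
# many sites whose first-order rate coefficients follow a lognormal
# distribution, as in a multirate surface complexation model.
def remaining_fraction(t, mu_ln, sigma_ln, n_sites=20_000, seed=5):
    rng = np.random.default_rng(seed)
    k = rng.lognormal(mu_ln, sigma_ln, n_sites)  # site rate coefficients
    return float(np.mean(np.exp(-k * t)))        # average first-order decay

narrow = remaining_fraction(t=50.0, mu_ln=np.log(0.1), sigma_ln=0.5)
broad = remaining_fraction(t=50.0, mu_ln=np.log(0.1), sigma_ln=2.0)
```

With the same median rate, the broader distribution leaves more mass sorbed at long times: its slow-site tail dominates late-time desorption, which is why the estimated standard deviation of the rate coefficients matters as much as the mean.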
Multiple-wavelength transmission measurements in rocket motor plumes
NASA Astrophysics Data System (ADS)
Kim, Hong-On
1991-09-01
Multiple-wavelength light transmission measurements were used to measure the mean particle size (d32), index of refraction (m), and standard deviation of the small particles in the edge of the plume of a small solid propellant rocket motor. The results have shown that the multiple-wavelength light transmission measurement technique can be used to obtain these variables. The technique was shown to be more sensitive to changes in d32 and standard deviation (sigma) than to m. A GAP/AP/4.7 percent aluminum propellant burned at 25 atm produced particles with d32 = 0.150 ± 0.006 microns, standard deviation = 1.50 ± 0.04, and m = 1.63 ± 0.13. The good correlation of the data indicated that only submicron particles were present in the edge of the plume. In today's budget-conscious industry, the solid propellant rocket motor is an ideal propulsion system due to its low cost and simplicity. The major obstacle for solid rocket motors, however, is their limited specific impulse compared to airbreathing motors. One way to help overcome this limitation is to utilize metal fuel additives. Solid propellant rocket motors can achieve high specific impulse with metal fuel additives such as aluminum. Aluminum propellants also increase propellant densities and suppress transverse modes of combustion oscillations by damping the oscillations with the aluminum agglomerates in the combustion chamber.
Evaluation of Small-Sized Platinum Resistance Thermometers with ITS-90 Characteristics
NASA Astrophysics Data System (ADS)
Yamazawa, K.; Anso, K.; Widiatmo, J. V.; Tamba, J.; Arai, M.
2011-12-01
Many platinum resistance thermometers (PRTs) are applied for high-precision temperature measurements in industry. Most of the applications use PRTs that follow the industrial standard for PRTs, IEC 60751. However, recently, some applications, such as measurements of the temperature distribution within equipment, require a more precise temperature scale at the 0.01 °C level. In this article the evaluation of remarkably small-sized PRTs that have temperature-resistance characteristics very close to those of standard PRTs of the International Temperature Scale of 1990 (ITS-90) is reported. Two types of the sensing element were tested: one is 1.2 mm in diameter and 10 mm long, the other 0.8 mm and 8 mm. The resistance of the sensor is 100 Ω at the triple-point-of-water temperature. The resistance ratio at the Ga melting-point temperature of the sensing elements exceeds 1.11807. To verify the closeness of the temperature-resistance characteristics, comparison measurements up to 157 °C were employed. A pressure-controlled water heat-pipe furnace was used for the comparison measurement. Characteristics of 19 thermometers with these small-sized sensing elements were evaluated. The deviation from the temperature measured using a standard PRT as a reference thermometer in the comparison was remarkably small when the same interpolating function for the ITS-90 sub-range was applied to these small thermometers. Results including the stability of the PRTs, the uncertainty evaluation of the comparison measurements, and the comparison results showing the small deviation from the ITS-90 temperature-resistance characteristics are reported. The development of such a PRT might be a good solution for applications such as temperature measurements of small objects or temperature distribution measurements that need the ITS-90 temperature scale.
Monitor unit settings for intensity modulated beams delivered using a step-and-shoot approach.
Sharpe, M B; Miller, B M; Yan, D; Wong, J W
2000-12-01
Two linear accelerators have been commissioned for delivering IMRT treatments using a step-and-shoot approach. To assess beam startup stability for 6 and 18 MV x-ray beams, dose delivered per monitor unit (MU), beam flatness, and beam symmetry were measured as a function of the total number of MU delivered at a clinical dose rate of 400 MU per minute. Relative to a 100 MU exposure, the dose delivered per MU by both linear accelerators was found to be within +/-2% for exposures larger than 4 MU. Beam flatness and symmetry also met accepted quality assurance standards for a minimum exposure of 4 MU. We have found that the performance of the two machines under study is well suited to the delivery of step-and-shoot IMRT. A system of dose calculation has also been commissioned for applying head scatter corrections to fields as small as 1x1 cm2. The accuracy and precision of the relative output calculations in water was validated for small fields and fields offset from the axis of collimator rotation. For both 6 and 18 MV x-ray beams, the dose per MU calculated in a water phantom agrees with measured data to within 1% on average, with a maximum deviation of 2.5%. The largest output factor discrepancies were seen when the actual radiation field size deviated from the set field size. The measured output in water can vary by as much as 16% for 1x1 cm2 fields when the measured field size deviates from the set field size by 2 mm. For a 1 mm deviation, this discrepancy was reduced to 8%. Steps should be taken to ensure collimator precision is tightly controlled when using such small fields. If this is not possible, very small fields should not contribute a significant portion of the treatment, or uncertainties in the collimator position may affect the accuracy of the dose delivered.
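The startup-stability criterion above reduces to a simple tolerance check against the 100 MU reference. A QA-style sketch, using the 2% figure quoted in the text; the readings themselves are illustrative numbers, not measured linac data.

```python
# Flag exposures whose dose per MU deviates from the 100 MU reference
# by more than the quoted 2% tolerance.
REFERENCE = 1.000  # relative dose/MU at 100 MU

# Hypothetical relative dose/MU readings, keyed by total MU delivered.
readings = {2: 0.970, 4: 0.985, 10: 0.995, 50: 0.999, 100: 1.000}

def within_tolerance(dose_per_mu, tol=0.02):
    """True if the reading is within +/-tol of the reference."""
    return abs(dose_per_mu / REFERENCE - 1.0) <= tol

flags = {mu: within_tolerance(d) for mu, d in readings.items()}
```

With these illustrative values, only the 2 MU exposure fails — consistent with the reported behavior of being within +/-2% for exposures larger than 4 MU.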
Generic dynamical phase transition in one-dimensional bulk-driven lattice gases with exclusion
NASA Astrophysics Data System (ADS)
Lazarescu, Alexandre
2017-06-01
Dynamical phase transitions are crucial features of the fluctuations of statistical systems, corresponding to boundaries between qualitatively different mechanisms of maintaining unlikely values of dynamical observables over long periods of time. They manifest themselves in the form of non-analyticities in the large deviation function of those observables. In this paper, we look at bulk-driven exclusion processes with open boundaries. It is known that the standard asymmetric simple exclusion process exhibits a dynamical phase transition in the large deviations of the current of particles flowing through it. That phase transition has been described thanks to specific calculation methods relying on the model being exactly solvable, but more general methods have also been used to describe the extreme large deviations of that current, far from the phase transition. We extend those methods to a large class of models based on the ASEP, where we add arbitrary spatial inhomogeneities in the rates and short-range potentials between the particles. We show that, as for the regular ASEP, the large deviation function of the current scales differently with the size of the system if one considers very high or very low currents, pointing to the existence of a dynamical phase transition between those two regimes: high current large deviations are extensive in the system size, and the typical states associated to them are Coulomb gases, which are highly correlated; low current large deviations do not depend on the system size, and the typical states associated to them are anti-shocks, consistently with a hydrodynamic behaviour. Finally, we illustrate our results numerically on a simple example, and we interpret the transition in terms of the current pushing beyond its maximal hydrodynamic value, as well as relate it to the appearance of Tracy-Widom distributions in the relaxation statistics of such models. 
Mancia, G; Ferrari, A; Gregorini, L; Parati, G; Pomidossi, G; Bertinieri, G; Grassi, G; Zanchetti, A
1980-12-01
1. Intra-arterial blood pressure and heart rate were recorded for 24 h in ambulant hospitalized patients of variable age who had normal blood pressure or essential hypertension. Mean 24 h values, standard deviations and variation coefficients were obtained as the averages of values separately analysed for 48 consecutive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for heart rate were smaller. 3. In hypertensive subjects standard deviation for mean arterial pressure was greater than in normotensive subjects of similar ages, but this was not the case for variation coefficient, which was slightly smaller in the former than in the latter group. Normotensive and hypertensive subjects showed no difference in standard deviation and variation coefficient for heart rate. 4. In both normotensive and hypertensive subjects standard deviation and even more so variation coefficient were slightly or not related to arterial baroreflex sensitivity as measured by various methods (phenylephrine, neck suction etc.). 5. It is concluded that blood pressure variability increases and heart rate variability decreases with age, but that changes in variability are not so obvious in hypertension. Also, differences in variability among subjects are only marginally explained by differences in baroreflex function.
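The distinction the study draws between standard deviation and variation coefficient is easy to see numerically: the variation coefficient (CV = SD / mean) normalizes variability by the pressure level, which is why hypertensive subjects can show a larger SD yet a similar or smaller CV. The half-hourly mean pressures below are illustrative, not patient data.

```python
import numpy as np

# Illustrative half-hourly mean arterial pressures (mmHg).
normo = np.array([92.0, 95.0, 88.0, 90.0, 97.0, 85.0])
hyper = normo + 40.0   # same absolute spread at a higher pressure level

# Variation coefficient = SD / mean.
cv_normo = float(normo.std(ddof=1) / normo.mean())
cv_hyper = float(hyper.std(ddof=1) / hyper.mean())
```

The two series have identical standard deviations, yet the hypertensive series has the smaller CV — the same absolute fluctuation is a smaller fraction of a higher mean pressure.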
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 7: July
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of July. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 10: October
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of October. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 3: March
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-11-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of March. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation for levels 1000 through 30 mb; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 2: February
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-09-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of February. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 4: April
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of April. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Radiation Doses to Skin from Dermal Contamination
2010-10-01
included studies of deposition of particles on skin, hair or clothing of human volunteers and on samples of rat skin or other materials (filter paper ...). Particle size probably is the most important parameter that affects interception and retention on skin. In a theoretical part of their paper, Asset and ... about 20% of the particles of either diameter (standard deviation about 11%) from such surfaces as cotton, paper, wood, or plastic. The efficiency ...
Sterile Basics of Compounding: Relationship Between Syringe Size and Dosing Accuracy.
Kosinski, Tracy M; Brown, Michael C; Zavala, Pedro J
2018-01-01
The purpose of this study was to investigate the accuracy and reproducibility of a 2-mL volume injection using a 3-mL and 10-mL syringe with pharmacy student compounders. An exercise was designed to assess each student's accuracy in compounding a sterile preparation with the correct 4-mg strength using a 3-mL and 10-mL syringe. The average ondansetron dose when compounded with the 3-mL syringe was 4.03 mg (standard deviation ± 0.45 mg), which was not statistically significantly different from the intended 4-mg desired dose (P=0.497). The average ondansetron dose when compounded with the 10-mL syringe was 4.18 mg (standard deviation ± 0.68 mg), which was statistically significantly different from the intended 4-mg desired dose (P=0.002). Additionally, there also was a statistically significant difference between the average ondansetron dose compounded using a 3-mL syringe (4.03 mg) and a 10-mL syringe (4.18 mg) (P=0.027). The accuracy and reproducibility of the 2-mL desired dose volume decreased as the compounding syringe size increased from 3 mL to 10 mL. Copyright© by International Journal of Pharmaceutical Compounding, Inc.
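The comparison against the intended 4-mg dose above is a one-sample t-test. A minimal stdlib sketch of the test statistic, with hypothetical dose measurements (the study's raw data are not reproduced here); the reported P values would come from comparing t against a t distribution with n-1 degrees of freedom:

```python
import math
import statistics

def one_sample_t(samples, mu0):
    """t statistic for H0: the population mean equals mu0."""
    n = len(samples)
    m = statistics.mean(samples)
    se = statistics.stdev(samples) / math.sqrt(n)  # standard error of the mean
    return (m - mu0) / se

# Hypothetical ondansetron doses (mg) drawn with a 3-mL syringe.
doses_3ml = [3.9, 4.1, 4.0, 4.2, 3.8, 4.1, 4.0]
t = one_sample_t(doses_3ml, 4.0)
```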
Effect size calculation in meta-analyses of psychotherapy outcome research.
Hoyt, William T; Del Re, A C
2018-05-01
Meta-analysis of psychotherapy intervention research normally examines differences between treatment groups and some form of comparison group (e.g., wait list control; alternative treatment group). The effect of treatment is normally quantified as a standardized mean difference (SMD). We describe procedures for computing unbiased estimates of the population SMD from sample data (e.g., group Ms and SDs), and provide guidance about a number of complications that may arise related to effect size computation. These complications include (a) incomplete data in research reports; (b) use of baseline data in computing SMDs and estimating the population standard deviation (σ); (c) combining effect size data from studies using different research designs; and (d) appropriate techniques for analysis of data from studies providing multiple estimates of the effect of interest (i.e., dependent effect sizes). Clinical or Methodological Significance of this article: Meta-analysis is a set of techniques for producing valid summaries of existing research. The initial computational step for meta-analyses of research on intervention outcomes involves computing an effect size quantifying the change attributable to the intervention. We discuss common issues in the computation of effect sizes and provide recommended procedures to address them.
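The unbiased SMD estimate discussed above (Cohen's d from group Ms and SDs, with Hedges' small-sample correction) can be sketched as follows; the group means, SDs and sizes are hypothetical:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    # Pooled standard deviation across treatment and comparison groups.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                   # Cohen's d (biased upward in small n)
    j = 1 - 3 / (4 * (n1 + n2) - 9)      # approximate correction factor
    return d * j

# Hypothetical treatment vs. wait-list control outcome data.
g = hedges_g(24.0, 6.0, 30, 20.0, 6.0, 30)
```

The correction factor j shrinks d slightly; it matters most when group sizes are small, which is common in psychotherapy trials.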
Note onset deviations as musical piece signatures.
Serrà, Joan; Özaslan, Tan Hakan; Arcos, Josep Lluis
2013-01-01
A competent interpretation of a musical composition presents several non-explicit departures from the written score. Timing variations are perhaps the most important ones: they are fundamental for expressive performance and a key ingredient for conferring a human-like quality to machine-based music renditions. However, the nature of such variations is still an open research question, with diverse theories that indicate a multi-dimensional phenomenon. In the present study, we consider event-shift timing variations and show that sequences of note onset deviations are robust and reliable predictors of the musical piece being played, irrespective of the performer. In fact, our results suggest that only a few consecutive onset deviations are already enough to identify a musical composition with statistically significant accuracy. We consider a mid-size collection of commercial recordings of classical guitar pieces and follow a quantitative approach based on the combination of standard statistical tools and machine learning techniques with the semi-automatic estimation of onset deviations. Besides the reported results, we believe that the considered materials and the methodology followed widen the testing ground for studying musical timing and could open new perspectives in related research fields.
Evolving geometrical heterogeneities of fault trace data
NASA Astrophysics Data System (ADS)
Wechsler, Neta; Ben-Zion, Yehuda; Christofferson, Shari
2010-08-01
We perform a systematic comparative analysis of geometrical fault zone heterogeneities using derived measures from digitized fault maps that are not very sensitive to mapping resolution. We employ the digital GIS map of California faults (version 2.0) and analyse the surface traces of active strike-slip fault zones with evidence of Quaternary and historic movements. Each fault zone is broken into segments that are defined as a continuous length of fault bounded by changes of angle larger than 1°. Measurements of the orientations and lengths of fault zone segments are used to calculate the mean direction and misalignment of each fault zone from the local plate motion direction, and to define several quantities that represent the fault zone disorder. These include circular standard deviation and circular standard error of segments, orientation of long and short segments with respect to the mean direction, and normal separation distances of fault segments. We examine the correlations between various calculated parameters of fault zone disorder and the following three potential controlling variables: cumulative slip, slip rate and fault zone misalignment from the plate motion direction. The analysis indicates that the circular standard deviation and circular standard error of segments decrease overall with increasing cumulative slip and increasing slip rate of the fault zones. The results imply that the circular standard deviation and error, quantifying the range or dispersion in the data, provide effective measures of the fault zone disorder, and that the cumulative slip and slip rate (or more generally slip rate normalized by healing rate) represent the fault zone maturity. The fault zone misalignment from plate motion direction does not seem to play a major role in controlling the fault trace heterogeneities. The frequency-size statistics of fault segment lengths can be fitted well by an exponential function over the entire range of observations.
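The circular standard deviation used above as a disorder measure can be sketched from segment orientations. This is a minimal directional-statistics version; note it treats orientations as directions, whereas fault segment orientations are axial data and would in practice be doubled before averaging:

```python
import math

def circular_stats(angles_deg):
    """Mean direction and circular standard deviation, both in degrees."""
    xs = sum(math.cos(math.radians(a)) for a in angles_deg)
    ys = sum(math.sin(math.radians(a)) for a in angles_deg)
    r = math.hypot(xs, ys) / len(angles_deg)   # mean resultant length, 0..1
    mean_dir = math.degrees(math.atan2(ys, xs))
    circ_sd = math.degrees(math.sqrt(-2 * math.log(r)))
    return mean_dir, circ_sd

# Hypothetical segment orientations (degrees): tightly aligned segments give
# a small circular SD, scattered ones a large SD.
mean_dir, circ_sd = circular_stats([10, 20, 30])
```

Because r approaches 1 for aligned segments, circ_sd shrinks toward zero, matching the paper's finding that mature (high-slip) fault zones show lower circular standard deviation.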
Plantar pressure cartography reconstruction from 3 sensors.
Abou Ghaida, Hussein; Mottet, Serge; Goujon, Jean-Marc
2014-01-01
Foot problems are often diagnosed using pressure mapping systems, which are unfortunately confined to laboratories. In the context of e-health and telemedicine for home monitoring of patients with foot problems, our focus is to present a system acceptable for daily use. We developed an ambulatory instrumented insole using 3 pressure sensors to visualize plantar pressure cartographies. We show that a standard insole with fixed sensor positions could be used for different foot sizes. The results show an average error, measured at each pixel, of 0.01 daN, with a standard deviation of 0.005 daN.
NASA Astrophysics Data System (ADS)
Jonell, T. N.; Li, Y.; Blusztajn, J.; Giosan, L.; Clift, P. D.
2017-12-01
Rare earth element (REE) radioisotope systems, such as neodymium (Nd), have been traditionally used as powerful tracers of source provenance, chemical weathering intensity, and sedimentary processes over geologic timescales. More recently, the effects of physical fractionation (hydraulic sorting) of sediments during transport have called into question the utility of Nd isotopes as a provenance tool. Is source terrane Nd provenance resolvable if sediment transport strongly induces noise? Can grain-size sorting effects be quantified? This study works to address such questions by utilizing grain size analysis, trace element geochemistry, and Nd isotope geochemistry of bulk and grain-size fractions (<63 µm, 63-125 µm, 125-250 µm) from the Indus delta of Pakistan. Here we evaluate how grain size effects drive Nd isotope variability and further resolve the total uncertainties associated with Nd isotope compositions of bulk sediments. Results from the Indus delta indicate bulk sediment ɛNd compositions are most similar to the <63 µm fraction as a result of strong mineralogical control on bulk compositions by silt- to clay-sized monazite and/or allanite. Replicate analyses determine that the best reproducibility (±0.15 ɛNd points) is observed in the 125-250 µm fraction. The bulk and finest fractions display the worst reproducibility (±0.3 ɛNd points). Standard deviations (2σ) indicate that bulk sediment uncertainties are no more than ±1.0 ɛNd points. This argues that excursions of ≥1.0 ɛNd points in any bulk Indus delta sediments must in part reflect an external shift in provenance irrespective of sample composition, grain size, and grain size distribution. Sample standard deviations (2s) estimate that any terrigenous bulk sediment composition should vary no greater than ±1.1 ɛNd points if provenance remains constant.
Findings from this study indicate that although there are grain-size dependent Nd isotope effects, they are minimal in the Indus delta such that resolvable provenance-driven trends can be identified in bulk sediment ɛNd compositions over the last 20 k.y., and that overall provenance trends remain consistent with previous findings.
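For reference, ɛNd values like those quoted above are computed from the measured 143Nd/144Nd ratio relative to the chondritic (CHUR) reference. A minimal sketch; the sample ratio below is hypothetical, and the CHUR value used is the commonly cited present-day 0.512638:

```python
CHUR_143ND_144ND = 0.512638  # commonly used present-day CHUR reference ratio

def epsilon_nd(ratio_143_144):
    """Epsilon-Nd: deviation of 143Nd/144Nd from CHUR, in parts per 10,000."""
    return (ratio_143_144 / CHUR_143ND_144ND - 1.0) * 1e4

# Hypothetical bulk sediment measurement.
e_nd = epsilon_nd(0.511980)
```

Because ɛNd is a ratio scaled by 10,000, the ±1.0 "ɛNd points" uncertainty quoted above corresponds to a fractional uncertainty of only 1e-4 in the measured isotope ratio.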
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2012 CFR
2012-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2014 CFR
2014-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2011 CFR
2011-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2013 CFR
2013-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation
ERIC Educational Resources Information Center
Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann
2017-01-01
This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
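The distinction the abstract draws, between variance as a descriptive statistic (divide by n) and as an unbiased inferential estimator (divide by n-1), can be checked empirically. A small simulation sketch; the population, sample size and replication count are arbitrary choices:

```python
import random

random.seed(42)

def var(xs, ddof):
    """Sample variance; ddof=0 divides by n, ddof=1 by n - 1."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - ddof)

# Draw many small samples from a population with known variance 1.
biased, unbiased = [], []
for _ in range(20000):
    sample = [random.gauss(0, 1) for _ in range(5)]
    biased.append(var(sample, 0))    # descriptive: divide by n
    unbiased.append(var(sample, 1))  # inferential: divide by n - 1

avg_biased = sum(biased) / len(biased)      # systematically below 1
avg_unbiased = sum(unbiased) / len(unbiased)  # close to 1
```

With samples of size 5, the divide-by-n estimator averages about (n-1)/n = 0.8 of the true variance, which is exactly the bias the n-1 correction removes.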
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2010 CFR
2010-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.6 - Tolerances for moisture meters.
Code of Federal Regulations, 2010 CFR
2010-01-01
... moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat Mid ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat High ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat...
Assessment of variations in thermal cycle life data of thermal barrier coated rods
NASA Astrophysics Data System (ADS)
Hendricks, R. C.; McDonald, G.
An analysis of thermal cycle life data for 22 thermal barrier coated (TBC) specimens was conducted. The ZrO2-8Y2O3/NiCrAlY plasma spray coated Rene 41 rods were tested in a Mach 0.3 Jet A/air burner flame. All specimens were subjected to the same coating and subsequent test procedures in an effort to control three parametric groups: material properties, geometry and heat flux. Statistically, the data sample space had a mean of 1330 cycles with a standard deviation of 520 cycles. The data were described by normal or log-normal distributions, but other models could also apply; the sample size must be increased to clearly delineate a statistical failure model. The statistical methods were also applied to adhesive/cohesive strength data for 20 TBC discs of the same composition, with similar results. The sample space had a mean of 9 MPa with a standard deviation of 4.2 MPa.
Assessment of variations in thermal cycle life data of thermal barrier coated rods
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Mcdonald, G.
1981-01-01
An analysis of thermal cycle life data for 22 thermal barrier coated (TBC) specimens was conducted. The ZrO2-8Y2O3/NiCrAlY plasma spray coated Rene 41 rods were tested in a Mach 0.3 Jet A/air burner flame. All specimens were subjected to the same coating and subsequent test procedures in an effort to control three parametric groups: material properties, geometry and heat flux. Statistically, the data sample space had a mean of 1330 cycles with a standard deviation of 520 cycles. The data were described by normal or log-normal distributions, but other models could also apply; the sample size must be increased to clearly delineate a statistical failure model. The statistical methods were also applied to adhesive/cohesive strength data for 20 TBC discs of the same composition, with similar results. The sample space had a mean of 9 MPa with a standard deviation of 4.2 MPa.
Petersen, Nanna; Stocks, Stuart; Gernaey, Krist V
2008-05-01
The main purpose of this article is to demonstrate that principal component analysis (PCA) and partial least squares regression (PLSR) can be used to extract information from particle size distribution data and predict rheological properties. Samples from commercially relevant Aspergillus oryzae fermentations conducted in 550 L pilot scale tanks were characterized with respect to particle size distribution, biomass concentration, and rheological properties. The rheological properties were described using the Herschel-Bulkley model. Estimation of all three parameters in the Herschel-Bulkley model (yield stress (τ_y), consistency index (K), and flow behavior index (n)) resulted in a large standard deviation of the parameter estimates. The flow behavior index was not found to be correlated with any of the other measured variables, and previous studies have suggested a constant value of the flow behavior index in filamentous fermentations. It was therefore chosen to fix this parameter to the average value, thereby decreasing the standard deviation of the estimates of the remaining rheological parameters significantly. Using a PLSR model, a reasonable prediction of apparent viscosity (μ_app), yield stress (τ_y), and consistency index (K) could be made from the size distributions, biomass concentration, and process information. This provides a predictive method with high predictive power for the rheology of fermentation broth, with the advantage over previous models that τ_y and K can be predicted as well as μ_app. Validation on an independent test set yielded a root mean square error of 1.21 Pa for τ_y, 0.209 Pa·sⁿ for K, and 0.0288 Pa·s for μ_app, corresponding to R² = 0.95, R² = 0.94, and R² = 0.95, respectively. Copyright 2007 Wiley Periodicals, Inc.
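The Herschel-Bulkley model named above relates shear stress to shear rate as tau = tau_y + K * gamma_dot**n. A minimal numerical sketch; the parameter values below are hypothetical, not fitted values from the study:

```python
def herschel_bulkley(shear_rate, tau_y, K, n):
    """Shear stress (Pa): tau = tau_y + K * shear_rate**n (Herschel-Bulkley)."""
    return tau_y + K * shear_rate ** n

# Hypothetical broth parameters: yield stress 2 Pa, K = 0.5 Pa*s^n, n = 0.4.
tau = herschel_bulkley(10.0, tau_y=2.0, K=0.5, n=0.4)
mu_app = tau / 10.0  # apparent viscosity at this shear rate (Pa*s)
```

Fixing n to a constant, as the study does, turns the fit into a two-parameter problem that is linear in tau_y and K for a given shear-rate grid, which explains the reduced standard deviation of the remaining estimates.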
NASA Astrophysics Data System (ADS)
Flanagan, Éanna É.; Kumar, Naresh; Wasserman, Ira; Vanderveld, R. Ali
2012-01-01
We study the fluctuations in luminosity distances due to gravitational lensing by large scale (≳35 Mpc) structures, specifically voids and sheets. We use a simplified “Swiss cheese” model consisting of a ΛCDM Friedmann-Robertson-Walker background in which a number of randomly distributed nonoverlapping spherical regions are replaced by mass-compensating comoving voids, each with a uniform density interior and a thin shell of matter on the surface. We compute the distribution of magnitude shifts using a variant of the method of Holz and Wald, which includes the effect of lensing shear. The standard deviation of this distribution is ~0.027 magnitudes and the mean is ~0.003 magnitudes for voids of radius 35 Mpc, sources at redshift zs=1.0, with the voids chosen so that 90% of the mass is on the shell today. The standard deviation varies from 0.005 to 0.06 magnitudes as we vary the void size, source redshift, and fraction of mass on the shells today. If the shell walls are given a finite thickness of ~1 Mpc, the standard deviation is reduced to ~0.013 magnitudes. This standard deviation due to voids is a factor ~3 smaller than that due to galaxy scale structures. We summarize our results in terms of a fitting formula that is accurate to ~20%, and also build a simplified analytic model that reproduces our results to within ~30%. Our model also allows us to explore the domain of validity of weak-lensing theory for voids. We find that for 35 Mpc voids, corrections to the dispersion due to lens-lens coupling are of order ~4%, and corrections due to shear are ~3%. Finally, we estimate the bias due to source-lens clustering in our model to be negligible.
Firefighter Hand Anthropometry and Structural Glove Sizing: A New Perspective.
Hsiao, Hongwei; Whitestone, Jennifer; Kau, Tsui-Ying; Hildreth, Brooke
2015-12-01
We evaluated the current use and fit of structural firefighting gloves and developed an improved sizing scheme that better accommodates the U.S. firefighter population. Among surveys, 24% to 30% of men and 31% to 62% of women reported experiencing problems with the fit or bulkiness of their structural firefighting gloves. An age-, race/ethnicity-, and gender-stratified sample of 863 male and 88 female firefighters across the United States participated in the study. Fourteen hand dimensions relevant to glove design were measured. A cluster analysis of the hand dimensions was performed to explore options for an improved sizing scheme. The current national standard structural firefighting glove-sizing scheme underrepresents firefighter hand size range and shape variation. In addition, mismatch between existing sizing specifications and hand characteristics, such as hand dimensions, user selection of glove size, and the existing glove sizing specifications, is significant. An improved glove-sizing plan based on clusters of overall hand size and hand/finger breadth-to-length contrast has been developed. This study presents the most up-to-date firefighter hand anthropometry and a new perspective on glove accommodation. The new seven-size system contains narrower variations (standard deviations) for almost all dimensions for each glove size than the current sizing practices. The proposed science-based sizing plan for structural firefighting gloves provides a step-forward perspective (i.e., including two women hand model-based sizes and two wide-palm sizes for men) for glove manufacturers to advance firefighter hand protection. © 2015, Human Factors and Ergonomics Society.
Detection and Analysis of the Quality of Ibuprofen Granules
NASA Astrophysics Data System (ADS)
Yu-bin, Ji; Xin, LI; Guo-song, Xin; Qin-bing, Xue
2017-12-01
Ibuprofen Granules were subjected to comprehensive quality testing to ensure compliance with the provisions of the Chinese Pharmacopoeia. With reference to the Chinese Pharmacopoeia, the Ibuprofen Granules were tested by UV and HPLC in terms of grain size, volume deviation, loss on drying, dissolution rate, and overall quality evaluation. Results indicated that the Ibuprofen Granules conform to the standards, are qualified, and should be permitted to be marketed.
NASA Astrophysics Data System (ADS)
Ramirez-Porras, A.
2005-06-01
The structure of p-type porous silicon (PS) has been investigated by the use of transmission electron diffraction (TED) microscopy and image processing. The results suggest the presence of well oriented crystalline phases and polycrystalline phases characterized by random orientation. These phases are believed to be formed by spheres with a mean diameter of 4.3 nm and a standard deviation of 1.3 nm.
Visualizing the Sample Standard Deviation
ERIC Educational Resources Information Center
Sarkar, Jyotirmoy; Rashid, Mamunur
2017-01-01
The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
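The pairwise interpretation above rests on an exact identity: the sample variance equals twice the mean of the squared half deviations (x_i - x_j)/2 over all pairs. A quick numerical check:

```python
import math
import statistics
from itertools import combinations

def sd_from_pairs(xs):
    """Sample SD as sqrt of twice the mean squared pairwise half deviation."""
    half_devs_sq = [((a - b) / 2) ** 2 for a, b in combinations(xs, 2)]
    return math.sqrt(2 * sum(half_devs_sq) / len(half_devs_sq))

data = [2, 4, 4, 4, 5, 5, 7, 9]
sd_pairwise = sd_from_pairs(data)
sd_classic = statistics.stdev(data)  # the usual n - 1 definition
```

The two agree exactly (up to floating-point error) because sum over pairs of (x_i - x_j)^2 equals n(n-1) times the sample variance; no sample mean is needed in the pairwise form.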
A meta-analysis on the effectiveness of computer-assisted instruction in science education
NASA Astrophysics Data System (ADS)
Bayraktar, Sule
2000-10-01
The purposes of this study were to determine whether Computer-Assisted Instruction (CAI) had an overall positive effect on student achievement in secondary and college level science education when compared with traditional forms of instruction and to determine whether specific study or program characteristics were related to CAI effectiveness. This study employed a meta-analytic research approach. First, the research studies comparing student achievement between CAI and traditional instruction in science were located by using electronic search databases. The search resulted in 42 studies producing 108 effect sizes. Second, the study features and effect sizes for each study were coded. Finally, the effect sizes provided from each study were combined to provide an overall effect size, and relationships between effect sizes and study features were then examined. The overall effect size was found to be 0.273 standard deviations, suggesting that CAI has a small positive effect on student achievement in science education at the college and secondary levels when compared with traditional forms of instruction. An effect size of 0.273 standard deviations indicates that an average student exposed to CAI exceeded the performance of 62% of the students who were taught by using traditional instructional methods. In other words, the typical student moved from the 50th percentile to the 62nd percentile in science when CAI was used. All variables excluding school level and publication status were found to be related to effect sizes. According to the results of the analysis, CAI was most effective in physics education and had little effect on chemistry and biology achievement. Simulation and tutorial programs had significant effects on student achievement in science education but drill and practice was not found to be effective. The results also indicated that individual utilization of CAI was preferable.
Another finding from the study is that experimenter-developed software was more effective than commercial, and that CAI was more effective than traditional instruction when the duration of treatment was shorter than 4 weeks. Furthermore, the results also indicated that the effectiveness of CAI in science subject areas decreased over the decades.
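The translation above from an effect size in standard deviations to a percentile uses the standard normal CDF: the average CAI student sits at the 100*Phi(d) percentile of the control distribution. A sketch using only the stdlib error function (Phi(0.273) evaluates to roughly 0.61, close to the study's reported 62%):

```python
import math

def percentile_from_smd(d):
    """Percentile of the average treated student within the control
    distribution, assuming normal outcomes: 100 * Phi(d)."""
    return 100 * 0.5 * (1 + math.erf(d / math.sqrt(2)))

p = percentile_from_smd(0.273)
```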
Low level of polyandry constrains phenotypic plasticity of male body size in mites.
Schausberger, Peter; Walzer, Andreas; Murata, Yasumasa; Osakabe, Masahiro
2017-01-01
Polyandry, i.e. females mating with multiple males, is more common than previously anticipated and potentially provides both direct and indirect fitness benefits to females. The level of polyandry (defined by the lifetime number of male mates of a female) is an important determinant of the occurrence and intensity of sexual selection acting on male phenotypes. While the forces of sexual selection acting on phenotypic male traits such as body size are relatively well understood, sexual selection acting on phenotypic plasticity of these traits is unexplored. We tackled this issue by scrutinizing the link between polyandry and phenotypic plasticity of male body size in two sympatric plant-inhabiting predatory mite species, Phytoseiulus persimilis and Neoseiulus californicus. These two species are similar in life history, ecological niche requirements, mating behavior, polygyny and female body size plasticity but strikingly differ in the level of both polyandry and phenotypic plasticity of male body size (both lower in P. persimilis). We hypothesized that deviations from standard body size, i.e. the size achieved under favorable conditions, incur higher costs for males in the less polyandrous P. persimilis. To test our hypotheses, we conducted two experiments on (i) the effects of male body size on spermatophore transfer in singly mating females and (ii) the effects of mate sequence (switching the order of standard-sized and small males) on mating behavior and paternity success in doubly mating females. In P. persimilis but not N. californicus, small males transferred fewer but larger spermatophores to the females; in both species, females re-mated more likely with standard-sized following small than small following standard-sized males; in P. persimilis, first standard-sized males sired a higher proportion of offspring produced after re-mating by the female than first small males, whereas in N. 
californicus the paternity success of small and standard-sized males was independent of the mating sequence. Based on our results and pertinent previous studies, which showed that females of P. persimilis, but not N. californicus, prefer mating with standard-sized over small males and allow them to fertilize more eggs, the lack of interspecific difference in female body size plasticity, and the absence of any clue pointing to a role of natural selection, we suggest that the interspecific difference in male body size plasticity is sexually selected. Our study provides an indication of sexual selection constraining plasticity of male phenotypes, suggesting that the level of polyandry may be an important co-determinant of the level of phenotypic plasticity of male body size.
Low level of polyandry constrains phenotypic plasticity of male body size in mites
Walzer, Andreas; Murata, Yasumasa; Osakabe, Masahiro
2017-01-01
Polyandry, i.e. females mating with multiple males, is more common than previously anticipated and potentially provides both direct and indirect fitness benefits to females. The level of polyandry (defined by the lifetime number of male mates of a female) is an important determinant of the occurrence and intensity of sexual selection acting on male phenotypes. While the forces of sexual selection acting on phenotypic male traits such as body size are relatively well understood, sexual selection acting on phenotypic plasticity of these traits is unexplored. We tackled this issue by scrutinizing the link between polyandry and phenotypic plasticity of male body size in two sympatric plant-inhabiting predatory mite species, Phytoseiulus persimilis and Neoseiulus californicus. These two species are similar in life history, ecological niche requirements, mating behavior, polygyny and female body size plasticity but differ strikingly in the level of both polyandry and phenotypic plasticity of male body size (both lower in P. persimilis). We hypothesized that deviations from standard body size, i.e. the size achieved under favorable conditions, incur higher costs for males in the less polyandrous P. persimilis. To test our hypotheses, we conducted two experiments on (i) the effects of male body size on spermatophore transfer in singly mating females and (ii) the effects of mate sequence (switching the order of standard-sized and small males) on mating behavior and paternity success in doubly mating females. In P. persimilis but not N. californicus, small males transferred fewer but larger spermatophores to the females; in both species, females were more likely to re-mate when a standard-sized male followed a small one than when a small male followed a standard-sized one; in P. persimilis, first standard-sized males sired a higher proportion of the offspring produced after female re-mating than first small males, whereas in N. californicus the paternity success of small and standard-sized males was independent of the mating sequence. Based on our results and pertinent previous studies, which showed that females of P. persimilis, but not N. californicus, prefer mating with standard-sized over small males and allow them to fertilize more eggs, the lack of interspecific difference in female body size plasticity, and the absence of any clue pointing to a role of natural selection, we suggest that the interspecific difference in male body size plasticity is sexually selected. Our study provides an indication of sexual selection constraining plasticity of male phenotypes, suggesting that the level of polyandry may be an important co-determinant of the level of phenotypic plasticity of male body size. PMID:29190832
Down-Looking Interferometer Study II, Volume I
1980-03-01
According to Eq. (2), Z is the standard deviation of the observed contrast spectral radiance ΔN divided by the effective rms system noise, where T′ is the "reference spectrum", an estimate of the actual spectrum Tν.
40 CFR 61.207 - Radium-226 sampling and measurement procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... B, Method 114. (3) Calculate the mean, x̄₁, and the standard deviation, s₁, of the n₁ radium-226... owner or operator of a phosphogypsum stack shall report the mean, standard deviation, 95th percentile..., Method 114. (4) Recalculate the mean and standard deviation of the entire set of n₂ radium-226...
A Cost-effective and Reliable Method to Predict Mechanical Stress in Single-use and Standard Pumps
Dittler, Ina; Dornfeld, Wolfgang; Schöb, Reto; Cocke, Jared; Rojahn, Jürgen; Kraume, Matthias; Eibl, Dieter
2015-01-01
Pumps are mainly used when transferring sterile culture broths in biopharmaceutical and biotechnological production processes. However, during the pumping process shear forces occur, which can lead to qualitative and/or quantitative product loss. To calculate the mechanical stress with limited experimental expense, an oil-water emulsion system was used, whose suitability for drop size detection in bioreactors has been demonstrated¹. As drop breakup in the oil-water emulsion system is a function of mechanical stress, drop sizes need to be measured over the experimental time of the shear stress investigations. In previous studies, inline endoscopy has been shown to be an accurate and reliable measurement technique for drop size detection in liquid/liquid dispersions. The aim of this protocol is to show the suitability of the inline endoscopy technique for drop size measurements in pumping processes. To express the drop size, the Sauter mean diameter d32 was used as the representative diameter of drops in the oil-water emulsion. The results showed low variation in the Sauter mean diameters, quantified by standard deviations below 15%, indicating the reliability of the measurement technique. PMID:26274765
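The Sauter mean diameter d32 used above is the volume-to-surface moment ratio of the drop-size distribution. A minimal sketch with illustrative drop diameters (hypothetical values, not the study's endoscopy data):

```python
# Sauter mean diameter d32 = sum(d_i^3) / sum(d_i^2): the diameter of a
# monodisperse population with the same volume-to-surface-area ratio.

def sauter_mean_diameter(diameters):
    """d32 of a drop population given individual drop diameters."""
    num = sum(d ** 3 for d in diameters)
    den = sum(d ** 2 for d in diameters)
    return num / den

# Illustrative drop diameters in micrometers (hypothetical measurements).
drops_um = [40.0, 55.0, 60.0, 75.0, 90.0]
d32 = sauter_mean_diameter(drops_um)
```

Because d32 weights larger drops more heavily, it is always at least as large as the arithmetic mean diameter.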
Briehl, Margaret M; Nelson, Mark A; Krupinski, Elizabeth A; Erps, Kristine A; Holcomb, Michael J; Weinstein, John B; Weinstein, Ronald S
2016-01-01
Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, "Mechanisms of Human Disease." Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master's: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises.
Comparison of experiments and computations for cold gas spraying through a mask. Part 2
NASA Astrophysics Data System (ADS)
Klinkov, S. V.; Kosarev, V. F.; Ryashin, N. S.
2017-03-01
This paper presents experimental and simulation results for cold spray coating deposition using a mask placed above a plane substrate at different distances. Velocities of aluminum (mean size 30 μm) and copper (mean size 60 μm) particles in the vicinity of the mask are determined. It was found that the particle velocities exhibit an angular distribution in the flow, with a representative standard deviation of 1.5-2 degrees. A model of coating formation behind the mask that accounts for this distribution was developed. The model results agree with the experimental data, confirming the importance of the particle angular distribution for the coating deposition process in the masked area.
Plume particle collection and sizing from static firing of solid rocket motors
NASA Technical Reports Server (NTRS)
Sambamurthi, Jay K.
1995-01-01
A unique dart system has been designed and built at the NASA Marshall Space Flight Center to collect aluminum oxide plume particles from the plumes of large scale solid rocket motors, such as the space shuttle RSRM. The capability of this system to collect clean samples from both the vertically fired MNASA (18.3% scaled version of the RSRM) motors and the horizontally fired RSRM motor has been demonstrated. The particle mass averaged diameters, d43, measured from the samples for the different motors ranged from 8 to 11 μm and were independent of the dart collection surface and the motor burn time. The measured results agreed well with those calculated using the industry-standard Hermsen correlation, within the standard deviation of the correlation. For each of the samples analyzed from both MNASA and RSRM motors, the distribution of the cumulative mass fraction of the plume oxide particles as a function of the particle diameter was best described by a monomodal log-normal distribution with a standard deviation of 0.13 - 0.15. This distribution agreed well with the theoretical prediction by Salita using the OD3P code for the RSRM motor at the nozzle exit plane.
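The mass-averaged diameter d43 reported above is another moment ratio of the particle-size distribution. A sketch with illustrative diameters (hypothetical values, not the collected plume samples):

```python
# De Brouckere (mass-averaged) mean diameter d43 = sum(d^4) / sum(d^3).
# Each particle is weighted by its mass (proportional to d^3).

def mass_averaged_diameter(diameters):
    """d43 of a particle population given individual diameters."""
    return sum(d ** 4 for d in diameters) / sum(d ** 3 for d in diameters)

# Illustrative particle diameters in micrometers (hypothetical values).
particles_um = [40.0, 55.0, 60.0, 75.0, 90.0]
d43 = mass_averaged_diameter(particles_um)
```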
Size distribution of radon daughter particles in uranium mine atmospheres.
George, A C; Hinchliffe, L; Sladowski, R
1975-06-01
The size distribution of radon daughters was measured in several uranium mines using four compact diffusion batteries and a round jet cascade impactor. Simultaneously, measurements were made of uncombined fractions of radon daughters, radon concentration, working level and particle concentration. The size distributions found for radon daughters were log normal. The activity median diameters ranged from 0.09 μm to 0.3 μm with a mean value of 0.17 μm. Geometric standard deviations were in the range from 1.3 to 4 with a mean value of 2.7. Uncombined fractions expressed in accordance with the ICRP definition ranged from 0.004 to 0.16 with a mean value of 0.04. The radon daughter sizes in these mines are greater than the sizes assumed by various authors in calculating respiratory tract dose. The disparity may reflect the widening use of diesel-powered equipment in large uranium mines.
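For a log-normal sample like the one described above, the geometric mean diameter and geometric standard deviation are the exponentiated mean and SD of the log-transformed sizes. A sketch with illustrative diameters (not the mine data):

```python
import math

def geometric_stats(diameters):
    """Geometric mean diameter and geometric standard deviation (GSD).

    The GSD is dimensionless and >= 1; for a log-normal distribution,
    ~68% of sizes fall between gmd/gsd and gmd*gsd.
    """
    logs = [math.log(d) for d in diameters]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((x - mu) ** 2 for x in logs) / n  # population variance in log space
    return math.exp(mu), math.exp(math.sqrt(var))

gmd, gsd = geometric_stats([0.09, 0.12, 0.17, 0.22, 0.30])  # illustrative, in micrometers
```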
Luo, Dehui; Wan, Xiang; Liu, Jiming; Tong, Tiejun
2018-06-01
The era of big data is coming, and evidence-based medicine is attracting increasing attention to improve decision making in medical practice by integrating evidence from well designed and conducted clinical research. Meta-analysis is a statistical technique widely used in evidence-based medicine for analytically combining the findings from independent clinical trials to provide an overall estimate of treatment effectiveness. The sample mean and standard deviation are two commonly used statistics in meta-analysis, but some trials use the median, the minimum and maximum values, or sometimes the first and third quartiles to report the results. Thus, to pool results in a consistent format, researchers need to transform that information back into the sample mean and standard deviation. In this article, we investigate the optimal estimation of the sample mean for meta-analysis from both theoretical and empirical perspectives. A major drawback of the existing literature is that the sample size, despite its importance, is either ignored or used in a stepwise but somewhat arbitrary manner, e.g. in the well-known method proposed by Hozo et al. We solve this issue by incorporating the sample size as a smoothly changing weight in the estimators to reach the optimal estimation. Our proposed estimators not only improve the existing ones significantly but also share the same virtue of simplicity. The real data application indicates that our proposed estimators can serve as "rules of thumb" and will be widely applied in evidence-based medicine.
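For context, a sketch of two widely used conversions of this kind: the Hozo et al. mean estimate from (min, median, max) and the Wan et al. range-based SD estimate. These are the earlier methods the abstract builds on; the paper's own smoothed-weight estimators are not reproduced here.

```python
from statistics import NormalDist

def mean_from_min_med_max(a, m, b):
    """Hozo et al. estimate of the sample mean from min a, median m, max b."""
    return (a + 2 * m + b) / 4.0

def sd_from_range(a, b, n):
    """Wan et al. (2014) estimate of the sample SD from the range and n:
    SD ~= (b - a) / (2 * Phi^-1((n - 0.375) / (n + 0.25)))."""
    xi = 2 * NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    return (b - a) / xi
```

The normal-quantile denominator grows with n, reflecting that the range of a larger sample spans more standard deviations.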
NASA Astrophysics Data System (ADS)
Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan
2018-03-01
The T-Method is one of the techniques governed by the Mahalanobis Taguchi System, developed specifically for multivariate data prediction. Prediction using the T-Method is possible even with very limited sample sizes. Users of the T-Method must clearly understand the population data trend, since the method does not consider the effect of outliers, which may cause apparent non-normality and make classical methods break down. Robust parameter estimates exist that provide satisfactory results when the data contain outliers as well as when they are free of them; among these are the robust location and scale estimators known as Shamos-Bickel (SB) and Hodges-Lehmann (HL), used here in place of the classical mean and standard deviation. Embedding these into the T-Method normalization stage may help enhance the accuracy of the T-Method as well as allow analysis of the robustness of the T-Method itself. However, the higher-sample-size case study shows that the T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with minimal error differences compared to the T-Method. The prediction error trend is reversed for the lower-sample-size case study. The results show that with a minimum sample size, where the outlier risk is always low, the T-Method performs better, while for higher sample sizes with extreme outliers the T-Method also shows better prediction than the alternatives. For the case studies conducted in this research, normalization with the T-Method shows satisfactory results, and it is not worthwhile to adapt HL and SB (or the normal mean and standard deviation) into it, since they provide only a minimal change in error percentages. Normalization using the T-Method is still considered to carry lower risk with respect to outlier effects.
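The two robust estimators named above can be sketched directly: the Hodges-Lehmann location estimate is the median of pairwise Walsh averages, and the Shamos scale estimate is the median absolute pairwise difference, rescaled to be consistent with the normal SD (the exact rescaling convention varies by author; ~1.0483 is assumed here).

```python
from itertools import combinations
from statistics import median

def hodges_lehmann(xs):
    """HL location: median of all pairwise Walsh averages (including i == j)."""
    avgs = [(a + b) / 2 for a, b in combinations(xs, 2)]
    avgs += list(xs)  # i == j pairs contribute the points themselves
    return median(avgs)

def shamos_scale(xs):
    """Shamos scale: median |x_i - x_j| over i < j, rescaled (~1.0483)
    so it estimates the SD under normality."""
    diffs = [abs(a - b) for a, b in combinations(xs, 2)]
    return 1.0483 * median(diffs)
```

Both estimators tolerate a sizable fraction of outliers: a single wild value barely moves them, unlike the mean and SD.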
Color variations within glacial till, east-central North Dakota--A preliminary investigation
Kelly, T.E.; Baker, Claud H.
1966-01-01
Color variations (orange zones within buff-colored till) in drift in east-central North Dakota are believed to represent two tills of separate origin. Mean size, standard deviation, and number and type of pebbles show greater difference between the two tills than do skewness, kurtosis, and partial chemical analyses. Probably blocks of older till were moved by the last glacier crossing the area and were redeposited in a matrix of younger till.
A Note on Standard Deviation and Standard Error
ERIC Educational Resources Information Center
Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth
2010-01-01
Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.
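The distinction the article addresses can be shown in a short computation (illustrative values only): the SD describes the spread of the data, while the standard error of the mean, SEM = SD / √n, describes the uncertainty of the sample mean.

```python
import math

def sd_and_sem(xs):
    """Return (sample SD, standard error of the mean) for a data list."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))  # Bessel-corrected
    return sd, sd / math.sqrt(n)
```

Use the SD when describing variability among observations; use the SEM when presenting the precision of an estimated mean.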
Bolann, B J; Asberg, A
2004-01-01
The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) the stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability should not be larger than half the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
Code of Federal Regulations, 2010 CFR
2010-01-01
... defined in section 1 of this appendix is as follows: (a) The standard deviation of lateral track errors shall be less than 6.3 NM (11.7 km). Standard deviation is a statistical measure of data about a mean... standard deviation about the mean encompasses approximately 68 percent of the data and plus or minus 2...
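The 68 percent figure cited in the regulation (and the roughly 95 percent within two standard deviations) follows from the normal distribution and can be checked numerically. A sketch, not part of the regulation:

```python
import math

def fraction_within_k_sd(k):
    """Fraction of a normal distribution lying within +-k standard
    deviations of the mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))
```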
Firefighter Hand Anthropometry and Structural Glove Sizing: A New Perspective
Hsiao, Hongwei; Whitestone, Jennifer; Kau, Tsui-Ying; Hildreth, Brooke
2015-01-01
Objective We evaluated the current use and fit of structural firefighting gloves and developed an improved sizing scheme that better accommodates the U.S. firefighter population. Background Among surveys, 24% to 30% of men and 31% to 62% of women reported experiencing problems with the fit or bulkiness of their structural firefighting gloves. Method An age-, race/ethnicity-, and gender-stratified sample of 863 male and 88 female firefighters across the United States participated in the study. Fourteen hand dimensions relevant to glove design were measured. A cluster analysis of the hand dimensions was performed to explore options for an improved sizing scheme. Results The current national standard structural firefighting glove-sizing scheme underrepresents firefighter hand size range and shape variation. In addition, the mismatch between the existing sizing specifications and firefighter hand characteristics (hand dimensions and user-selected glove size) is significant. An improved glove-sizing plan based on clusters of overall hand size and hand/finger breadth-to-length contrast has been developed. Conclusion This study presents the most up-to-date firefighter hand anthropometry and a new perspective on glove accommodation. The new seven-size system contains narrower variations (standard deviations) for almost all dimensions for each glove size than the current sizing practices. Application The proposed science-based sizing plan for structural firefighting gloves provides a step-forward perspective (i.e., including two women hand model–based sizes and two wide-palm sizes for men) for glove manufacturers to advance firefighter hand protection. PMID:26169309
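The cluster-analysis step can be illustrated with a toy one-dimensional k-means on hypothetical hand lengths. The study clustered fourteen dimensions into a seven-size system; this is only a sketch of the idea (assumes k >= 2).

```python
def kmeans_1d(values, k, iters=50):
    """Toy 1-D k-means: group scalar measurements into k size clusters."""
    values = sorted(values)
    # Initialize centers spread evenly across the sorted data.
    centers = [values[(i * (len(values) - 1)) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[j].append(v)
        # Move each center to its group mean (keep it if the group is empty).
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical hand lengths (cm) falling into three natural size groups.
centers, groups = kmeans_1d([17, 17.5, 18, 19.5, 20, 20.5, 22, 22.5, 23], 3)
```

Each resulting cluster center would correspond to a glove size, with the within-cluster spread indicating how well that size fits its wearers.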
Note Onset Deviations as Musical Piece Signatures
Serrà, Joan; Özaslan, Tan Hakan; Arcos, Josep Lluis
2013-01-01
A competent interpretation of a musical composition presents several non-explicit departures from the written score. Timing variations are perhaps the most important ones: they are fundamental for expressive performance and a key ingredient for conferring a human-like quality to machine-based music renditions. However, the nature of such variations is still an open research question, with diverse theories that indicate a multi-dimensional phenomenon. In the present study, we consider event-shift timing variations and show that sequences of note onset deviations are robust and reliable predictors of the musical piece being played, irrespective of the performer. In fact, our results suggest that only a few consecutive onset deviations are already enough to identify a musical composition with statistically significant accuracy. We consider a mid-size collection of commercial recordings of classical guitar pieces and follow a quantitative approach based on the combination of standard statistical tools and machine learning techniques with the semi-automatic estimation of onset deviations. Besides the reported results, we believe that the considered materials and the methodology followed widen the testing ground for studying musical timing and could open new perspectives in related research fields. PMID:23935971
Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.
2011-01-01
In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models obtained by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
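The variance-partition arithmetic behind the quoted reductions can be sketched in one line: removing a repeatable variance component from the total aleatory standard deviation reduces it in quadrature. The component values below are hypothetical, not the paper's estimates.

```python
import math

def single_site_sigma(sigma_total, sigma_site_to_site):
    """Standard deviation remaining after the repeatable site-to-site
    variance component is removed from the total: sqrt(s_T^2 - s_S^2)."""
    return math.sqrt(sigma_total ** 2 - sigma_site_to_site ** 2)

# Hypothetical values in natural-log units of ground-motion amplitude.
reduced = single_site_sigma(0.7, 0.3)
```

With these illustrative numbers the single-site sigma is about 10% smaller than the total, of the same order as the paper's reported 9%-14%.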
Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua
2016-03-01
This study determined (1) how many vessels (i.e., the vessel sampling size) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) whether characteristic information can be obtained from the distribution histograms of the blood flow velocity and vessel diameter. A functional slit-lamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva in five healthy human subjects was imaged at six different locations in the temporal bulbar conjunctiva. The histograms of the diameter and velocity were plotted to examine whether the distributions were normal. Standard errors were calculated from the standard deviation and vessel sample size. The ratio of the standard error of the mean to the population mean was used to determine the sample size cutoff. The velocity was plotted as a function of the vessel diameter to display the joint distribution of diameter and velocity. The results showed that the required sampling size was approximately 15 vessels, which generated a standard error equal to 15% of the mean of the total vessel population. The distributions of the diameter and velocity were unimodal but positively skewed and not normal. The blood flow velocity was related to the vessel diameter (r=0.23, P<0.05). This was the first study to determine the sampling size of the vessels and the distribution histograms of the blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
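The sample-size criterion used above, standard error at most 15% of the population mean, inverts directly: since SE = SD / √n, the smallest acceptable n is (SD / (0.15 × mean))². A sketch (the 0.58 below is an assumed SD-to-mean ratio chosen for illustration):

```python
import math

def required_sample_size(sd, mean, ratio=0.15):
    """Smallest n for which (sd / sqrt(n)) / mean <= ratio."""
    return math.ceil((sd / (ratio * mean)) ** 2)
```

An SD of about 0.58 times the mean yields the ~15-vessel figure the abstract reports.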
A better norm-referenced grading using the standard deviation criterion.
Chan, Wing-shing
2014-01-01
The commonly used norm-referenced grading assigns grades to rank-ordered students in fixed percentiles. It has the disadvantage of ignoring the actual distance of scores among students. A simple norm-referenced grading via the standard deviation is suggested for routine educational grading. The number of standard deviations of a student's score from the class mean was used as the common yardstick to measure achievement level. The cumulative probability of a normal distribution was referenced to help decide the number of students included within a grade. Results of the foremost 12 students from a medical examination were used to illustrate this grading method. Grading by standard deviation seemed to produce better cutoffs, allocating grades to students more according to their differential achievements, and had less chance of creating arbitrary cutoffs between two similarly scored students than grading by fixed percentile. Grading by standard deviation has more advantages and is more flexible than grading by fixed percentile for norm-referenced grading.
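The yardstick described above is the z-score: a student's distance from the class mean in standard deviation units. A minimal sketch with hypothetical letter-grade cutoffs (the paper chooses its cutoffs via normal cumulative probabilities, not these particular thresholds):

```python
def z_score(score, class_mean, class_sd):
    """Number of standard deviations a score lies from the class mean."""
    return (score - class_mean) / class_sd

def grade_by_sd(score, class_mean, class_sd):
    """Hypothetical cutoffs: >= +1 SD 'A', >= mean 'B', >= -1 SD 'C', else 'D'."""
    z = z_score(score, class_mean, class_sd)
    if z >= 1:
        return "A"
    if z >= 0:
        return "B"
    if z >= -1:
        return "C"
    return "D"
```

Unlike fixed percentiles, two students with nearly identical scores receive nearly identical z-scores, so a grade boundary is less likely to fall arbitrarily between them.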
Johnson, Craig W; Johnson, Ronald; Kim, Mira; McKee, John C
2009-11-01
During 2004 and 2005 orientations, all 187 and 188 new matriculates, respectively, in two southwestern U.S. nursing schools completed Personal Background and Preparation Surveys (PBPS) in the first predictive validity study of a diagnostic and prescriptive instrument for averting adverse academic status events (AASE) among nursing or health science professional students. One standard deviation increases in PBPS risks (p < 0.05) multiplied odds of first-year or second-year AASE by approximately 150%, controlling for school affiliation and underrepresented minority student (URMS) status. AASE odds one standard deviation above mean were 216% to 250% those one standard deviation below mean. Odds of first-year or second-year AASE for URMS one standard deviation above the 2004 PBPS mean were 587% those for non-URMS one standard deviation below mean. The PBPS consistently and significantly facilitated early identification of nursing students at risk for AASE, enabling proactive targeting of interventions for risk amelioration and AASE or attrition prevention. Copyright 2009, SLACK Incorporated.
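The reported odds figures are mutually consistent: if each standard deviation of PBPS risk multiplies the odds by about 1.5, then two students two standard deviations apart differ by a factor of 1.5² = 2.25 (225%), within the reported 216% to 250% range. A sketch of that arithmetic:

```python
def odds_ratio_over_k_sd(or_per_sd, k):
    """Odds ratio between two risk scores k standard deviations apart,
    given the odds ratio per one standard deviation."""
    return or_per_sd ** k
```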
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, Andrew K. H.; Basran, Parminder S.; Thomas, Steven D.
Purpose: To investigate the effects of brachytherapy seed size on the quality of x-ray computed tomography (CT), ultrasound (US), and magnetic resonance (MR) images and seed localization through comparison of the 6711 and 9011 ¹²⁵I sources. Methods: For CT images, an acrylic phantom mimicking a clinical implantation plan and embedded with low contrast regions of interest (ROIs) was designed for both the 0.774 mm diameter 6711 (standard) and the 0.508 mm diameter 9011 (thin) seed models (Oncura, Inc., and GE Healthcare, Arlington Heights, IL). Image quality metrics were assessed using the standard deviation of ROIs between the seeds and the contrast to noise ratio (CNR) within the low contrast ROIs. For US images, water phantoms with both single and multiseed arrangements were constructed for both seed sizes. For MR images, both seeds were implanted into a porcine gel and imaged with pelvic imaging protocols. The standard deviation of ROIs and CNR values were used as metrics of artifact quantification. Seed localization within the CT images was assessed using the automated seed finder in a commercial brachytherapy treatment planning system. The number of erroneous seed placements and the average and maximum error in seed placements were recorded as metrics of the localization accuracy. Results: With the thin seeds, CT image noise was reduced from 48.5 ± 0.2 to 32.0 ± 0.2 HU and CNR improved by a median value of 74% when compared with the standard seeds. Ultrasound image noise was measured at 50.3 ± 17.1 dB for the thin seed images and 50.0 ± 19.8 dB for the standard seed images, and artifacts directly behind the seeds were smaller and less prominent with the thin seed model. For MR images, CNR was reduced on average by 17% with the thin seeds relative to the standard seeds across the different imaging sequences and seed orientations, but these differences are not appreciable.
Automated seed localization required an average (±SD) of 7.0 ± 3.5 manual corrections in seed positions for the thin seed scans and 3.0 ± 1.2 manual corrections for the standard seed scans. The average error in seed placement was 1.2 mm for both seed types, and the maximum error was 2.1 mm for the thin seed scans and 1.8 mm for the standard seed scans. Conclusions: The 9011 thin seeds yielded significantly improved image quality for CT and US images but no significant differences in MR image quality.
Zhang, Xuezhu; Peng, Qiyu; Zhou, Jian; Huber, Jennifer S; Moses, William W; Qi, Jinyi
2018-03-16
The first generation Tachyon PET (Tachyon-I) is a demonstration single-ring PET scanner that reaches a coincidence timing resolution of 314 ps using LSO scintillator crystals coupled to conventional photomultiplier tubes. The objective of this study was to quantify the improvement in both lesion detection and quantification performance resulting from the improved time-of-flight (TOF) capability of the Tachyon-I scanner. We developed a quantitative TOF image reconstruction method for the Tachyon-I and evaluated its TOF gain for lesion detection and quantification. Scans of either a standard NEMA torso phantom or healthy volunteers were used as the normal background data. Separately scanned point source and sphere data were superimposed onto the phantom or human data after accounting for the object attenuation. We used the bootstrap method to generate multiple independent noisy datasets with and without a lesion present. The signal-to-noise ratio (SNR) of a channelized Hotelling observer (CHO) was calculated for each lesion size and location combination to evaluate the lesion detection performance. The bias versus standard deviation trade-off of each lesion uptake was also calculated to evaluate the quantification performance. The resulting CHO-SNR measurements showed improved performance in lesion detection with better timing resolution. The detection performance was also dependent on the lesion size and location, in addition to the background object size and shape. The results of the bias versus noise trade-off showed that the noise (standard deviation) reduction ratio was about 1.1-1.3 over the TOF 500 ps mode and 1.5-1.9 over the non-TOF modes, similar to the SNR gains for lesion detection. In conclusion, this Tachyon-I PET study demonstrated the benefit of improved time-of-flight capability on lesion detection and ROI quantification for both phantom and human subjects.
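The bootstrap step described above, resampling to obtain multiple independent noisy datasets from which noise (standard deviation) estimates are computed, can be sketched in miniature on a simple statistic (illustrative data, not the scanner list-mode data):

```python
import random

def bootstrap_sd_of_mean(sample, n_boot=2000, seed=0):
    """Bootstrap estimate of the standard deviation of the sample mean:
    resample with replacement, recompute the mean each time, and take the
    SD of the resampled means."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    n = len(sample)
    means = []
    for _ in range(n_boot):
        resample = [sample[rng.randrange(n)] for _ in range(n)]
        means.append(sum(resample) / n)
    mu = sum(means) / n_boot
    return (sum((m - mu) ** 2 for m in means) / (n_boot - 1)) ** 0.5
```

The same resampling idea extends to images: each bootstrap replicate yields one noisy reconstruction, and the ensemble gives the noise estimate entering a bias-versus-noise trade-off.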
SU-E-T-276: Dose Calculation Accuracy with a Standard Beam Model for Extended SSD Treatments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kisling, K; Court, L; Kirsner, S
2015-06-15
Purpose: While most photon treatments are delivered at or near 100 cm SSD, a subset of patients may benefit from treatment at SSDs greater than 100 cm. A proposed rotating chair for upright treatments would enable isocentric treatments at extended SSDs. The purpose of this study was to assess the accuracy of the Pinnacle³ treatment planning system dose calculation for standard beam geometries delivered at extended SSDs with a beam model commissioned at 100 cm SSD. Methods: Dose to a water phantom at 100, 110, and 120 cm SSD was calculated with the Pinnacle³ CC Convolve algorithm for 6X beams for 5×5, 10×10, 20×20, and 30×30 cm² field sizes (defined at the water surface for each SSD). PDDs and profiles (depths of 1.5, 12.5, and 22 cm) were compared to measurements in water with an ionization chamber. Point-by-point agreement was analyzed, as well as agreement in field size defined by the 50% isodose. Results: The deviations of the calculated PDDs from measurement, analyzed from the depth of maximum dose to 23 cm, were all within 1.3% for all beam geometries. In particular, the calculated PDDs at 10 cm depth were all within 0.7% of measurement. For profiles, the deviations within the central 80% of the field were within 2.2% for all geometries. The field sizes all agreed within 2 mm. Conclusion: The agreement of the PDDs and profiles calculated by Pinnacle³ for extended SSD geometries was within the acceptability criteria defined by Van Dyk (±2% for PDDs and ±3% for profiles). The accuracy of the calculation of more complex beam geometries at extended SSDs will be investigated to further assess the feasibility of using a standard beam model commissioned at 100 cm SSD in Pinnacle³ for extended SSD treatments.
Demonstration of the Gore Module for Passive Ground Water Sampling
2014-06-01
ix ACRONYMS AND ABBREVIATIONS: % RSD, percent relative standard deviation; 12DCA, 1,2-dichloroethane; 112TCA, 1,1,2-trichloroethane; 1122TetCA, ... Analysis of Variance; ROD, Record of Decision; RSD, relative standard deviation; SBR, Southern Bush River; SVOC, semi-volatile organic compound; ... replicate samples had a relative standard deviation (RSD) that was 20% or less. For the remaining analytes (PCE, cDCE, and chloroform), at least 70 ...
Malik, Marek; Hnatkova, Katerina; Batchvarov, Velislav; Gang, Yi; Smetana, Peter; Camm, A John
2004-12-01
Regulatory authorities require new drugs to be investigated in a so-called "thorough QT/QTc study" to identify compounds with a potential for influencing cardiac repolarization in man. The presently drafted regulatory consensus requires these studies to be powered for the statistical detection of QTc interval changes as small as 5 ms. Since this translates into a noticeable drug development burden, strategies need to be identified that allow the size, and thus the cost, of thorough QT/QTc studies to be minimized. This study investigated the influence of QT and RR interval data quality and the precision of heart rate correction on the sample sizes of thorough QT/QTc studies. In 57 healthy subjects (26 women, age range 19-42 years), a total of 4,195 drug-free digital electrocardiograms (ECGs) were obtained (65-84 ECGs per subject). All ECG parameters were measured manually using the most accurate approach, with reconciliation of measurement differences between different cardiologists and alignment of the measurements of corresponding ECG patterns. From the data derived in this measurement process, seven different levels of QT/RR data quality were obtained, ranging from the simplest approach of measuring 3 beats in one ECG lead to the most exact approach. Each of these QT/RR datasets was processed with eight different heart rate corrections, ranging from the Bazett and Fridericia corrections to individual QT/RR regression modelling with optimization of QT/RR curvature. For each combination of data quality and heart rate correction, the standard deviation of individual mean QTc values and the mean of individual standard deviations of QTc values were calculated and used to derive the size of thorough QT/QTc studies with an 80% power to detect 5 ms QTc changes at the significance level of 0.05.
Irrespective of data quality and heart rate corrections, the necessary sample sizes of studies based on between-subject comparisons (e.g., parallel studies) are very substantial requiring >140 subjects per group. However, the required study size may be substantially reduced in investigations based on within-subject comparisons (e.g., crossover studies or studies of several parallel groups each crossing over an active treatment with placebo). While simple measurement approaches with ad-hoc heart rate correction still lead to requirements of >150 subjects, the combination of best data quality with most accurate individualized heart rate correction decreases the variability of QTc measurements in each individual very substantially. In the data of this study, the average of standard deviations of QTc values calculated separately in each individual was only 5.2 ms. Such a variability in QTc data translates to only 18 subjects per study group (e.g., the size of a complete one-group crossover study) to detect 5 ms QTc change with an 80% power. Cost calculations show that by involving the most stringent ECG handling and measurement, the cost of a thorough QT/QTc study may be reduced to approximately 25%-30% of the cost imposed by the simple ECG reading (e.g., three complexes in one lead only).
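The link between within-subject QTc variability and study size quoted above follows from the standard normal-approximation power calculation for a paired comparison. A minimal sketch, assuming two-sided α = 0.05 (z ≈ 1.96), 80% power (z ≈ 0.8416), and a within-subject difference SD of σ√2 — not the authors' exact method, which may use t-quantiles and different rounding:

```python
import math

def crossover_sample_size(sd_within, delta, z_alpha=1.96, z_beta=0.8416):
    """Normal-approximation sample size for a within-subject (crossover)
    comparison: n = ((z_a + z_b) * sd_diff / delta)^2, where the SD of a
    within-subject difference of two measurements is sd_within * sqrt(2)."""
    sd_diff = sd_within * math.sqrt(2)
    return math.ceil(((z_alpha + z_beta) * sd_diff / delta) ** 2)

# With the study's 5.2 ms average within-subject SD and a 5 ms target change:
n = crossover_sample_size(sd_within=5.2, delta=5.0)  # about 17 subjects
```

This lands at roughly 17 subjects, in line with the 18 per group reported; exact counts depend on the quantiles and rounding conventions used.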
Wang, Anxin; Li, Zhifang; Yang, Yuling; Chen, Guojuan; Wang, Chunxue; Wu, Yuntao; Ruan, Chunyu; Liu, Yan; Wang, Yilong; Wu, Shouling
2016-01-01
To investigate the relationship between baseline systolic blood pressure (SBP) and visit-to-visit blood pressure variability in a general population. This is a prospective longitudinal cohort study on cardiovascular risk factors and cardiovascular or cerebrovascular events. Study participants attended a face-to-face interview every 2 years. Blood pressure variability was defined using the standard deviation and coefficient of variation of all SBP values at baseline and follow-up visits. The coefficient of variation is the ratio of the standard deviation to the mean SBP. We used multivariate linear regression models to test the relationships between SBP and standard deviation, and between SBP and coefficient of variation. Approximately 43,360 participants (mean age: 48.2±11.5 years) were selected. In multivariate analysis, after adjustment for potential confounders, baseline SBPs <120 mmHg were inversely related to standard deviation (P<0.001) and coefficient of variation (P<0.001). In contrast, baseline SBPs ≥140 mmHg were significantly positively associated with standard deviation (P<0.001) and coefficient of variation (P<0.001). Baseline SBPs of 120-140 mmHg were associated with the lowest standard deviation and coefficient of variation. The associations between baseline SBP and standard deviation, and between SBP and coefficient of variation during follow-ups showed a U curve. Both lower and higher baseline SBPs were associated with increased blood pressure variability. To control blood pressure variability, a good target SBP range for a general population might be 120-139 mmHg.
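The two variability measures defined above (standard deviation of a subject's SBP values across visits, and the coefficient of variation as SD over mean) are straightforward to compute. A minimal sketch with hypothetical visit readings:

```python
from statistics import mean, stdev

def bp_variability(sbp_readings):
    """Visit-to-visit variability: the SD of a subject's SBP readings and
    the coefficient of variation (CV = 100 * SD / mean), as defined above."""
    m = mean(sbp_readings)
    sd = stdev(sbp_readings)  # sample SD across visits
    return sd, 100.0 * sd / m

# Hypothetical readings (mmHg) from five biennial visits of one participant.
sd, cv = bp_variability([128, 134, 126, 140, 132])
```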
Weinstein, Ronald S; Krupinski, Elizabeth A; Weinstein, John B; Graham, Anna R; Barker, Gail P; Erps, Kristine A; Holtrust, Angelette L; Holcomb, Michael J
2016-01-01
A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course was implemented at a series of grade levels from 7 to 12, and student outcomes were compared. Topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of disease. Assessment of student performance was based on scores on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson-area schools in a charter school network participated in the study. Statistical analysis of examination performance showed no significant differences as a function of school, with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18; F = 0.258, P = .6128). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the students' expectations. One class voted K-12 general pathology their "elective course-of-the-year."
Flexner 3.0—Democratization of Medical Knowledge for the 21st Century
Krupinski, Elizabeth A.; Weinstein, John B.; Graham, Anna R.; Barker, Gail P.; Erps, Kristine A.; Holtrust, Angelette L.; Holcomb, Michael J.
2016-01-01
A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course was implemented at a series of grade levels from 7 to 12, and student outcomes were compared. Topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of disease. Assessment of student performance was based on scores on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson-area schools in a charter school network participated in the study. Statistical analysis of examination performance showed no significant differences as a function of school, with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18; F = 0.258, P = .6128). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the students’ expectations. One class voted K-12 general pathology their “elective course-of-the-year.” PMID:28725762
Evaluation of internal noise methods for Hotelling observers
NASA Astrophysics Data System (ADS)
Zhang, Yani; Pham, Binh T.; Eckstein, Miguel P.
2005-04-01
Including internal noise in computer model observers to degrade model observer performance to human levels is a common method of allowing quantitative comparisons of human and model performance. In this paper, we studied two different types of methods for injecting internal noise into Hotelling model observers. The first method adds internal noise to the output of the individual channels: a) independent non-uniform channel noise; b) independent uniform channel noise. The second method adds internal noise to the decision variable arising from the combination of channel responses: a) internal noise standard deviation proportional to the decision variable's standard deviation due to the external noise; b) internal noise standard deviation proportional to the decision variable's variance caused by the external noise. We tested the square-window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO). The task studied was detection of a filling defect of varying size/shape in one of four simulated arterial segment locations with real x-ray angiography backgrounds. Results show that the internal noise method that leads to the best prediction of human performance differs across the model observers studied. The CHO model best predicts human observer performance with channel internal noise. The HO and LGHO best predict human observer performance with decision variable internal noise. These results might help explain why previous studies have found different results on the ability of each Hotelling model to predict human performance. Finally, the present results might guide researchers in the choice of method for including internal noise in their Hotelling models.
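Decision-variable internal noise of the first kind described above (method 2a: internal noise SD proportional to the externally induced SD of the decision variable) can be sketched in a few lines. This is an illustrative sketch with hypothetical decision-variable samples and proportionality factor, not the paper's implementation:

```python
import random
from statistics import stdev

def add_decision_noise(decision_vars, k, seed=1):
    """Degrade a model observer by adding zero-mean Gaussian internal noise
    whose SD is a factor k times the SD of the decision variable induced by
    external (image) noise."""
    rng = random.Random(seed)
    sigma_int = k * stdev(decision_vars)
    return [dv + rng.gauss(0.0, sigma_int) for dv in decision_vars]

# Hypothetical decision-variable samples from signal-present trials.
clean = [2.1, 1.8, 2.5, 1.9, 2.2, 2.4, 2.0, 1.7]
noisy = add_decision_noise(clean, k=0.5)
```

The factor k would be tuned until the model's detection performance matches the measured human performance.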
Brülle, Tine; Ju, Wenbo; Niedermayr, Philipp; Denisenko, Andrej; Paschos, Odysseas; Schneider, Oliver; Stimming, Ulrich
2011-12-06
Gold nanoparticles were prepared by electrochemical deposition on highly oriented pyrolytic graphite (HOPG) and boron-doped, epitaxial 100-oriented diamond layers. Using a potentiostatic double pulse technique, the average particle size was varied in the range from 5 nm to 30 nm in the case of HOPG as a support and between < 1 nm and 15 nm on diamond surfaces, while keeping the particle density constant. The distribution of particle sizes was very narrow, with standard deviations of around 20% on HOPG and around 30% on diamond. The electrocatalytic activity of these carbon-supported gold nanoparticles towards hydrogen evolution and oxygen reduction was investigated as a function of particle size using cyclic voltammetry. For oxygen reduction, the current density normalized to the gold surface (specific current density) increased with decreasing particle size. In contrast, the specific current density of hydrogen evolution showed no dependence on particle size. For both reactions, no effect of the different carbon supports on electrocatalytic activity was observed.
Estimation of the neural drive to the muscle from surface electromyograms
NASA Astrophysics Data System (ADS)
Hofmann, David
Muscle force is highly correlated with the standard deviation of the surface electromyogram (sEMG) produced by the active muscle. Correctly estimating this quantity of non-stationary sEMG and understanding its relation to neural drive and muscle force is of paramount importance. The single constituents of the sEMG are called motor unit action potentials whose biphasic amplitude can interfere (named amplitude cancellation), potentially affecting the standard deviation (Keenan etal. 2005). However, when certain conditions are met the Campbell-Hardy theorem suggests that amplitude cancellation does not affect the standard deviation. By simulation of the sEMG, we verify the applicability of this theorem to myoelectric signals and investigate deviations from its conditions to obtain a more realistic setting. We find no difference in estimated standard deviation with and without interference, standing in stark contrast to previous results (Keenan etal. 2008, Farina etal. 2010). Furthermore, since the theorem provides us with the functional relationship between standard deviation and neural drive we conclude that complex methods based on high density electrode arrays and blind source separation might not bear substantial advantages for neural drive estimation (Farina and Holobar 2016). Funded by NIH Grant Number 1 R01 EB022872 and NSF Grant Number 1208126.
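The Campbell(-Hardy) theorem invoked above states, for a shot-noise process built by superposing pulses of shape h(t) arriving at Poisson rate ν, that the moments depend only on the rate and the pulse shape, not on how pulses overlap. A sketch of the standard statement (not the abstract's own derivation):

```latex
\mathrm{E}[s(t)] = \nu \int_{-\infty}^{\infty} h(u)\,\mathrm{d}u,
\qquad
\operatorname{Var}[s(t)] = \nu \int_{-\infty}^{\infty} h(u)^{2}\,\mathrm{d}u
```

Since the standard deviation is √(ν ∫ h²), it grows monotonically with the drive ν regardless of whether individual biphasic pulses cancel, which is the intuition behind the claim that amplitude cancellation leaves the sEMG standard deviation unaffected under the theorem's conditions.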
Comparison of a novel fixation device with standard suturing methods for spinal cord stimulators.
Bowman, Richard G; Caraway, David; Bentley, Ishmael
2013-01-01
Spinal cord stimulation is a well-established treatment for chronic neuropathic pain of the trunk or limbs. Currently, the standard method of fixation is to affix the leads of the neuromodulation device to soft tissue, fascia or ligament, by manually tying general suture. A novel semiautomated device is proposed that may be advantageous over the current standard. Comparison testing in an excised caprine spine and a simulated benchtop model was performed. Three tests were performed: 1) perpendicular pull from fascia of the caprine spine; 2) axial pull from fascia of the caprine spine; and 3) axial pull from Mylar film. Six samples of each configuration were tested for each scenario. Standard 2-0 Ethibond was compared with a novel semiautomated device (Anulex fiXate). Upon completion of testing, statistical analysis was performed for each scenario. For perpendicular pull in the caprine spine, the failure load for standard suture was 8.95 lbs with a standard deviation of 1.39, whereas for fiXate the load was 15.93 lbs with a standard deviation of 2.09. For axial pull in the caprine spine, the failure load for standard suture was 6.79 lbs with a standard deviation of 1.55, whereas for fiXate the load was 12.31 lbs with a standard deviation of 4.26. For axial pull in Mylar film, the failure load for standard suture was 10.87 lbs with a standard deviation of 1.56, whereas for fiXate the load was 19.54 lbs with a standard deviation of 2.24. These data suggest a novel semiautomated device offers a method of fixation that may be utilized in lieu of standard suturing methods as a means of securing neuromodulation devices. The data suggest the novel semiautomated device may in fact provide more secure fixation than standard suturing methods. © 2012 International Neuromodulation Society.
Abraha, Iosief; Cherubini, Antonio; Cozzolino, Francesco; De Florio, Rita; Luchetta, Maria Laura; Rimland, Joseph M; Folletti, Ilenia; Marchesi, Mauro; Germani, Antonella; Orso, Massimiliano; Eusebi, Paolo; Montedori, Alessandro
2015-05-27
To examine whether deviation from the standard intention to treat analysis has an influence on treatment effect estimates of randomised trials. Meta-epidemiological study. Medline, via PubMed, searched between 2006 and 2010; 43 systematic reviews of interventions and 310 randomised trials were included. From each year searched, random selection of 5% of intervention reviews with a meta-analysis that included at least one trial that deviated from the standard intention to treat approach. Basic characteristics of the systematic reviews and randomised trials were extracted. Information on the reporting of intention to treat analysis, outcome data, risk of bias items, post-randomisation exclusions, and funding were extracted from each trial. Trials were classified as: ITT (reporting the standard intention to treat approach), mITT (reporting a deviation from the standard approach), and no ITT (reporting no approach). Within each meta-analysis, treatment effects were compared between mITT and ITT trials, and between mITT and no ITT trials. The ratio of odds ratios was calculated (a value <1 indicated larger treatment effects in mITT trials than in the other trial categories). 50 meta-analyses and 322 comparisons of randomised trials (from 84 ITT trials, 118 mITT trials, and 108 no ITT trials; 12 trials contributed twice to the analysis) were examined. Compared with ITT trials, mITT trials showed a larger intervention effect (pooled ratio of odds ratios 0.83 (95% confidence interval 0.71 to 0.96), P=0.01; between-meta-analyses variance τ²=0.13). Adjustments for sample size, type of centre, funding, items of risk of bias, post-randomisation exclusions, and variance of log odds ratio yielded consistent results (0.80 (0.69 to 0.94), P=0.005; τ²=0.08). After exclusion of five influential studies, results remained consistent (0.85 (0.75 to 0.98); τ²=0.08).
The comparison between mITT trials and no ITT trials showed no statistical difference between the two groups (adjusted ratio of odds ratios 0.92 (0.70 to 1.23); τ²=0.57). Trials that deviated from the intention to treat analysis showed larger intervention effects than trials that reported the standard approach. Where an intention to treat analysis is impossible to perform, authors should clearly report who is included in the analysis and attempt to perform multiple imputation. © Abraha et al 2015.
getimages: Background derivation and image flattening method
NASA Astrophysics Data System (ADS)
Men'shchikov, Alexander
2017-05-01
getimages performs background derivation and image flattening for high-resolution images obtained with space observatories. It is based on median filtering with sliding windows corresponding to a range of spatial scales from the observational beam size up to a maximum structure width X. The latter is the single free parameter of getimages and can be evaluated manually from the observed image. The median filtering algorithm provides a background image for structures of all widths below X. The same median filtering procedure applied to an image of standard deviations derived from the background-subtracted image yields a flattening image. Finally, a flattened image is computed by dividing the background-subtracted image by the flattening image. Standard deviations in the flattened image are then uniform outside sources and filaments. Detecting structures in such radically simplified images results in much cleaner extractions that are more complete and reliable. getimages also reduces various observational and map-making artifacts and equalizes noise levels between independent tiles of mosaicked images. The code (a Bash script) uses FORTRAN utilities from getsources (ascl:1507.014), which must be installed.
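The background-plus-flattening scheme described above can be illustrated on a 1-D signal: a sliding-median background is subtracted, then the residual is divided by a sliding estimate of the local standard deviation so the noise level becomes roughly uniform. This is a simplified sketch of the idea, not the getimages algorithm itself (which works on 2-D images over a range of scales):

```python
from statistics import median, pstdev

def flatten(signal, window):
    """1-D sketch: sliding-median background, then division of the
    background-subtracted signal by a sliding-SD 'flattening' map.
    The window size plays the role of the maximum structure width X."""
    half = window // 2
    n = len(signal)
    def neighborhood(seq, i):
        return seq[max(0, i - half):min(n, i + half + 1)]
    background = [median(neighborhood(signal, i)) for i in range(n)]
    residual = [s - b for s, b in zip(signal, background)]
    # Local SD of the background-subtracted signal (guard against zero).
    flattening = [max(pstdev(neighborhood(residual, i)), 1e-12)
                  for i in range(n)]
    return [r / f for r, f in zip(residual, flattening)]

# Hypothetical profile: smooth gradient plus a narrow source at index 5.
profile = [1, 1, 2, 2, 3, 9, 3, 4, 4, 5]
flat = flatten(profile, window=5)
```

After flattening, the narrow source stands out as the strongest feature while the smooth gradient is largely removed.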
DOE Office of Scientific and Technical Information (OSTI.GOV)
Öztürk, Hande; Noyan, I. Cevdet
2017-08-24
A rigorous study of sampling and intensity statistics applicable to a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance, and standard deviation of both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742–753] appears here as a special case, limited to large crystallite sizes. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.
Thermally stable nanoparticles on supports
Roldan Cuenya, Beatriz; Naitabdi, Ahmed R.; Behafarid, Farzad
2012-11-13
An inverse micelle-based method for forming nanoparticles on supports includes dissolving a polymeric material in a solvent to provide a micelle solution. A nanoparticle source is dissolved in the micelle solution. A plurality of micelles having a nanoparticle in their core and an outer polymeric coating layer are formed in the micelle solution. The micelles are applied to a support. The polymeric coating layer is then removed from the micelles to expose the nanoparticles. A supported catalyst includes a nanocrystalline powder, thin film, or single crystal support. Metal nanoparticles having a median size from 0.5 nm to 25 nm and a size distribution with a standard deviation ≤ 0.1 of their median size are on or embedded in the support. The plurality of metal nanoparticles are dispersed and in a periodic arrangement. The metal nanoparticles maintain their periodic arrangement and size distribution following heat treatments of at least 1,000 °C.
A Regression Framework for Effect Size Assessments in Longitudinal Modeling of Group Differences
Feingold, Alan
2013-01-01
The use of growth modeling analysis (GMA)--particularly multilevel analysis and latent growth modeling--to test the significance of intervention effects has increased exponentially in prevention science, clinical psychology, and psychiatry over the past 15 years. Model-based effect sizes for differences in means between two independent groups in GMA can be expressed in the same metric (Cohen’s d) commonly used in classical analysis and meta-analysis. This article first reviews conceptual issues regarding calculation of d for findings from GMA and then introduces an integrative framework for effect size assessments that subsumes GMA. The new approach uses the structure of the linear regression model, from which effect sizes for findings from diverse cross-sectional and longitudinal analyses can be calculated with familiar statistics, such as the regression coefficient, the standard deviation of the dependent measure, and study duration. PMID:23956615
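The regression-based effect size described above reduces to a simple formula: the group difference in growth-model slopes times the study duration (the model-implied end-of-study mean difference), divided by the raw standard deviation of the outcome. A minimal sketch under that reading of Feingold's formulation, with hypothetical trial numbers:

```python
def gma_effect_size(slope_diff, duration, sd_raw):
    """Model-based Cohen's d for a growth-model group difference:
    (difference in slopes * study duration) / raw SD of the outcome."""
    return slope_diff * duration / sd_raw

# Hypothetical trial: slopes differ by 0.25 points/month over 12 months,
# with an outcome SD of 6 points.
d = gma_effect_size(slope_diff=0.25, duration=12, sd_raw=6.0)  # d = 0.5
```

The result is in the same Cohen's d metric used in classical analysis and meta-analysis, which is the point of the framework.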
Method for forming thermally stable nanoparticles on supports
Roldan Cuenya, Beatriz; Naitabdi, Ahmed R.; Behafarid, Farzad
2013-08-20
An inverse micelle-based method for forming nanoparticles on supports includes dissolving a polymeric material in a solvent to provide a micelle solution. A nanoparticle source is dissolved in the micelle solution. A plurality of micelles having a nanoparticle in their core and an outer polymeric coating layer are formed in the micelle solution. The micelles are applied to a support. The polymeric coating layer is then removed from the micelles to expose the nanoparticles. A supported catalyst includes a nanocrystalline powder, thin film, or single crystal support. Metal nanoparticles having a median size from 0.5 nm to 25 nm and a size distribution with a standard deviation ≤ 0.1 of their median size are on or embedded in the support. The plurality of metal nanoparticles are dispersed and in a periodic arrangement. The metal nanoparticles maintain their periodic arrangement and size distribution following heat treatments of at least 1,000 °C.
Computer Programs for the Semantic Differential: Further Modifications.
ERIC Educational Resources Information Center
Lawson, Edwin D.; And Others
The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…
Determining a one-tailed upper limit for future sample relative reproducibility standard deviations.
McClure, Foster D; Lee, Jung K
2006-01-01
A formula was developed to determine a one-tailed 100p% upper limit for future sample percent relative reproducibility standard deviations (RSD_R,% = 100·s_R/ȳ), where s_R is the sample reproducibility standard deviation, the square root of a linear combination of the sample repeatability variance (s_r²) and the sample laboratory-to-laboratory variance (s_L²), i.e., s_R = √(s_r² + s_L²), and ȳ is the sample mean. The future RSD_R,% is expected to arise from a population of potential RSD_R,% values whose true mean is ζ_R,% = 100·σ_R/μ, where σ_R and μ are the population reproducibility standard deviation and mean, respectively.
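The sample statistic defined above is easy to compute from its components. A minimal sketch with hypothetical repeatability and lab-to-lab values (the upper-limit formula itself, which requires the distributional results of the paper, is not reproduced here):

```python
import math

def reproducibility_rsd(s_r, s_L, ybar):
    """Sample percent relative reproducibility SD: RSD_R,% = 100 * s_R / ybar,
    with s_R = sqrt(s_r**2 + s_L**2) combining the repeatability and
    laboratory-to-laboratory components."""
    s_R = math.sqrt(s_r ** 2 + s_L ** 2)
    return 100.0 * s_R / ybar

# Hypothetical components: s_r = 2.0, s_L = 3.0, sample mean 50.0.
rsd = reproducibility_rsd(2.0, 3.0, 50.0)  # about 7.21%
```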
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finley, C; Dave, J
Purpose: To characterize noise for image receptors of digital radiography systems based on pixel variance. Methods: Nine calibrated digital image receptors associated with nine new portable digital radiography systems (Carestream Health, Inc., Rochester, NY) were used in this study. For each image receptor, thirteen images were acquired with RQA5 beam conditions for input detector air kerma ranging from 0 to 110 µGy, and linearized 'For Processing' images were extracted. Mean pixel value (MPV), standard deviation (SD), and relative noise (SD/MPV) were obtained from each image using ROI sizes varying from 2.5×2.5 to 20×20 mm². Variance (SD²) was plotted as a function of input detector air kerma, and the coefficients of the quadratic fit were used to derive structured, quantum, and electronic noise coefficients. Relative noise was also fitted as a function of input detector air kerma to identify noise sources. The fitting functions used a least-squares approach. Results: The coefficient of variation values obtained using different ROI sizes were less than 1% for all the images. The structured, quantum, and electronic coefficients obtained from the quadratic fit of variance (r>0.97) were 0.43±0.10, 3.95±0.27, and 2.89±0.74 (mean ± standard deviation), respectively, indicating that overall the quantum noise was the dominant noise source. However, for one system the electronic noise coefficient (3.91) was greater than the quantum noise coefficient (3.56), indicating electronic noise to be dominant. Using relative noise values, the power parameter of the fitting equation (|r|>0.93) showed a mean and standard deviation of 0.46±0.02. A value of 0.50 for this power parameter indicates quantum noise to be the dominant noise source, whereas values deviating from 0.50 indicate the presence of other noise sources. Conclusion: Characterizing noise from pixel variance assists in identifying contributions from various noise sources that may ultimately affect image quality. This approach may be integrated during periodic quality assessments of digital image receptors.
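The quadratic noise model underlying the fit above can be written as variance = e + q·K + s·K², with K the input air kerma and e, q, s the electronic (constant), quantum (linear), and structured (quadratic) coefficients. As an illustration, the three coefficients can be recovered exactly from three (kerma, variance) points by Cramer's rule; a real analysis would least-squares fit many exposures, so this is a sketch of the model, not the study's fitting code:

```python
def solve_noise_coefficients(points):
    """Solve variance = e + q*K + s*K**2 exactly from three
    (air_kerma, variance) points using Cramer's rule."""
    (k1, v1), (k2, v2), (k3, v3) = points
    def det(a, b, c, d, e_, f, g, h, i):
        # 3x3 determinant, row-major entries.
        return a * (e_ * i - f * h) - b * (d * i - f * g) + c * (d * h - e_ * g)
    D = det(1, k1, k1**2, 1, k2, k2**2, 1, k3, k3**2)
    e = det(v1, k1, k1**2, v2, k2, k2**2, v3, k3, k3**2) / D  # electronic
    q = det(1, v1, k1**2, 1, v2, k2**2, 1, v3, k3**2) / D     # quantum
    s = det(1, k1, v1, 1, k2, v2, 1, k3, v3) / D              # structured
    return e, q, s

# Synthetic data generated from e=2.89, q=3.95, s=0.43 (the mean
# coefficients reported above), at 10, 50, and 100 µGy:
pts = [(K, 2.89 + 3.95 * K + 0.43 * K**2) for K in (10.0, 50.0, 100.0)]
e, q, s = solve_noise_coefficients(pts)
```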
Kealey, Susan M; Kim, Youngjoo; Whiting, Wythe L; Madden, David J; Provenzale, James M
2005-08-01
To use diffusion-tensor magnetic resonance (MR) imaging to measure involvement of normal-appearing white matter (WM) immediately adjacent to multiple sclerosis (MS) plaques and thus redefine actual plaque size on diffusion-tensor images through comparison with T2-weighted images of equivalent areas in healthy volunteers. Informed consent was not required given the retrospective nature of the study on an anonymized database. The study complied with requirements of the Health Insurance Portability and Accountability Act. Twelve patients with MS (four men, eight women; mean age, 35 years) and 14 healthy volunteers (six men, eight women; mean age, 25 years) were studied. The authors obtained fractional anisotropy (FA) values in MS plaques and in the adjacent normal-appearing WM in patients with MS and in equivalent areas in healthy volunteers. They placed regions of interest (ROIs) around the periphery of plaques and defined the total ROIs (ie, plaques plus peripheral ROIs) as abnormal if their mean FA values were at least 2 standard deviations below those of equivalent ROIs within equivalent regions in healthy volunteers. The combined area of the plaque and the peripheral ROI was compared with the area of the plaque seen on T2-weighted MR images by means of a Student paired t test (P = .05). The mean plaque size on T2-weighted images was 72 mm² ± 21 (standard deviation). The mean plaque FA value was 0.285 ± 0.088 (0.447 ± 0.069 in healthy volunteers [P < .001]; mean percentage reduction in FA in MS plaques, 37%). The mean plaque size on FA maps was 91 mm² ± 35, on average 127% of the size of the original plaque on T2-weighted images (P = .03). A significant increase in plaque size was seen when normal-appearing WM was interrogated with diffusion-tensor MR imaging. This imaging technique may represent a more sensitive method of assessing disease burden and may have a future role in determining disease burden and activity.
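The 2-standard-deviation abnormality criterion described above is a simple threshold test. A minimal sketch using the FA values reported in the abstract (the function name and argument layout are illustrative, not from the paper):

```python
def is_abnormal(roi_mean_fa, healthy_mean_fa, healthy_sd_fa, n_sd=2.0):
    """Flag an ROI as abnormal when its mean FA is at least n_sd standard
    deviations below the healthy-volunteer mean."""
    return roi_mean_fa <= healthy_mean_fa - n_sd * healthy_sd_fa

# With the reported healthy FA of 0.447 (SD 0.069), the 2-SD threshold is
# 0.309, so the mean plaque FA of 0.285 is flagged as abnormal.
flag = is_abnormal(0.285, 0.447, 0.069)
```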
NASA Technical Reports Server (NTRS)
Alexander, Dennis R.
1990-01-01
Research was conducted on the characteristics of aerosol sprays using a P/DPA and a laser imaging/video processing system on a NASA MOD-1 air-assist nozzle being evaluated for use in aircraft icing research. Benchmark tests were performed on monodispersed particles and on the NASA MOD-1 nozzle under identical lab operating conditions. The laser imaging/video processing system and the P/DPA showed agreement in calibration tests on monodispersed aerosol sprays of ±2.6 microns with a standard deviation of ±2.6 microns. Benchmark tests were performed on the NASA MOD-1 nozzle on the centerline and radially at 0.5 inch increments to the outer edge of the spray plume at a distance 2 ft downstream from the exit nozzle. Comparative results at two operating conditions of the nozzle are presented for the two instruments. For the first case studied, the deviation in arithmetic mean diameters determined by the two instruments was in a range of 0.1 to 2.8 microns, and the deviation in Sauter mean diameters varied from 0 to 2.2 microns. Severe operating conditions in the second case resulted in the arithmetic mean diameter deviating from 1.4 to 7.1 microns and the deviation in the Sauter mean diameters ranging from 0.4 to 6.7 microns.
NASA Technical Reports Server (NTRS)
Alexander, Dennis R.
1988-01-01
Aerosol spray characterization was done using a P/DPA and a laser imaging/video processing system on a NASA MOD-1 air-assist nozzle being evaluated for use in aircraft icing research. Benchmark tests were performed on monodispersed particles and on the NASA MOD-1 nozzle under identical laboratory operating conditions. The laser imaging/video processing system and the P/DPA showed agreement in calibration tests on monodispersed aerosol sprays of ±2.6 microns with a standard deviation of ±2.6 microns. Tests were performed on the NASA MOD-1 nozzle on the centerline and radially at one-half inch increments to the outer edge of the spray plume at a distance two feet (0.61 m) downstream from the exit of the nozzle. Comparative results at two operating conditions of the nozzle are presented for the two instruments. For the first case, the deviation in arithmetic mean diameters determined by the two instruments was in a range of 0.1 to 2.8 microns, and the deviation in Sauter mean diameters varied from 0 to 2.2 microns. Operating conditions in the second case were more severe, which resulted in the arithmetic mean diameter deviating from 1.4 to 7.1 microns and the deviation in the Sauter mean diameters ranging from 0.4 to 6.7 microns.
Gaunt, D M; Metcalfe, C; Ridd, M
2016-11-01
The Patient-Oriented Eczema Measure (POEM) has been recommended as the core patient-reported outcome measure for trials of eczema treatments. Using data from the Choice of Moisturiser for Eczema Treatment randomized feasibility study, we assess the responsiveness to change and determine the minimal clinically important difference (MCID) of the POEM in young children with eczema. Responsiveness to change by repeated administrations of the POEM was investigated in relation to change recalled using the Parent Global Assessment (PGA) measure. Five methods of determining the MCID of the POEM were employed: three anchor-based methods using the PGA as the anchor (the within-patient score change, the between-patient score change, and the sensitivity and specificity method) and two distribution-based methods (the effect size estimate and one half of the standard deviation of the baseline distribution of POEM scores). Successive POEM scores were found to be responsive to change in eczema severity. The MCID of the POEM change score, in relation to a slight improvement in eczema severity as recalled by parents on the PGA, estimated by the within-patient score change (4.27), the between-patient score change (2.89) and the sensitivity and specificity method (3.00) was similar to one half of the standard deviation of the POEM baseline scores (2.94) and the effect size estimate (2.50). The Patient-Oriented Eczema Measure as applied to young children is responsive to change, and the MCID is around 3. This study will encourage the use of POEM and aid in determining sample size for future randomized controlled trials of treatments for eczema in young children. © 2016 The Authors. Allergy Published by John Wiley & Sons Ltd.
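The half-standard-deviation method above lends itself to a short sketch. The following is a minimal illustration, with invented baseline POEM scores and a function name of our own choosing (not from the study):

```python
import math

def half_sd_mcid(baseline_scores):
    """One half of the sample standard deviation (n - 1 denominator)
    of baseline scores: a common distribution-based MCID estimate."""
    n = len(baseline_scores)
    mean = sum(baseline_scores) / n
    var = sum((x - mean) ** 2 for x in baseline_scores) / (n - 1)
    return 0.5 * math.sqrt(var)

# Invented baseline POEM scores (0-28 scale), for illustration only.
baseline = [8, 12, 5, 15, 10, 7, 14, 9, 11, 6]
print(round(half_sd_mcid(baseline), 2))
```

In the study, this distribution-based figure (2.94) landed close to the anchor-based estimates, which is why the authors settle on an MCID of about 3.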
NASA Astrophysics Data System (ADS)
Ghasemi, A.; Borhani, S.; Viparelli, E.; Hill, K. M.
2017-12-01
The Exner equation provides a formal mathematical link between sediment transport and bed morphology. It is typically represented in a discrete formulation where there is a sharp geometric interface between the bedload layer and the bed, below which no particles are entrained. For high temporally and spatially resolved models, this is strictly correct, but typically this is applied in such a way that spatial and temporal fluctuations in the bed surface (bedforms and otherwise) are not captured. This limits the extent to which the exchange between particles in transport and the sediment bed is properly represented, which is particularly problematic for mixed grain size distributions that exhibit segregation. Nearly two decades ago, Parker (2000) provided a framework for a solution to this dilemma in the form of a probabilistic Exner equation, partially experimentally validated by Wong et al. (2007). We present a computational study designed to develop a physics-based framework for understanding the interplay between physical parameters of the bed and flow and parameters in the Parker (2000) probabilistic formulation. To do so we use Discrete Element Method simulations to relate local time-varying parameters to long-term macroscopic parameters. These include relating local grain size distribution and particle entrainment and deposition rates to long-term average bed shear stress and the standard deviation of bed height variations. While relatively simple, these simulations reproduce long-accepted empirically determined transport behaviors such as the Meyer-Peter and Muller (1948) relationship. We also find that these simulations reproduce statistical relationships proposed by Wong et al. (2007) such as a Gaussian distribution of bed heights whose standard deviation increases with increasing bed shear stress. We demonstrate how the ensuing probabilistic formulations provide insight into the transport and deposition of both narrow and wide grain size distributions.
de Winter, Joost C F; Gosling, Samuel D; Potter, Jeff
2016-09-01
The Pearson product–moment correlation coefficient (r_p) and the Spearman rank correlation coefficient (r_s) are widely used in psychological research. We compare r_p and r_s on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes we show that, for normally distributed variables, r_p and r_s have similar expected values but r_s is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, r_p is more variable than r_s. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, r_p had lower variability than r_s in the psychometric dataset. In the survey datasets with heavy-tailed variables in particular, r_s had lower variability than r_p, and often corresponded more accurately to the population Pearson correlation coefficient (R_p) than r_p did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing r_s instead of r_p. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of r_s and r_p. In conclusion, r_p is suitable for light-tailed distributions, whereas r_s is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research. PsycINFO Database Record (c) 2016 APA, all rights reserved
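The outlier-robustness contrast between the two coefficients is easy to reproduce. Below is a minimal pure-Python sketch on invented data (in practice one would typically use scipy.stats.pearsonr and spearmanr):

```python
def pearson(x, y):
    """Pearson product-moment correlation from the textbook formula."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def ranks(v):
    """Ranks starting at 1, with tied values given their average rank."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

x = [1, 2, 3, 4, 100]   # one extreme outlier in x
y = [2, 4, 6, 8, 10]
print(round(pearson(x, y), 3), round(spearman(x, y), 3))
```

Because the relationship is monotonic, the rank-based r_s stays at 1.0 while the outlier drags r_p well below it, illustrating the robustness argument in the abstract.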
Inter- and intra-observer variation in soft-tissue sarcoma target definition.
Roberge, D; Skamene, T; Turcotte, R E; Powell, T; Saran, N; Freeman, C
2011-08-01
To evaluate inter- and intra-observer variability in gross tumor volume definition for adult limb/trunk soft tissue sarcomas. Imaging studies of 15 patients previously treated with preoperative radiation were used in this study. Five physicians (radiation oncologists, orthopedic surgeons and a musculoskeletal radiologist) were asked to contour each of the 15 tumors on T1-weighted, gadolinium-enhanced magnetic resonance images. These contours were drawn twice by each physician. The volume and center of mass coordinates for each gross tumor volume were extracted and a Boolean analysis was performed to measure the degree of volume overlap. The median standard deviation in gross tumor volumes across observers was 6.1% of the average volume (range: 1.8%-24.9%). There was remarkably little variation in the 3D position of the gross tumor volume center of mass. For the 15 patients, the standard deviation of the 3D distance between centers of mass ranged from 0.06 mm to 1.7 mm (median 0.1 mm). Boolean analysis demonstrated that 53% to 90% of the gross tumor volume was common to all observers (median overlap: 79%). The standard deviation in gross tumor volumes on repeat contouring was 4.8% (range: 0.1%-14.4%) with a standard deviation change in the position of the center of mass of 0.4 mm (range: 0 mm-2.6 mm) and a median overlap of 93% (range: 73%-98%). Although significant inter-observer differences were seen in gross tumor volume definition of adult soft-tissue sarcoma, the center of mass of these volumes was remarkably consistent. Variations in volume definition did not correlate with tumor size. Radiation oncologists should not hesitate to review their contours with a colleague (surgeon, radiologist or fellow radiation oncologist) to ensure that they are not outliers in sarcoma gross tumor volume definition. Protocols should take into account variations in volume definition when considering tighter clinical target volumes.
Copyright © 2011 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
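The Boolean overlap measure described above can be sketched with voxel index sets. The following is an invented illustration (real contours would come from a treatment-planning system, and the paper does not specify its exact overlap definition; intersection over union is assumed here):

```python
def common_fraction(contours):
    """Fraction of the union of all contoured voxels that every
    observer included (intersection divided by union)."""
    union = set().union(*contours)
    inter = set.intersection(*contours)
    return len(inter) / len(union)

# Hypothetical voxel index sets from three observers on one slice.
a = {(x, y, 0) for x in range(10) for y in range(10)}
b = {(x, y, 0) for x in range(1, 10) for y in range(10)}
c = {(x, y, 0) for x in range(10) for y in range(1, 10)}
print(round(common_fraction([a, b, c]), 2))
```

Repeating this per patient gives the kind of 53%-90% "common to all observers" figures the abstract reports.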
NASA Technical Reports Server (NTRS)
Abbott, T. S.; Moen, G. C.
1981-01-01
The weather radar cathode ray tube (CRT) is the prime candidate for presenting cockpit display of traffic information (CDTI) in current, conventionally equipped transport aircraft. Problems may result from this, since the CRT size is not optimized for CDTI applications and the CRT is not in the pilot's primary visual scan area. The impact of display size on the ability of pilots to utilize the traffic information to maintain a specified spacing interval behind a lead aircraft during an approach task was studied. The five display sizes considered are representative of the display hardware configurations of airborne weather radar systems. From a pilot's subjective workload viewpoint, even the smallest display size was usable for performing the self spacing task. From a performance viewpoint, the mean spacing values, which are indicative of how well the pilots were able to perform the task, exhibit the same trends, irrespective of display size; however, the standard deviation of the spacing intervals decreased (performance improved) as the display size increased. Display size, therefore, does have a significant effect on pilot performance.
NASA Astrophysics Data System (ADS)
Wyatt, Jonathan J.; Dowling, Jason A.; Kelly, Charles G.; McKenna, Jill; Johnstone, Emily; Speight, Richard; Henry, Ann; Greer, Peter B.; McCallum, Hazel M.
2017-12-01
There is increasing interest in MR-only radiotherapy planning since it provides superb soft-tissue contrast without the registration uncertainties inherent in a CT-MR registration. However, MR images cannot readily provide the electron density information necessary for radiotherapy dose calculation. An algorithm which generates synthetic CTs for dose calculations from MR images of the prostate using an atlas of 3 T MR images has been previously reported by two of the authors. This paper aimed to evaluate this algorithm using MR data acquired at a different field strength and a different centre to the algorithm atlas. Twenty-one prostate patients received planning 1.5 T MR and CT scans with routine immobilisation devices on a flat-top couch set-up using external lasers. The MR receive coils were supported by a coil bridge. Synthetic CTs were generated from the planning MR images with (sCT1V) and without (sCT) a one-voxel body contour expansion included in the algorithm. This was to test whether this expansion was required for 1.5 T images. Both synthetic CTs were rigidly registered to the planning CT (pCT). A 6 MV volumetric modulated arc therapy plan was created on the pCT and recalculated on the sCT and sCT1V. The synthetic CTs' dose distributions were compared to the dose distribution calculated on the pCT. The percentage dose difference at isocentre without the body contour expansion (sCT − pCT) was ΔD_sCT = (0.9 ± 0.8)% and with it (sCT1V − pCT) was ΔD_sCT1V = (−0.7 ± 0.7)% (mean ± one standard deviation). The sCT1V result was within one standard deviation of zero and agreed with the result reported previously using 3 T MR data. The sCT dose difference only agreed within two standard deviations. The mean ± one standard deviation gamma pass rate was Γ_sCT = (96.1 ± 2.9)% for the sCT and Γ_sCT1V = (98.8 ± 0.5)% for the sCT1V (with 2% global dose difference and 2 mm distance-to-agreement gamma criteria).
The one voxel body contour expansion improves the synthetic CT accuracy for MR images acquired at 1.5 T but requires the MR voxel size to be similar to the atlas MR voxel size. This study suggests that the atlas-based algorithm can be generalised to MR data acquired using a different field strength at a different centre.
Complexities of follicle deviation during selection of a dominant follicle in Bos taurus heifers.
Ginther, O J; Baldrighi, J M; Siddiqui, M A R; Araujo, E R
2016-11-01
Follicle deviation during a follicular wave is a continuation of the growth rate of the dominant follicle (F1) and a decreased growth rate of the largest subordinate follicle (F2). The reliability of using an F1 of 8.5 mm to represent the beginning of expected deviation for experimental purposes during waves 1 and 2 (n = 26 per wave) was studied daily in heifers. Each wave was subgrouped as follows: standard subgroup (F1 larger than F2 for 2 days preceding deviation and F2 > 7.0 mm on the day of deviation), undersized subgroup (F2 did not attain 7.0 mm by the day of deviation), and switched subgroup (F2 larger than F1 at least once on the 2 days before or on the day of deviation). For each wave, mean differences in diameter between F1 and F2 changed abruptly at expected deviation in the standard subgroup but began 1 day before expected deviation in the undersized and switched subgroups. Concentrations of FSH in the wave-stimulating FSH surge and an increase in LH centered on expected deviation did not differ among subgroups. Results for each wave indicated that (1) expected deviation (F1, 8.5 mm) was a reliable representation of actual deviation in the standard subgroup but not in the undersized and switched subgroups; (2) concentrations of the gonadotropins normalized to expected deviation were similar among the three subgroups, indicating that the day of deviation was related to diameter of F1 and not F2; and (3) defining an expected day of deviation for experimental use should consider both diameter of F1 and the characteristics of deviation. Copyright © 2016 Elsevier Inc. All rights reserved.
40 CFR 90.708 - Cumulative Sum (CumSum) procedure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... is 5.0×σ, and is a function of the standard deviation, σ. σ=is the sample standard deviation and is... individual engine. FEL=Family Emission Limit (the standard if no FEL). F=.25×σ. (2) After each test pursuant...
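The excerpt above is fragmentary, but the CumSum statistic it refers to can be sketched. This is an illustration only, not the regulatory text: we assume the statistic accumulates test-result exceedances of the Family Emission Limit plus F = 0.25σ, floored at zero, and is compared against an action limit of 5.0σ:

```python
def cumsum_statistic(results, fel, sigma):
    """Sketch of a one-sided CUSUM of the kind the excerpt describes
    (assumed form; consult 40 CFR 90.708 for the exact equation):
    C_i = max(0, C_{i-1} + X_i - (FEL + F)), with F = 0.25*sigma,
    compared against an action limit H = 5.0*sigma."""
    f = 0.25 * sigma
    h = 5.0 * sigma
    c = 0.0
    history = []
    for x in results:
        c = max(0.0, c + x - (fel + f))
        history.append(c)
    return history, h

# Invented emission results drifting above an FEL of 10.0.
hist, limit = cumsum_statistic([10.2, 10.8, 11.5, 12.0], fel=10.0, sigma=0.5)
print([round(c, 3) for c in hist], limit)
```

With these invented numbers the statistic climbs past the action limit, which is the condition that would trigger additional testing of the engine family.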
NASA Astrophysics Data System (ADS)
Dong, Qingchen; Qu, Wenshan; Liang, Wenqing; Guo, Kunpeng; Xue, Haibin; Guo, Yuanyuan; Meng, Zhengong; Ho, Cheuk-Lam; Leung, Chi-Wah; Wong, Wai-Yeung
2016-03-01
Ferromagnetic (L10 phase) CoPt alloy nanoparticles (NPs) with extremely high magnetocrystalline anisotropy are promising candidates for the next generation of ultrahigh-density data storage systems. It is a challenge to generate L10 CoPt NPs with high coercivity, controllable size, and a narrow size distribution. We report here the fabrication of L10 CoPt NPs by employing a heterobimetallic CoPt-containing polymer as a single-source precursor. The average size of the resulting L10 CoPt NPs is 3.4 nm with a reasonably narrow size standard deviation of 0.58 nm. The coercivity of L10 CoPt NPs is 0.54 T which is suitable for practical application. We also fabricated the L10 CoPt NP-based nanoline and nanodot arrays through nanoimprinting the polymer blend of CoPt-containing metallopolymer and polystyrene followed by pyrolysis. The successful transfer of the pre-defined patterns of the stamps onto the surface of the polymer blend implies that this material holds great application potential as a data storage medium.Ferromagnetic (L10 phase) CoPt alloy nanoparticles (NPs) with extremely high magnetocrystalline anisotropy are promising candidates for the next generation of ultrahigh-density data storage systems. It is a challenge to generate L10 CoPt NPs with high coercivity, controllable size, and a narrow size distribution. We report here the fabrication of L10 CoPt NPs by employing a heterobimetallic CoPt-containing polymer as a single-source precursor. The average size of the resulting L10 CoPt NPs is 3.4 nm with a reasonably narrow size standard deviation of 0.58 nm. The coercivity of L10 CoPt NPs is 0.54 T which is suitable for practical application. We also fabricated the L10 CoPt NP-based nanoline and nanodot arrays through nanoimprinting the polymer blend of CoPt-containing metallopolymer and polystyrene followed by pyrolysis. 
The successful transfer of the pre-defined patterns of the stamps onto the surface of the polymer blend implies that this material holds great application potential as a data storage medium. Electronic supplementary information (ESI) available: PXRD, EDX and SEM original data. See DOI: 10.1039/c6nr00034g
2015-01-01
The goal of this study was to analyse perceptually and acoustically the voices of patients with Unilateral Vocal Fold Paralysis (UVFP) and compare them to the voices of normal subjects. These voices were analysed perceptually with the GRBAS scale and acoustically using the following parameters: mean fundamental frequency (F0), standard-deviation of F0, jitter (ppq5), shimmer (apq11), mean harmonics-to-noise ratio (HNR), mean first (F1) and second (F2) formants frequency, and standard-deviation of F1 and F2 frequencies. Statistically significant differences were found in all of the perceptual parameters. Also the jitter, shimmer, HNR, standard-deviation of F0, and standard-deviation of the frequency of F2 were statistically different between groups, for both genders. In the male data differences were also found in F1 and F2 frequencies values and in the standard-deviation of the frequency of F1. This study allowed the documentation of the alterations resulting from UVFP and addressed the exploration of parameters with limited information for this pathology. PMID:26557690
NASA Astrophysics Data System (ADS)
Krasnenko, N. P.; Kapegesheva, O. F.; Shamanaeva, L. G.
2017-11-01
Spatiotemporal dynamics of the standard deviations of three wind velocity components measured with a mini-sodar in the atmospheric boundary layer is analyzed. During the day on September 16 and at night on September 12 values of the standard deviation changed for the x- and y-components from 0.5 to 4 m/s, and for the z-component from 0.2 to 1.2 m/s. An analysis of the vertical profiles of the standard deviations of three wind velocity components for a 6-day measurement period has shown that the increase of σx and σy with altitude is well described by a power law dependence with exponent changing from 0.22 to 1.3 depending on the time of day, and σz depends linearly on the altitude. The approximation constants have been found and their errors have been estimated. The established physical regularities and the approximation constants allow the spatiotemporal dynamics of the standard deviation of three wind velocity components in the atmospheric boundary layer to be described and can be recommended for application in ABL models.
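The power-law altitude dependence reported for σx and σy can be recovered from profile data with a log-log least-squares fit. A sketch on synthetic data with a known exponent (the constants are invented, not the paper's):

```python
import math

def fit_power_law(z, sigma):
    """Least-squares fit of sigma = a * z**b on log-log axes,
    returning the prefactor a and exponent b."""
    lx = [math.log(v) for v in z]
    ly = [math.log(v) for v in sigma]
    n = len(z)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic sigma profile with a known exponent of 0.5.
z = [10, 20, 40, 80, 160]                  # altitudes, m
sig = [0.3 * h ** 0.5 for h in z]          # sigma values, m/s
a, b = fit_power_law(z, sig)
print(round(a, 3), round(b, 3))
```

Fitting measured profiles this way is how exponents in the reported 0.22-1.3 range would be extracted for different times of day.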
A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.
Rhiel, G Steven
2007-02-01
This study provides proof that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of µ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. This proof shows that the d(n) and a(n) values are applicable to the specific skewed distributions when the mean and standard deviation can take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
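The estimator's mechanics can be sketched. Rhiel's papers define the exact roles of d(n) and a(n); purely for illustration we assume the standard deviation is estimated by range/d(n) and the mean by a bias-adjusted midrange, with hypothetical constant values:

```python
def cv_high_low(high, low, d_n, a_n):
    """Sketch of a range-based CV estimate (assumed form, not
    Rhiel's exact definition): sigma is estimated by the range
    divided by d_n, and the mean by the midrange adjusted by a_n."""
    sigma_hat = (high - low) / d_n
    mu_hat = (high + low) / (2.0 * a_n)
    return sigma_hat / mu_hat

# Hypothetical constants d_n and a_n for some skewed distribution and n.
print(round(cv_high_low(high=42.0, low=6.0, d_n=3.0, a_n=1.05), 3))
```

The point of the proof in the abstract is that tabulated d(n) and a(n) values remain valid as the distribution's mean and standard deviation vary, so a calculation like this one does not need re-tabulated constants for every parameter combination.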
Yu, Peiran; Hu, Qingjing; Li, Kai; Zhu, Yujiao; Liu, Xiaohuan; Gao, Huiwang; Yao, Xiaohong
2016-12-01
In this study, we characterized dimethylaminium (DMA+) and trimethylaminium (TMA+) in size-segregated atmospheric particles during three cruise campaigns in the marginal seas of China and one cruise campaign mainly in the northwest Pacific Ocean (NWPO). A 14-stage nano-MOUDI sampler was utilized for sampling atmospheric particles ranging from 18 μm down to 0.010 μm. Among the four cruise campaigns, the highest concentrations of DMA+ and TMA+ in PM10 were observed over the South Yellow Sea (SYS) in August 2015, i.e., 0.76±0.12 nmol m(-3) for DMA+ (average value±standard deviation) and 0.93±0.13 nmol m(-3) for TMA+. The lowest values were observed over the NWPO in April 2015, i.e., 0.28±0.16 nmol m(-3) for DMA+ and 0.22±0.12 nmol m(-3) for TMA+. In general, size distributions of the two ions exhibited a bi-modal pattern, i.e., one mode at 0.01-0.1 μm and the other at 0.1-1.8 μm. The mode of the two ions at 0.01-0.1 μm is reported here for the first time. The mode was largely enhanced in samples collected over the SYS in August 2015, leading to high mole ratios of (DMA+ + TMA+)/NH4+ in PM0.1 (0.4±0.8, median value±standard deviation) and the ions' concentrations in PM0.1 accounting for ~10% and ~40% of their corresponding concentrations in PM10. This implied that (DMA+ + TMA+) likely played an important role in neutralizing acidic species in the smaller particles. Using SO4(2-), NO3(-) and NH4(+) as references, we confirm that the elevated concentrations of DMA+ and TMA+ in the 0.01-0.1 μm size range were probably real signals rather than sampling artifacts. Copyright © 2016 Elsevier B.V. All rights reserved.
Random errors in interferometry with the least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.
Wall, Michael; Zamba, Gideon K D; Artes, Paul H
2018-01-01
It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher.
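The censoring step is simple to sketch. A toy illustration with invented sensitivities (the clinical MD index is an age-corrected, variability-weighted mean, which this simplified version ignores):

```python
def censor(thresholds, floor=20):
    """Set every threshold estimate below `floor` dB to `floor`,
    as in the censored datasets described above."""
    return [max(t, floor) for t in thresholds]

def mean_deviation(thresholds, normals):
    """Toy MD-style index: unweighted mean of (measured - normal)
    differences across test locations, in dB."""
    return sum(t - n for t, n in zip(thresholds, normals)) / len(thresholds)

normals = [30, 30, 30, 30]          # invented age-normal values, dB
field = [28, 22, 15, 4]             # two locations below 20 dB
print(mean_deviation(field, normals))
print(mean_deviation(censor(field), normals))
```

Censoring changes the level of the index but, because progression is judged from the slope of repeated measurements, the study finds it changes progression rates relatively little.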
Solid models for CT/MR image display: accuracy and utility in surgical planning
NASA Astrophysics Data System (ADS)
Mankovich, Nicholas J.; Yue, Alvin; Ammirati, Mario; Kioumehr, Farhad; Turner, Scott
1991-05-01
Medical imaging can now take wider advantage of Computer-Aided-Manufacturing through rapid prototyping technologies (RPT) such as stereolithography, laser sintering, and laminated object manufacturing to directly produce solid models of patient anatomy from processed CT and MR images. While conventional surgical planning relies on consultation with the radiologist combined with direct reading and measurement of CT and MR studies, 3-D surface and volumetric display workstations are providing a more easily interpretable view of patient anatomy. RPT can provide the surgeon with a life-size model of patient anatomy constructed layer by layer with full internal detail. Although this life-size anatomic model is more easily understandable by the surgeon, its accuracy and true surgical utility remain untested. We have developed a prototype image processing and model fabrication system based on stereolithography, which provides the neurosurgeon with models of the skull base. Parallel comparison of the model with the original thresholded CT data and with a CRT-displayed surface rendering showed that both have an accuracy of 99.6 percent. Because of the ease of exact voxel localization on the model, its precision was high, with a standard deviation of measurement of 0.71 percent. The measurements on the surface-rendered display proved more difficult to exactly locate and yielded a standard deviation of 2.37 percent. This paper presents our accuracy study and discusses ways of assessing the quality of neurosurgical plans when 3-D models are made available as planning tools.
Lee, Joo Yong; Kim, Jae Heon; Kang, Dong Hyuk; Chung, Doo Yong; Lee, Dae Hun; Do Jung, Hae; Kwon, Jong Kyou; Cho, Kang Su
2016-01-01
We investigated whether the stone heterogeneity index (SHI), defined as the standard deviation of Hounsfield units (HU) on non-contrast computed tomography (NCCT) and a proxy of such variations, can be a novel predictor for shock-wave lithotripsy (SWL) outcomes in patients with ureteral stones. Medical records were obtained from the consecutive database of 1,519 patients who underwent the first session of SWL for urinary stones between 2005 and 2013. Ultimately, 604 patients with radiopaque ureteral stones were eligible for this study. Stone-related variables including stone size, mean stone density (MSD), skin-to-stone distance, and SHI were obtained on NCCT. Patients were classified into the low and high SHI groups using mean SHI and compared. One-session success rate in the high SHI group was better than in the low SHI group (74.3% vs. 63.9%, P = 0.008). Multivariate logistic regression analyses revealed that smaller stone size (OR 0.889, 95% CI: 0.841–0.937, P < 0.001), lower MSD (OR 0.995, 95% CI: 0.994–0.996, P < 0.001), and higher SHI (OR 1.011, 95% CI: 1.008–1.014, P < 0.001) were independent predictors of one-session success. The radiologic heterogeneity of urinary stones or SHI was an independent predictor for SWL success in patients with ureteral calculi and a useful clinical parameter for stone fragility. PMID:27035621
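As defined above, the SHI is simply the standard deviation of HU samples within the stone. A minimal sketch with invented HU values (the paper does not state whether the population or sample denominator was used; the population form is shown):

```python
def stone_heterogeneity_index(hu_values):
    """SHI as described above: the standard deviation of the HU
    samples within the stone region of interest (population
    denominator n assumed here)."""
    n = len(hu_values)
    mean = sum(hu_values) / n
    return (sum((v - mean) ** 2 for v in hu_values) / n) ** 0.5

# Hypothetical HU samples from a stone region of interest on NCCT.
hu = [820, 940, 1010, 760, 890, 980]
print(round(stone_heterogeneity_index(hu), 1))
```

A higher value means a more heterogeneous stone, which the study associates with better one-session SWL success.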
NASA Astrophysics Data System (ADS)
Rata, Mihaela; Salomir, Rares; Umathum, Reiner; Jenne, Jürgen; Lafon, Cyril; Cotton, François; Bock, Michael
2008-11-01
High-intensity contact ultrasound (HICU) under MRI guidance may provide minimally invasive treatment of endocavitary digestive tumors in the esophagus, colon or rectum. In this study, a miniature receive-only coil was integrated into an endoscopic ultrasound applicator to offer high-resolution MRI guidance of thermotherapy. A cylindrical plastic support with an incorporated single element flat transducer (9.45 MHz, water cooling tip) was made and equipped with a rectangular RF loop coil surrounding the active element. The integrated coil provided significantly higher sensitivity than a four-element extracorporeal phased array coil, and the standard deviation of the MR thermometry (SDT) improved up to a factor of 7 at 10 mm depth in tissue. High-resolution morphological images (T1w-TFE and IR-T1w-TSE with a voxel size of 0.25 × 0.25 × 3 mm3) and accurate thermometry data (the PRFS method with a voxel size of 0.5 × 0.5 × 5 mm3, 2.2 s/image, 0.3 °C voxel-wise SDT) were acquired in an ex vivo esophagus sample, on a clinical 1.5T scanner. The endoscopic device was actively operated under automatic temperature control, demonstrating a high level of accuracy (1.7% standard deviation, 1.1% error of mean value), which indicates that this technology may be suitable for HICU therapy of endoluminal cancer.
The impact of electronic health record use on physician productivity.
Adler-Milstein, Julia; Huckman, Robert S
2013-11-01
To examine the impact of the degree of electronic health record (EHR) use and delegation of EHR tasks on clinician productivity in ambulatory settings. We examined EHR use in primary care practices that implemented a web-based EHR from athenahealth (n = 42) over 3 years (695 practice-month observations). Practices were predominantly small and spread throughout the country. Data came from athenahealth practice management system and EHR task logs. We developed monthly measures of EHR use and delegation to support staff from task logs. Productivity was measured using work relative value units (RVUs). Using fixed effects models, we assessed the independent impacts on productivity of EHR use and delegation. We then explored the interaction between these 2 strategies and the role of practice size. Greater EHR use and greater delegation were independently associated with higher levels of productivity. An increase in EHR use of 1 standard deviation resulted in a 5.3% increase in RVUs per clinician workday; an increase in delegation of EHR tasks of 1 standard deviation resulted in an 11.0% increase in RVUs per clinician workday (P <.05 for both). Further, EHR use and delegation had a positive joint impact on productivity in large practices (coefficient, 0.058; P <.05), but a negative joint impact on productivity in small practices (coefficient, -0.142; P <.01). Clinicians in practices that increased EHR use and delegated EHR tasks were more productive, but practice size determined whether the 2 strategies were complements or substitutes.
Gravitational Effects on Closed-Cellular-Foam Microstructure
NASA Technical Reports Server (NTRS)
Noever, David A.; Cronise, Raymond J.; Wessling, Francis C.; McMannus, Samuel P.; Mathews, John; Patel, Darayas
1996-01-01
Polyurethane foam has been produced in low gravity for the first time. The cause and distribution of different void or pore sizes are elucidated from direct comparison of unit-gravity and low-gravity samples. Low gravity is found to increase the pore roundness by 17% and reduce the void size by 50%. The standard deviation for pores becomes narrower (a more homogeneous foam is produced) in low gravity. Both a Gaussian and a Weibull model fail to describe the statistical distribution of void areas, and hence the governing dynamics do not combine small voids in either a uniform or a dependent fashion to make larger voids. Instead, the void areas follow an exponential law, which effectively randomizes the production of void sizes in a nondependent fashion consistent more with single nucleation than with multiple or combining events.
Investigation of writing error in staggered heated-dot magnetic recording systems
NASA Astrophysics Data System (ADS)
Tipcharoen, W.; Warisarn, C.; Tongsomporn, D.; Karns, D.; Kovintavewat, P.
2017-05-01
To achieve an ultra-high storage capacity, heated-dot magnetic recording (HDMR) has been proposed, which heats a bit-patterned medium before recording data. Generally, an error during the HDMR writing process comes from several sources; however, we only investigate the effects of staggered island arrangement, island size fluctuation caused by imperfect fabrication, and main pole position fluctuation. Simulation results demonstrate that the writing error can be minimized by using a staggered array (hexagonal lattice) instead of a square array. Under the effect of main pole position fluctuation, the writing error is higher than in a system without such fluctuation. Finally, we found that the error percentage can drop below 10% when the island size is 8.5 nm and the standard deviation of the island size is 1 nm in the absence of main pole jitter.
Ultrathin pyrolytic carbon films on a magnetic substrate
NASA Astrophysics Data System (ADS)
Umair, Ahmad; Raza, Tehseen Z.; Raza, Hassan
2016-07-01
We report the growth of ultrathin pyrolytic carbon (PyC) films on nickel substrate by using chemical vapor deposition at 1000 °C under methane ambience. We find that the ultra-fast cooling is crucial for PyC film uniformity by controlling the segregation of carbon on nickel. We characterize the in-plane crystal size of the PyC film by using Raman spectroscopy. The Raman peaks at ˜1354 and ˜1584 cm-1 wavenumbers are used to extract the D and G bands. The corresponding peak intensities are then used in an excitation energy dependent equation to calculate the in-plane crystal size. Using Raman area mapping, the mean value of in-plane crystal size over an area of 100 μm × 100 μm is about 22.9 nm with a standard deviation of about 2.4 nm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arbique, G; Anderson, J; Guild, J
Purpose: The National Lung Screening Trial mandated manual low dose CT technique factors, where up to a doubling of radiation output could be used over a regular to large patient size range. Recent guidance from the AAPM and ACR for lung cancer CT screening recommends radiation output adjustment for patient size either through AEC or a manual technique chart. This study evaluated the use of AEC for output control and dose reduction. Methods: The study was performed on a multidetector helical CT scanner (Aquilion ONE, Toshiba Medical) equipped with iterative reconstruction (AIDR 3D). AEC was adjusted with a standard deviation (SD) image quality noise index. The protocol SD parameter was incrementally increased to reduce patient population dose while image quality was evaluated by radiologist readers scoring the clinical utility of images on a Likert scale. Results: Plots of effective dose vs. body size (water cylinder diameter reported by the scanner) demonstrate a monotonic increase in patient dose with increasing patient size. At the initial SD setting of 19, the average CTDIvol for a standard size patient was ∼2.0 mGy (1.2 mSv effective dose). This was reduced to ∼1.0 mGy (0.5 mSv) at an SD of 25 with no noticeable reduction in clinical utility of images as demonstrated by Likert scoring. Plots of effective patient diameter and BMI vs. body size indicate that these metrics could also be used for manual technique charts. Conclusion: AEC offered consistent and reliable control of radiation output in this study. Dose for a standard size patient was reduced to one-third of the 3 mGy CTDIvol limit required for ACR accreditation of lung cancer CT screening. Gary Arbique: Research Grant, Toshiba America Medical Systems; Cecelia Brewington: Research Grant, Toshiba America Medical Systems; Di Zhang: Employee, Toshiba America Medical Systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, M-D.
2000-08-23
Internal combustion engines are a major source of airborne particulate matter (PM). The size of the engine PM is in the sub-micrometer range. The number of engine particles per unit volume is high, normally in the range of 10^12 to 10^14. To measure the size distribution of the engine particles, dilution of an aerosol sample is required. A diluter utilizing a venturi ejector mixing technique is commercially available and was tested. The purpose of this investigation was to determine if turbulence created by the ejector in the mini-dilutor changes the size of particles passing through it. The results of the NaCl aerosol experiments show no discernible difference in the geometric mean diameter and geometric standard deviation of particles passing through the ejector. Similar results were found for the DOP particles. The ratio of the total number concentrations before and after the ejector indicates that a dilution ratio of approximately 20 applies equally for DOP and NaCl particles. This indicates the dilution capability of the ejector is not affected by the particle composition. The statistical analysis results of the first and second moments of a distribution indicate that the ejector may not change the major parameters (e.g., the geometric mean diameter and geometric standard deviation) characterizing the size distributions of NaCl and DOP particles. However, when the skewness was examined, it indicates that the ejector modifies the particle size distribution significantly. The ejector could change the skewness of the distribution in an unpredictable and inconsistent manner. Furthermore, when the variability of particle counts in individual size ranges as a result of the ejector is examined, one finds that the variability is greater for DOP particles in the size range of 40-150 nm than for NaCl particles in the size range of 30 to 350 nm.
The particle counts in this size region are high enough that the Poisson counting errors are small (<10%) compared with the tail regions. This result shows that the ejector device could have a higher bin-to-bin counting uncertainty for ''soft'' particles such as DOP than for a solid dry particle like NaCl. The results suggest that it may be difficult to precisely characterize the size distribution of particles ejected from the mini-dilution system if the particle is not solid.
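The two lognormal parameters discussed above, and the Poisson counting error that limits the distribution tails, can be computed as in this sketch on hypothetical sizer data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sizer data: lognormal particle diameters (nm), typical for
# engine aerosols; bin counts carry Poisson uncertainty.
d = rng.lognormal(mean=np.log(100.0), sigma=np.log(1.6), size=5000)

# Geometric mean diameter and geometric standard deviation (GSD) are the
# first two moments of the log-transformed sizes.
gmd = np.exp(np.log(d).mean())
gsd = np.exp(np.log(d).std())
print(round(gmd, 1), round(gsd, 2))    # near 100 nm and 1.6

# The relative Poisson counting error of a bin with N counts is 1/sqrt(N),
# so sparsely populated tail bins are the least reliable part of a spectrum.
counts = np.array([10000, 400, 100, 25])
rel_err = 1.0 / np.sqrt(counts)
print(rel_err)                          # 0.01, 0.05, 0.1, 0.2
```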
Comparison of absorbed-dose-to-water units for Co-60 and high-energy x-rays between PTB and LNE-LNHB
NASA Astrophysics Data System (ADS)
Delaunay, F.; Kapsch, R.-P.; Gouriou, J.; Illemann, J.; Krauss, A.; Le Roy, M.; Ostrowsky, A.; Sommier, L.; Vermesse, D.
2012-10-01
During the Euramet project JRP7 ‘External Beam Cancer Therapy’, PTB and LNE-LNHB used primary standards to determine the absorbed dose to water under IMRT conditions (in small fields). PTB used a water calorimeter to determine the absorbed-dose-to-water references in 6 MV and 10 MV beams for field sizes of 10 cm × 10 cm and 3 cm × 3 cm, while LNE-LNHB used graphite calorimeters in 6 MV and 12 MV beams for field sizes of 10 cm × 10 cm, 4 cm × 4 cm and 2 cm × 2 cm. The purpose of this study is to compare the new PTB and LNE-LNHB absorbed-dose-to-water references. LNE-LNHB sent an Exradin A1SL ionization chamber traceable to its primary standard to PTB for calibration in 60Co and in linac beams, and PTB sent a PTW 31010 ionization chamber traceable to its primary standard to LNE-LNHB for calibration in 60Co and in linac beams. Calculated Sw,air will be used as the beam quality specifier for the ionization chamber comparison at different field sizes. The standard uncertainties (k = 1) of the PTB and LNE-LNHB calibration coefficients lie between 0.25% (60Co) and 0.40% (linac) and between 0.29% and 0.46%, respectively. The PTB and LNE-LNHB absorbed-dose-to-water references developed for this project, based respectively on water calorimetry and on graphite calorimetry, agree within 1.5 standard deviations for field sizes from 10 cm × 10 cm down to 2 cm × 2 cm and for beams of 6 MV to 10 MV.
N2/O2/H2 Dual-Pump Cars: Validation Experiments
NASA Technical Reports Server (NTRS)
OByrne, S.; Danehy, P. M.; Cutler, A. D.
2003-01-01
The dual-pump coherent anti-Stokes Raman spectroscopy (CARS) method is used to measure temperature and the relative species densities of N2, O2 and H2 in two experiments. Average values and root-mean-square (RMS) deviations are determined. Mean temperature measurements in a furnace containing air between 300 and 1800 K agreed with thermocouple measurements within 26 K on average, while mean mole fractions agree to within 1.6 % of the expected value. The temperature measurement standard deviation averaged 64 K while the standard deviation of the species mole fractions averaged 7.8% for O2 and 3.8% for N2, based on 200 single-shot measurements. Preliminary measurements have also been performed in a flat-flame burner for fuel-lean and fuel-rich flames. Temperature standard deviations of 77 K were measured, and the ratios of H2 to N2 and O2 to N2 respectively had standard deviations from the mean value of 12.3% and 10% of the measured ratio.
The kilometer-sized Main Belt asteroid population revealed by Spitzer
NASA Astrophysics Data System (ADS)
Ryan, E. L.; Mizuno, D. R.; Shenoy, S. S.; Woodward, C. E.; Carey, S. J.; Noriega-Crespo, A.; Kraemer, K. E.; Price, S. D.
2015-06-01
Aims: Multi-epoch Spitzer Space Telescope 24 μm data from the MIPSGAL and Taurus Legacy surveys are utilized to detect asteroids based on their relative motion. Methods: Infrared detections are matched to known asteroids, and average diameters and albedos are derived using the near Earth asteroid thermal model (NEATM) for 1865 asteroids ranging in size from 0.2 to 169 km. A small subsample of these objects was also detected by IRAS or MSX, and the single wavelength albedo and diameter fits derived from these data are within the uncertainties of the IRAS and/or MSX derived albedos and diameters and available occultation diameters, which demonstrates the robustness of our technique. Results: The mean geometric albedo of the small Main Belt asteroids in this sample is pV = 0.134 with a sample standard deviation of 0.106. The albedo distribution of this sample is far more diverse than the IRAS or MSX samples. The cumulative size-frequency distribution of asteroids in the Main Belt at small diameters is directly derived, and a 3σ deviation from the fitted size-frequency distribution slope is found near 8 km. Completeness limits of the optical and infrared surveys are discussed. Tables 1-3 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/578/A42
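The cumulative size-frequency slope fit described in the results can be sketched as follows on a hypothetical single-power-law population (the assumed slope b = 1.8 is illustrative, not the paper's fitted value); a kink such as the one reported near 8 km would appear as a departure from the fitted line:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical asteroid diameters (km) from a single power law: the
# cumulative size-frequency distribution N(>D) ∝ D^-b is a straight line
# of slope -b in log-log space.
b_true = 1.8
d_min = 1.0
diams = d_min * (1.0 - rng.random(20000)) ** (-1.0 / b_true)  # inverse CDF

d = np.sort(diams)
n_gt = np.arange(d.size, 0, -1)          # number of objects with diameter >= d
keep = d < np.quantile(d, 0.99)          # trim the sparse large-D tail
slope, _ = np.polyfit(np.log10(d[keep]), np.log10(n_gt[keep]), 1)
print(round(-slope, 2))                  # recovered slope, close to b = 1.8
```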
Comparative study of navigated versus freehand osteochondral graft transplantation of the knee.
Koulalis, Dimitrios; Di Benedetto, Paolo; Citak, Mustafa; O'Loughlin, Padhraig; Pearle, Andrew D; Kendoff, Daniel O
2009-04-01
Osteochondral lesions are a common sports-related injury for which osteochondral grafting, including mosaicplasty, is an established treatment. Computer navigation has been gaining popularity in orthopaedic surgery to improve accuracy and precision. Navigation improves angle and depth matching during harvest and placement of osteochondral grafts compared with the conventional freehand open technique. Controlled laboratory study. Three cadaveric knees were used. Reference markers were attached to the femur, tibia, and donor/recipient site guides. Fifteen osteochondral grafts were harvested and inserted into recipient sites with computer navigation, and 15 similar grafts were inserted freehand. The angles of graft removal and placement as well as surface congruity (graft depth) were calculated for each surgical group. The mean harvesting angle at the donor site using navigation was 4° (standard deviation, 2.3°; range, 1°-9°) versus 12° (standard deviation, 5.5°; range, 5°-24°) using the freehand technique (P < .0001). The recipient plug removal angle using the navigated technique was 3.3° (standard deviation, 2.1°; range, 0°-9°) versus 10.7° (standard deviation, 4.9°; range, 2°-17°) freehand (P < .0001). The mean navigated recipient plug placement angle was 3.6° (standard deviation, 2.0°; range, 1°-9°) versus 10.6° (standard deviation, 4.4°; range, 3°-17°) with the freehand technique (P = .0001). The mean height of plug protrusion under navigation was 0.3 mm (standard deviation, 0.2 mm; range, 0-0.6 mm) versus 0.5 mm (standard deviation, 0.3 mm; range, 0.2-1.1 mm) using the freehand technique (P = .0034). Significantly greater accuracy and precision were observed in harvesting and placement of the osteochondral grafts in the navigated procedures.
Clinical studies are needed to establish a benefit in vivo. Improvement in the osteochondral harvest and placement is desirable to optimize clinical outcomes. Navigation shows great potential to improve both harvest and placement precision and accuracy, thus optimizing ultimate surface congruity.
SU-F-T-177: Impacts of Gantry Angle Dependent Scanning Beam Properties for Proton Treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Y; Clasie, B; Lu, H
Purpose: In pencil beam scanning (PBS), the delivered spot MU, position and size are slightly different at different gantry angles. We investigated the level of delivery uncertainty at different gantry angles through a log file analysis. Methods: 34 PBS fields covering the full 360 degree gantry angle spread were collected retrospectively from 28 patients treated at our institution. All fields were delivered at zero gantry angle and at the prescribed gantry angle, and measured at isocenter with the MatriXX 2D array detector at the prescribed gantry angle. The machine log files were analyzed to extract the delivered MU per spot and the beam position from the strip ionization chambers in the treatment nozzle. The beam size was separately measured as a function of gantry angle and beam energy. Using this information, the dose was calculated in a water phantom at both gantry angles and compared to the measurement using the 3D γ-index at 2mm/2%. Results: The spot-by-spot difference between the beam positions in the log files from the deliveries at the two gantry angles has a mean of 0.3 and 0.4 mm and a standard deviation of 0.6 and 0.7 mm for the x and y directions, respectively. Similarly, the spot-by-spot difference between the MU in the log files from the deliveries at the two gantry angles has a mean of 0.01% and a standard deviation of 0.7%. These small deviations lead to an excellent agreement in dose calculations, with an average γ pass rate for all fields of approximately 99.7%. When each calculation is compared to the measurement, a high correlation in γ was also found. Conclusion: Using machine log files, we verified that gantry-angle-dependent deviations in PBS beam delivery are sufficiently small relative to the planned spot positions and MU. This study brings us one step closer to simplifying our patient-specific QA.
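The γ-index comparison at 2 mm/2% can be illustrated with a minimal 1D sketch on hypothetical dose profiles (the study's analysis is 3D and compares calculation to measurement, but the criterion is the same):

```python
import numpy as np

# Minimal 1D γ-index: for each reference point, search all evaluated points
# for the minimum combined distance in (position / DTA, dose / tolerance)
# space; a point passes when γ <= 1.
def gamma_1d(x, dose_ref, dose_eval, dta_mm=2.0, dose_pct=2.0):
    dd = dose_pct / 100.0 * dose_ref.max()   # dose tolerance, % of max dose
    gammas = np.empty(x.size)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        g2 = ((x - xi) / dta_mm) ** 2 + ((dose_eval - di) / dd) ** 2
        gammas[i] = np.sqrt(g2.min())
    return gammas

x = np.linspace(-20, 20, 401)                   # mm, 0.1 mm spacing
ref = np.exp(-x**2 / (2 * 5.0**2))              # Gaussian reference profile
shifted = np.exp(-(x - 0.5)**2 / (2 * 5.0**2))  # evaluated profile, 0.5 mm off

g = gamma_1d(x, ref, shifted)
pass_rate = 100.0 * (g <= 1.0).mean()
print(pass_rate)  # a 0.5 mm shift passes 2 mm/2% everywhere: 100.0
```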
Matrix Summaries Improve Research Reports: Secondary Analyses Using Published Literature
ERIC Educational Resources Information Center
Zientek, Linda Reichwein; Thompson, Bruce
2009-01-01
Correlation matrices and standard deviations are the building blocks of many of the commonly conducted analyses in published research, and AERA and APA reporting standards recommend their inclusion when reporting research results. The authors argue that the inclusion of correlation/covariance matrices, standard deviations, and means can enhance…
30 CFR 74.8 - Measurement, accuracy, and reliability requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... concentration, as defined by the relative standard deviation of the distribution of measurements. The relative standard deviation shall be less than 0.1275 without bias for both full-shift measurements of 8 hours or... Standards, Regulations, and Variances, 1100 Wilson Boulevard, Room 2350, Arlington, Virginia 22209-3939...
Al-Ekrish, Asma'a A; Alfadda, Sara A; Ameen, Wadea; Hörmann, Romed; Puelacher, Wolfgang; Widmann, Gerlig
2018-06-16
To compare the surface of computer-aided design (CAD) models of the maxilla produced using ultra-low MDCT doses combined with filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) reconstruction techniques with that produced from a standard dose/FBP protocol. A cadaveric completely edentulous maxilla was imaged using a standard dose protocol (CTDIvol: 29.4 mGy) and FBP, in addition to 5 low dose test protocols (LD1-5) (CTDIvol: 4.19, 2.64, 0.99, 0.53, and 0.29 mGy) reconstructed with FBP, ASIR 50, ASIR 100, and MBIR. A CAD model from each test protocol was superimposed onto the reference model using the 'Best Fit Alignment' function. Differences between the test and reference models were analyzed as maximum and mean deviations, and root-mean-square of the deviations, and color-coded models were obtained which demonstrated the location, magnitude and direction of the deviations. Based upon the magnitude, size, and distribution of areas of deviations, CAD models from the following protocols were comparable to the reference model: FBP/LD1; ASIR 50/LD1 and LD2; ASIR 100/LD1, LD2, and LD3; MBIR/LD1. The following protocols demonstrated deviations mostly between 1-2 mm or under 1 mm but over large areas, and so their effect on surgical guide accuracy is questionable: FBP/LD2; MBIR/LD2, LD3, LD4, and LD5. The following protocols demonstrated large deviations over large areas and therefore were not comparable to the reference model: FBP/LD3, LD4, and LD5; ASIR 50/LD3, LD4, and LD5; ASIR 100/LD4, and LD5. When MDCT is used for CAD models of the jaws, dose reductions of 86% may be possible with FBP, 91% with ASIR 50, and 97% with ASIR 100. Analysis of the stability and accuracy of CAD/CAM surgical guides as directly related to the jaws is needed to confirm the results.
Old-growth and mature forests near spotted owl nests in western Oregon
NASA Technical Reports Server (NTRS)
Ripple, William J.; Johnson, David H.; Hershey, K. T.; Meslow, E. Charles
1995-01-01
We investigated how the amount of old-growth and mature forest influences the selection of nest sites by northern spotted owls (Strix occidentalis caurina) in the Central Cascade Mountains of Oregon. We used 7 different plot sizes to compare the proportion of mature and old-growth forest between 30 nest sites and 30 random sites. The proportion of old-growth and mature forest was significantly greater at nest sites than at random sites for all plot sizes (P less than or equal to 0.01). Thus, management of the spotted owl might require setting the percentage of old-growth and mature forest retained from harvesting to at least 1 standard deviation above the mean for the 30 nest sites we examined.
Planar Laser Imaging of Sprays for Liquid Rocket Studies
NASA Technical Reports Server (NTRS)
Lee, W.; Pal, S.; Ryan, H. M.; Strakey, P. A.; Santoro, Robert J.
1990-01-01
A planar laser imaging technique which incorporates an optical polarization ratio technique for droplet size measurement was studied. A series of pressure atomized water sprays were studied with this technique and compared with measurements obtained using a Phase Doppler Particle Analyzer. In particular, the effects of assuming a logarithmic normal distribution function for the droplet size distribution within a spray were evaluated. Reasonable agreement between the instruments was obtained for the geometric mean diameter of the droplet distribution. However, comparisons based on the Sauter mean diameter show larger discrepancies, essentially because of uncertainties in the appropriate standard deviation to be applied for the polarization ratio technique. Comparisons were also made between single laser pulse (temporally resolved) measurements and multiple laser pulse visualizations of the spray.
Scaling laws in the dynamics of crime growth rate
NASA Astrophysics Data System (ADS)
Alves, Luiz G. A.; Ribeiro, Haroldo V.; Mendes, Renio S.
2013-06-01
The increasing number of crimes in areas with large concentrations of people has made cities one of the main sources of violence. Understanding how crime rates grow and how they relate to city size goes beyond an academic question; it is a central issue for contemporary society. Here, we characterize and analyze quantitative aspects of murders in the period from 1980 to 2009 in Brazilian cities. We find that the distributions of the annual, biannual and triannual logarithmic homicide growth rates exhibit the same functional form at distinct scales, that is, a scale invariant behavior. We also identify asymptotic power-law decay relations between the standard deviations of these three growth rates and the initial size. Further, we discuss similarities with complex organizations.
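The power-law decay of the growth-rate standard deviation with initial size can be sketched on synthetic data (the exponent beta = 0.2 is illustrative, not the paper's fitted value):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical illustration of the scaling law: the standard deviation of
# logarithmic growth rates r = ln(S1/S0) decays with the initial size S0
# as a power law, sigma(S0) ∝ S0^(-beta).
beta = 0.2
s0 = 10.0 ** rng.uniform(1, 5, 50000)   # initial sizes spanning 4 decades
r = rng.normal(0.0, s0 ** (-beta))      # growth rates with size-dependent spread

# Bin by initial size and measure the spread in each bin.
edges = np.logspace(1, 5, 9)
centers, sigmas = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (s0 >= lo) & (s0 < hi)
    centers.append(np.sqrt(lo * hi))    # geometric bin center
    sigmas.append(r[sel].std())
slope, _ = np.polyfit(np.log10(centers), np.log10(sigmas), 1)
print(round(-slope, 2))                 # recovered exponent, close to 0.2
```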
The effects of auditory stimulation with music on heart rate variability in healthy women.
Roque, Adriano L; Valenti, Vitor E; Guida, Heraldo L; Campos, Mônica F; Knap, André; Vanderlei, Luiz Carlos M; Ferreira, Lucas L; Ferreira, Celso; Abreu, Luiz Carlos de
2013-07-01
There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque musical and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first music exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women were exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of instantaneous beat-by-beat variability, standard deviation of the long-term RR interval, standard deviation of instantaneous beat-by-beat variability and standard deviation of the long-term RR interval ratio), low frequency, high frequency, low frequency/high frequency ratio, standard deviation of all the normal RR intervals, root-mean square of differences between the adjacent normal RR intervals and the percentage of adjacent RR intervals with a difference of duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes. 
The triangular index and the standard deviation of the long-term RR interval indices were reduced during exposure to both music styles in the first group and tended to decrease in the second group whereas the white noise exposure decreased the high frequency index. We observed no changes regarding the triangular interpolation of RR intervals, standard deviation of instantaneous beat-by-beat variability and standard deviation of instantaneous beat-by-beat variability/standard deviation in the long-term RR interval ratio. We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level.
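The time-domain and Poincaré indices discussed above can be computed from an RR-interval series as in this sketch (hypothetical data; the standard HRV formulas are assumed):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical resting RR-interval series (ms); a real series would come
# from the 10-minute ECG recordings described in the study.
rr = 800 + rng.normal(0, 50, 600)
drr = np.diff(rr)

sdnn = rr.std()                            # SD of all normal RR intervals
rmssd = np.sqrt(np.mean(drr ** 2))         # RMS of successive differences
pnn50 = 100.0 * np.mean(np.abs(drr) > 50)  # % of successive diffs > 50 ms

# Poincaré plot descriptors: SD1 (instantaneous beat-to-beat variability)
# and SD2 (long-term variability), from the 45-degree-rotated (RRn, RRn+1)
# scatter cloud.
sd1 = np.sqrt(0.5) * drr.std()
sd2 = np.sqrt(2 * rr.std() ** 2 - 0.5 * drr.std() ** 2)
print(round(sdnn, 1), round(rmssd, 1), round(sd1, 1), round(sd2, 1))
```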
Evolution of Extragalactic Radio Sources and Quasar/Galaxy Unification
NASA Astrophysics Data System (ADS)
Onah, C. I.; Ubachukwu, A. A.; Odo, F. C.; Onuchukwu, C. C.
2018-04-01
We use a large sample of radio sources to investigate the effects of evolution, luminosity selection and radio source orientation in explaining the apparent deviation of the observed angular size - redshift (θ - z) relation of extragalactic radio sources (EGRSs) from the standard model. We have fitted the observed θ - z data with standard cosmological models based on a flat universe (Ω0 = 1). The size evolution of EGRSs has been described as luminosity-, temporal- and orientation-dependent in the form D(P, z, Φ) ∝ P^(±q)(1 + z)^(-m) sin Φ, with q = 0.3, Φ = 59°, m = -0.26 for radio galaxies and q = -0.5, Φ = 33°, m = 3.1 for radio quasars, respectively. Critical values of luminosity, log P_crit = 26.33 W Hz^-1, and linear size, log D_c = 2.51 (D_c = 316.23 kpc), were also observed for the present sample of radio sources. All the results were found to be consistent with the popular quasar/galaxy unification scheme.
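A minimal sketch of the standard-model θ - z relation the sources are compared against, assuming an Einstein-de Sitter universe (Ω0 = 1, as in the abstract) and an illustrative H0 = 70 km/s/Mpc:

```python
import numpy as np

# Angular size vs redshift for a rigid rod in a flat, matter-dominated
# (Einstein-de Sitter) universe: the apparent size shrinks out to z = 5/4
# and grows again beyond it, which is the baseline any size evolution is
# measured against.
c = 2.998e5                 # speed of light, km/s
H0 = 70.0                   # km/s/Mpc, assumed value for illustration
D = 0.316                   # Mpc, a ~316 kpc source as quoted in the abstract

z = np.linspace(0.05, 5.0, 1000)
# EdS angular-diameter distance: D_A = (2c/H0) * (1 - 1/sqrt(1+z)) / (1+z)
d_a = (2 * c / H0) * (1 - 1 / np.sqrt(1 + z)) / (1 + z)
theta = D / d_a             # angular size, radians
z_min = z[np.argmin(theta)]
print(round(z_min, 2))      # minimum angular size at z = 5/4 in EdS
```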
Full wafer size investigation of N+ and P+ co-implanted layers in 4H-SiC
NASA Astrophysics Data System (ADS)
Blanqué, S.; Lyonnet, J.; Pérez, R.; Terziyska, P.; Contreras, S.; Godignon, P.; Mestres, N.; Pascual, J.; Camassel, J.
2005-03-01
We report a full wafer size investigation of the homogeneity of electrical properties in the case of co-implanted nitrogen and phosphorus ions in 4H-SiC semi-insulating wafers. To match standard industrial requirements, implantation was done at room temperature. To achieve a detailed electrical knowledge, we worked on a 35 mm wafer on which 77 different reticules have been processed. Every reticule includes one Hall cross, one Van der Pauw test structure and different TLM patterns. Hall measurements have been made on all 77 different reticules, using an Accent HL5500 Hall System® from BioRad fitted with a home-made support to collect data from room temperature down to about 150 K. At room temperature, we find that the sheet carrier concentration is only 1/4 of the total implanted dose while the average mobility is 80.6 cm2/Vs. The standard deviation is, typically, 1.5 cm2/Vs.
USL/DBMS NASA/PC R and D project C programming standards
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Moreau, Dennis R.
1984-01-01
A set of programming standards intended to promote reliability, readability, and portability of C programs written for PC research and development projects is established. These standards must be adhered to except where reasons for deviation are clearly identified and approved by the PC team. Any approved deviation from these standards must also be clearly documented in the pertinent source code.
Ran, Yang; Su, Rongtao; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Si, Lei
2016-05-10
We present the standard deviation as a new quantitative index for measuring the homogeneity of spectral lines in a fiber amplifier system, so as to find the relation between the stimulated Brillouin scattering (SBS) threshold and the homogeneity of the corresponding spectral lines. A theoretical model is built and a simulation framework has been established to estimate the SBS threshold when input spectra with different homogeneities are set. In our experiment, by setting the phase modulation voltage to a constant value and the modulation frequency to different values, spectral lines with different homogeneities can be obtained. The experimental results show that the SBS threshold decreases as the standard deviation of the modulated spectrum increases, which is in good agreement with the theoretical results. When the phase modulation voltage is confined to 10 V and the modulation frequency is set to 80 MHz, the standard deviation of the modulated spectrum equals 0.0051, the lowest value in our experiment. Thus, at this setting, the highest SBS threshold is achieved. This standard deviation can be a good quantitative index for evaluating the power scaling potential of a fiber amplifier system, and it provides a design guideline for better SBS suppression.
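A minimal sketch of such a homogeneity index on hypothetical spectra (the normalization and the exact definition here are assumptions for illustration, not the paper's stated formula):

```python
import numpy as np

# Homogeneity index sketch: the standard deviation of the normalized
# spectral-line intensities. A perfectly flat comb of lines has SD 0;
# concentrating power in a few lines raises the SD, which (per the paper)
# corresponds to a lower SBS threshold.
def spectral_sd(intensities):
    p = np.asarray(intensities, dtype=float)
    p = p / p.sum()                    # normalize total power to 1
    return p.std()

flat = [1, 1, 1, 1, 1]                 # evenly modulated spectrum
peaked = [10, 1, 1, 1, 1]              # most power in a single line
print(spectral_sd(flat), round(spectral_sd(peaked), 3))
```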
Zhang, Lin; Huttin, Olivier; Marie, Pierre-Yves; Felblinger, Jacques; Beaumont, Marine; Chillou, Christian DE; Girerd, Nicolas; Mandry, Damien
2016-11-01
To compare three widely used methods for myocardial infarct (MI) sizing on late gadolinium-enhanced (LGE) magnetic resonance (MR) images: manual delineation and two semiautomated techniques (full-width at half-maximum [FWHM] and n-standard deviation [SD]). 3T phase-sensitive inversion-recovery (PSIR) LGE images of 114 patients after an acute MI (2-4 days and 6 months) were analyzed by two independent observers to determine both total and core infarct sizes (TIS/CIS). Manual delineation served as the reference for determination of optimal thresholds for semiautomated methods after thresholding at multiple values. Reproducibility and accuracy were expressed as overall bias ± 95% limits of agreement. Mean infarct sizes by the manual method were 39.0%/24.4% for the acute MI group (TIS/CIS) and 29.7%/17.3% for the chronic MI group. The optimal thresholds (ie, providing the closest mean value to the manual method) were FWHM30% and 3SD for the TIS measurement and FWHM45% and 6SD for the CIS measurement (paired t-test; all P > 0.05). The best reproducibility was obtained using FWHM. For TIS measurement in the acute MI group, intra-/interobserver agreements, from Bland-Altman analysis, with FWHM30%, 3SD, and manual were -0.02 ± 7.74%/-0.74 ± 5.52%, 0.31 ± 9.78%/2.96 ± 16.62%, and -2.12 ± 8.86%/0.18 ± 16.12%, respectively; in the chronic MI group, the corresponding values were 0.23 ± 3.5%/-2.28 ± 15.06%, -0.29 ± 10.46%/3.12 ± 13.06%, and 1.68 ± 6.52%/-2.88 ± 9.62%, respectively. A similar trend for reproducibility was obtained for CIS measurement. However, semiautomated methods produced inconsistent results (variabilities of 24-46%) compared to manual delineation. The FWHM technique was the most reproducible method for infarct sizing both in acute and chronic MI. However, both FWHM and n-SD methods showed limited accuracy compared to manual delineation. J. Magn. Reson. Imaging 2016;44:1206-1217. © 2016 International Society for Magnetic Resonance in Medicine.
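The two semiautomated thresholding rules can be sketched on hypothetical pixel data as follows (a 1D intensity sample stands in for a segmented LGE image; the intensity values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical LGE pixel intensities: remote (healthy) myocardium plus a
# hyperenhanced infarct region of 500 pixels; both threshold rules try to
# recover the infarct size.
remote = rng.normal(100, 10, 2000)     # healthy-myocardium pixels
infarct = rng.normal(300, 30, 500)     # enhanced (infarcted) pixels
img = np.concatenate([remote, infarct])

# n-SD rule: threshold = remote mean + n * remote SD (n = 3 here)
thr_nsd = remote.mean() + 3 * remote.std()

# FWHM-style rule: threshold at a fraction of the maximal intensity (50%
# for the classic FWHM; the study tunes variants such as FWHM30%, FWHM45%)
thr_fwhm = 0.5 * img.max()

size_nsd = int((img > thr_nsd).sum())
size_fwhm = int((img > thr_fwhm).sum())
print(size_nsd, size_fwhm)             # both close to the 500 true pixels
```

On well-separated synthetic intensities both rules recover the infarct; the paper's point is that on real images their agreement with manual delineation is much weaker.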
Accelerator test of the coded aperture mask technique for gamma-ray astronomy
NASA Technical Reports Server (NTRS)
Jenkins, T. L.; Frye, G. M., Jr.; Owens, A.; Carter, J. N.; Ramsden, D.
1982-01-01
A prototype gamma-ray telescope employing the coded aperture mask technique has been constructed and its response to a point source of 20 MeV gamma-rays has been measured. The point spread function is approximately a Gaussian with a standard deviation of 12 arc minutes. This resolution is consistent with the cell size of the mask used and the spatial resolution of the detector. In the context of the present experiment, the error radius of the source position (90 percent confidence level) is 6.1 arc minutes.
Mechanism-Based Design for High-Temperature, High-Performance Composites. Book 3.
1997-09-01
γ₁² = (e·β):(e·β) − ¼(e:β)² = e₁₃² + ¼(e₁₁ − e₃₃)², (77); γ₂² = e²:α − (e:α)² = e₁₂² + e₂₃², (78); where n = e₂, β = I − nn = e₁e₁ + e₃e₃, and the Cartesian… relation, the particles most susceptible to fracture are those at the larger size range of the population. Thus, with increasing standard deviation of… strength variability is associated exclusively with a single population of flaws. The second is based on comparisons of mean strengths of two or more…
Performance Characterization of an xy-Stage Applied to Micrometric Laser Direct Writing Lithography.
Jaramillo, Juan; Zarzycki, Artur; Galeano, July; Sandoz, Patrick
2017-01-31
This article concerns the characterization of the stability and performance of a motorized stage used in laser direct writing lithography. The system was built from commercial components and commanded by G-code. Measurements use a pseudo-periodic-pattern (PPP) observed by a camera, and image processing is based on Fourier transform and phase measurement methods. The results show that the system's stability against vibrations is characterized by peak-to-valley deviations of 65 nm and 26 nm in the x and y directions, respectively, with a standard deviation of 10 nm in both directions. When the xy-stage is in movement, it works with a resolution of 0.36 μm, which is acceptable for most research and development (R&D) microtechnology applications, in which the typical feature size is in the micrometer range.
NetCDF file of the SREF standard deviation of wind speed and direction that was used to inject variability into the FDDA input. Variable U_NDG_OLD contains the standard deviation of wind speed (m/s); variable V_NDG_OLD contains the standard deviation of wind direction (deg). This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. Journal of Geophysical Research: Atmospheres 120(23): 12,259-12,280 (2015).
Miyashita, Shin-Ichi; Mitsuhashi, Hiroaki; Fujii, Shin-Ichiro; Takatsu, Akiko; Inagaki, Kazumi; Fujimoto, Toshiyuki
2017-02-01
In order to facilitate reliable and efficient determination of both the particle number concentration (PNC) and the size of nanoparticles (NPs) by single-particle ICP-MS (spICP-MS) without the need to correct for the particle transport efficiency (TE, a possible source of bias in the results), a total-consumption sample introduction system consisting of a large-bore, high-performance concentric nebulizer and a small-volume on-axis cylinder chamber was utilized. Such a system potentially permits a particle TE of 100 %, meaning that there is no need to include a particle TE correction when calculating the PNC and the NP size. When the particle TE through the sample introduction system was evaluated by comparing the frequency of sharp transient signals from the NPs in a measured NP standard of precisely known PNC to the particle frequency for a measured NP suspension, the TE for platinum NPs with a nominal diameter of 70 nm was found to be very high (i.e., 93 %), and showed satisfactory repeatability (relative standard deviation of 1.0 % for four consecutive measurements). These results indicated that employing this total consumption system allows the particle TE correction to be ignored when calculating the PNC. When the particle size was determined using a solution-standard-based calibration approach without an NP standard, the particle diameters of platinum and silver NPs with nominal diameters of 30-100 nm were found to agree well with the particle diameters determined by transmission electron microscopy, regardless of whether a correction was performed for the particle TE. Thus, applying the proposed system enables NP size to be accurately evaluated using a solution-standard-based calibration approach without the need to correct for the particle TE.
Last Millennium ENSO-Mean State Interactions in the Tropical Pacific
NASA Astrophysics Data System (ADS)
Wyman, D. A.; Conroy, J. L.; Karamperidou, C.
2017-12-01
The nature and degree of interaction between the mean state of the tropical Pacific and ENSO remains an open question. Here we use high temporal resolution, tropical Pacific sea surface temperature (SST) records from the last millennium to investigate the relationship between ENSO and the tropical Pacific zonal sea surface temperature gradient (hereafter dSST). A dSST time series was created by standardizing, interpolating, and compositing 7 SST records from the western and 3 SST records from the eastern tropical Pacific. Propagating the age uncertainty of each of these records was accomplished through a Monte Carlo Empirical Orthogonal Function analysis. We find last millennium dSST is strong from 700 to 1300 CE, begins to weaken at approximately 1300 CE, and decreases more rapidly at 1700 CE. dSST was compared to 14 different ENSO reconstructions, independent of the records used to create dSST, to assess the nature of the ENSO-mean state relationship. dSST correlations with 50-year standard deviations of ENSO reconstructions are consistently negative, suggesting that more frequent, strong El Niño events on this timescale reduce dSST. To further assess the strength and direction of the ENSO-dSST relationship, moving 100-year standard deviations of ENSO reconstructions were compared to moving 100-year averages of dSST using Cohen's Kappa statistic, which measures categorical agreement. The Li et al. (2011) and Li et al. (2013) Niño 3.4 ENSO reconstructions had the highest agreement with dSST (k=0.80 and 0.70, respectively), with greater ENSO standard deviation coincident with periods of weak dSST. Other ENSO reconstructions showed weaker agreement with dSST, which may be partly due to low sample size.
The consistent directional agreement of dSST with ENSO, coupled with the inability of strong ENSO events to develop under a weak SST gradient, suggests periods of more frequent strong El Niño events reduced tropical Pacific dSST on centennial timescales over the last millennium.
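The comparison described above, moving-window standard deviations of an ENSO index against moving-window means of dSST, scored with Cohen's kappa, can be sketched as follows. All series here are synthetic stand-ins (the transition date, amplitudes, and noise levels are invented), built so that ENSO variance rises while dSST weakens:

```python
import numpy as np

def rolling(x, w, fn):
    """Apply fn to each length-w moving window of x."""
    return np.array([fn(x[i:i + w]) for i in range(len(x) - w + 1)])

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    cats = np.unique(np.concatenate([a, b]))
    po = np.mean(a == b)
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)
    return (po - pe) / (1 - pe)

# synthetic last-millennium series: ENSO variance rises after 1300 CE
# while the zonal SST gradient (dSST) weakens
years = np.arange(850, 1850)
rng = np.random.default_rng(1)
enso_amp = np.where(years < 1300, 0.5, 1.5)
enso = rng.normal(0.0, enso_amp)
dsst = 2.0 - enso_amp + rng.normal(0.0, 0.05, years.size)

w = 100
enso_sd = rolling(enso, w, np.std)     # moving 100-yr SD of the ENSO index
dsst_mean = rolling(dsst, w, np.mean)  # moving 100-yr mean of dSST

# categorize each window (high/low relative to the median) and score agreement
hi_enso = (enso_sd > np.median(enso_sd)).astype(int)
weak_dsst = (dsst_mean < np.median(dsst_mean)).astype(int)
kappa = cohens_kappa(hi_enso, weak_dsst)
```

Because the synthetic series change state together, kappa comes out near 1; weaker or lagged coupling between the two series lowers it toward 0.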
75 FR 67093 - Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-01
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2010-P-0517] Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... from the requirements of the standards of identity issued under section 401 of the Federal Food, Drug...
78 FR 2273 - Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-10
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2012-P-1189] Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... interstate shipment of experimental packs of food varying from the requirements of standards of identity...
Upgraded FAA Airfield Capacity Model. Volume 2. Technical Description of Revisions
1981-02-01
t k - the time at which departure k is released. [Figure 3-1: time axis diagram of single runway operations.] ... the standard deviation of the interarrival time. SIGMAR - the standard deviation of the arrival runway occupancy time. SINGLE - program subroutine for ...
Methods of editing cloud and atmospheric layer affected pixels from satellite data
NASA Technical Reports Server (NTRS)
Nixon, P. R.; Wiegand, C. L.; Richardson, A. J.; Johnson, M. P. (Principal Investigator)
1982-01-01
Subvisible cirrus clouds (SCi) were easily distinguished in mid-infrared (MIR) TIROS-N daytime data from south Texas and northeast Mexico. The MIR (3.55-3.93 micrometer) pixel digital count means of the SCi affected areas were more than 3.5 standard deviations on the cold side of the scene means. (These standard deviations were made free of the effects of unusual instrument error by factoring out the Ch 3 MIR noise on the basis of detailed examination of noisy and noise-free pixels). SCi affected areas in the IR Ch 4 (10.5-11.5 micrometer) appeared cooler than the general scene, but were not as prominent as in Ch 3, being less than 2 standard deviations from the scene mean. Ch 3 and 4 standard deviations and coefficients of variation are not reliable indicators, by themselves, of the presence of SCi because land features can have similar statistical properties.
Piva, Sara R.; Gil, Alexandra B.; Moore, Charity G.; Fitzgerald, G. Kelley
2016-01-01
Objective To assess internal and external responsiveness of the Activity of Daily Living Scale of the Knee Outcome Survey and Numeric Pain Rating Scale in patients with patellofemoral pain. Design One group pre-post design. Subjects A total of 60 individuals with patellofemoral pain (33 women; mean age 29.9 (standard deviation 9.6) years). Methods The Activity of Daily Living Scale and the Numeric Pain Rating Scale were assessed before and after an 8-week physical therapy program. Patients completed a global rating of change scale at the end of therapy. The standardized effect size, Guyatt responsiveness index, and the minimum clinical important difference were calculated. Results Standardized effect size of the Activity of Daily Living Scale was 0.63, Guyatt responsiveness index was 1.4, area under the curve was 0.83 (95% confidence interval: 0.72, 0.94), and the minimum clinical important difference corresponded to an increase of 7.1 percentile points. Standardized effect size of the Numeric Pain Rating Scale was 0.72, Guyatt responsiveness index was 2.2, area under the curve was 0.80 (95% confidence interval: 0.70, 0.92), and the minimum clinical important difference corresponded to a decrease of 1.16 points. Conclusion Information from this study may be helpful to therapists when evaluating the effectiveness of rehabilitation intervention on physical function and pain, and to power future clinical trials in patients with patellofemoral pain. PMID:19229444
A Taxonomy of Delivery and Documentation Deviations During Delivery of High-Fidelity Simulations.
McIvor, William R; Banerjee, Arna; Boulet, John R; Bekhuis, Tanja; Tseytlin, Eugene; Torsher, Laurence; DeMaria, Samuel; Rask, John P; Shotwell, Matthew S; Burden, Amanda; Cooper, Jeffrey B; Gaba, David M; Levine, Adam; Park, Christine; Sinz, Elizabeth; Steadman, Randolph H; Weinger, Matthew B
2017-02-01
We developed a taxonomy of simulation delivery and documentation deviations noted during a multicenter, high-fidelity simulation trial that was conducted to assess practicing physicians' performance. Eight simulation centers sought to implement standardized scenarios over 2 years. Rules, guidelines, and detailed scenario scripts were established to facilitate reproducible scenario delivery; however, pilot trials revealed deviations from those rubrics. A taxonomy with hierarchically arranged terms that define a lack of standardization of simulation scenario delivery was then created to aid educators and researchers in assessing and describing their ability to reproducibly conduct simulations. Thirty-six types of delivery or documentation deviations were identified from the scenario scripts and study rules. Using a Delphi technique and open card sorting, simulation experts formulated a taxonomy of high-fidelity simulation execution and documentation deviations. The taxonomy was iteratively refined and then tested by 2 investigators not involved with its development. The taxonomy has 2 main classes, simulation center deviation and participant deviation, which are further subdivided into as many as 6 subclasses. Inter-rater classification agreement using the taxonomy was 74% or greater for each of the 7 levels of its hierarchy. Cohen kappa calculations confirmed substantial agreement beyond that expected by chance. All deviations were classified within the taxonomy. This is a useful taxonomy that standardizes terms for simulation delivery and documentation deviations, facilitates quality assurance in scenario delivery, and enables quantification of the impact of deviations upon simulation-based performance assessment.
Veale, David; Miles, Sarah; Bramley, Sally; Muir, Gordon; Hodsoll, John
2015-06-01
To systematically review and create nomograms of flaccid and erect penile size measurements. Key eligibility criteria: measurement of penis size by a health professional using a standard procedure; a minimum of 50 participants per sample. Exclusion criteria: samples with a congenital or acquired penile abnormality, previous surgery, complaint of small penis size or erectile dysfunction. Synthesis methods: calculation of a weighted mean and pooled standard deviation (SD) and simulation of 20,000 observations from the normal distribution to generate nomograms of penis size. Nomograms for flaccid pendulous [n = 10,704, mean (SD) 9.16 (1.57) cm] and stretched length [n = 14,160, mean (SD) 13.24 (1.89) cm], erect length [n = 692, mean (SD) 13.12 (1.66) cm], flaccid circumference [n = 9407, mean (SD) 9.31 (0.90) cm], and erect circumference [n = 381, mean (SD) 11.66 (1.10) cm] were constructed. The most consistent and strongest significant correlation was between flaccid stretched or erect length and height, which ranged from r = 0.2 to 0.6. Limitations: relatively few erect measurements were conducted in a clinical setting, and the greatest variability between studies was seen with flaccid stretched length. Penis size nomograms may be useful in clinical and therapeutic settings to counsel men and for academic research. © 2014 The Authors. BJU International © 2014 BJU International.
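The synthesis pipeline described above (weighted mean, pooled SD, then 20,000 simulated normal observations) can be sketched generically. The three input samples below are hypothetical, and the pooling formula is a standard within-plus-between-sample variance decomposition, not necessarily the authors' exact one:

```python
import numpy as np

def pooled_mean_sd(means, sds, ns):
    """Weighted mean and pooled SD across samples, combining
    within-sample and between-sample variance components."""
    means, sds, ns = map(np.asarray, (means, sds, ns))
    n = ns.sum()
    m = (ns * means).sum() / n
    var = (((ns - 1) * sds**2) + ns * (means - m) ** 2).sum() / (n - 1)
    return m, np.sqrt(var)

# three hypothetical flaccid-length samples (cm) standing in for pooled studies
m, sd = pooled_mean_sd([9.0, 9.3, 9.2], [1.5, 1.6, 1.6], [4000, 3500, 3204])

# simulate 20,000 observations from the fitted normal to build the nomogram
rng = np.random.default_rng(42)
sim = rng.normal(m, sd, 20_000)
p5, p50, p95 = np.percentile(sim, [5, 50, 95])
```

The percentile curves of the simulated distribution are what a nomogram plots against the measured value.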
Sampling errors in the measurement of rain and hail parameters
NASA Technical Reports Server (NTRS)
Gertzman, H. S.; Atlas, D.
1977-01-01
Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain, or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors due to the Poisson-distributed fluctuations of particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution, permitting FSD estimation of any parameter from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
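The core of the derivation is that if the count in each size bin is Poisson, its variance equals its mean, so the variance of X = Σ cDⁿ follows by summing squared per-particle contributions. A numerical sketch for an exponential size spectrum (the distribution parameters and units below are invented for illustration):

```python
import numpy as np

def fsd_integrated(D, dD, N_D, c=1.0, n=6):
    """Fractional standard deviation of X = sum over particles of c*D^n
    when the count in each size bin is Poisson with mean N(D)*dD:
    E[X] = sum(w*lam), Var[X] = sum(w^2*lam) with w = c*D^n,
    so FSD = sqrt(Var[X]) / E[X]."""
    lam = N_D * dD              # expected particle count per bin
    w = c * D ** n              # one particle's contribution to X
    mean = (w * lam).sum()
    var = (w ** 2 * lam).sum()  # Poisson: Var(count) = mean count
    return np.sqrt(var) / mean

# exponential size spectrum (arbitrary units), as in the universal curves
D = np.linspace(0.05, 10.0, 400)
dD = D[1] - D[0]
N_D = 1000.0 * np.exp(-1.5 * D)

fsd_Z = fsd_integrated(D, dD, N_D, n=6)  # reflectivity-like moment, X ~ D^6
fsd_N = fsd_integrated(D, dD, N_D, n=0)  # total particle count, X ~ D^0
```

High-order moments such as reflectivity (n = 6) are dominated by the rare large particles, so their FSD is far larger than that of the total count (n = 0), which reduces exactly to 1/sqrt(expected number sampled).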
Xiong, Qingang; Ramirez, Emilio; Pannala, Sreekanth; ...
2015-10-09
The impact of bubbling bed hydrodynamics on temporal variations in the exit tar yield for biomass fast pyrolysis was investigated using computational simulations of an experimental laboratory-scale reactor. A multi-fluid computational fluid dynamics model was employed to simulate the differential conservation equations in the reactor, and this was combined with a multi-component, multi-step pyrolysis kinetics scheme for biomass to account for chemical reactions. The predicted mean tar yields at the reactor exit appear to match corresponding experimental observations. Parametric studies predicted that increasing the fluidization velocity should improve the mean tar yield but increase its temporal variations. Increases in the mean tar yield coincide with reducing the diameter of sand particles or increasing the initial sand bed height. However, trends in tar yield variability are more complex than the trends in mean yield. The standard deviation in tar yield reaches a maximum with changes in sand particle size. As a result, the standard deviation in tar yield increases with initial bed height in the freely bubbling state, while reaching a maximum in the slugging state.
A design study to develop young children's understanding of multiplication and division
NASA Astrophysics Data System (ADS)
Bicknell, Brenda; Young-Loveridge, Jenny; Nguyen, Nhung
2016-12-01
This design study investigated the use of multiplication and division problems to help 5-year-old children develop an early understanding of multiplication and division. One teacher and her class of 15 5-year-old children were involved in a collaborative partnership with the researchers. The design study was conducted over two 4-week periods in May-June and October-November. The focus in this article is on three key aspects of classroom teaching: instructional tasks, the use of representations, and discourse, including the mathematics register. Results from selected pre- and post-assessment tasks within a diagnostic interview showed that there were improvements in addition and subtraction as well as multiplication and division, even though the teaching had used multiplication and division problems. Students made progress on all four operational domains, with effect sizes ranging from approximately two-thirds of a standard deviation to 2 standard deviations. Most of the improvement in students' number strategies was in moving from 'counting all' to 'counting on' and 'skip counting'. The findings challenge the idea that learning experiences in addition and subtraction should precede those in multiplication and division, as suggested in some curriculum documents.
Improved particle position accuracy from off-axis holograms using a Chebyshev model.
Öhman, Johan; Sjödahl, Mikael
2018-01-01
Side scattered light from micrometer-sized particles is recorded using an off-axis digital holographic setup. From holograms, a volume is reconstructed with information about both intensity and phase. Finding particle positions is non-trivial, since poor axial resolution elongates particles in the reconstruction. To overcome this problem, the reconstructed wavefront around a particle is used to find the axial position. The method is based on the change in the sign of the curvature around the true particle position plane. The wavefront curvature is directly linked to the phase response in the reconstruction. In this paper we propose a new method of estimating the curvature based on a parametric model. The model is based on Chebyshev polynomials and is fit to the phase anomaly and compared to a plane wave in the reconstructed volume. From the model coefficients, it is possible to find particle locations. Simulated results show increased performance in the presence of noise, compared to the use of finite difference methods. The standard deviation is decreased from 3-39 μm to 6-10 μm for varying noise levels. Experimental results show a corresponding improvement where the standard deviation is decreased from 18 μm to 13 μm.
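The key step of the method, fitting a Chebyshev polynomial model to the reconstructed phase and locating the sign change of its curvature, can be illustrated on a toy one-dimensional profile. The cubic phase profile, noise level, and polynomial degree below are all invented stand-ins for a reconstructed wavefront; only the sign-change idea is taken from the abstract:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# toy axial phase profile with a single curvature sign change at the
# (known) particle plane z0; curvature of (z - z0)^3 is 6*(z - z0)
z0 = 0.3
z = np.linspace(-1.0, 1.0, 201)
rng = np.random.default_rng(2)
phase = (z - z0) ** 3 + rng.normal(0.0, 0.01, z.size)

# fit a Chebyshev model, then differentiate the fitted model twice
coefs = C.chebfit(z, phase, deg=5)
curv_coefs = C.chebder(coefs, m=2)   # Chebyshev model of the curvature

# the zero crossing of the fitted curvature estimates the axial position
cv = C.chebval(z, curv_coefs)
idx = np.flatnonzero(np.diff(np.sign(cv)) != 0)
z_est = z[idx[0]] if idx.size else np.nan
```

Differentiating the fitted model analytically, rather than the noisy samples by finite differences, is what suppresses the noise sensitivity the abstract reports.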
On Teaching about the Coefficient of Variation in Introductory Statistics Courses
ERIC Educational Resources Information Center
Trafimow, David
2014-01-01
The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
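The relationship is simply CV = SD / mean, which makes the standard deviation interpretable as a fraction of the mean and hence comparable across measurement scales. A small illustration (the height data are invented):

```python
import numpy as np

def coefficient_of_variation(x):
    """CV = sample standard deviation / mean: the SD expressed as a
    fraction of the mean, so spreads are comparable across scales."""
    x = np.asarray(x, float)
    return x.std(ddof=1) / x.mean()

heights_cm = [160, 170, 180, 175, 165]
heights_m = [h / 100 for h in heights_cm]

# the CV is scale-free: centimetres and metres give the same value
cv_cm = coefficient_of_variation(heights_cm)
cv_m = coefficient_of_variation(heights_m)
```

Here the SD (about 7.9 cm) means little in isolation, but a CV of roughly 4.7% is immediately interpretable.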
NASA Astrophysics Data System (ADS)
Winter, H.; Christopher-Allison, E.; Brown, A. L.; Goforth, A. M.
2018-04-01
Herein, we report an aerobic synthesis method to produce bismuth nanoparticles (Bi NPs) with average diameters in the range 40-80 nm using commercially available bismuth triiodide (BiI3) as the starting material; the method uses only readily available chemicals and conventional laboratory equipment. Furthermore, size data from replicates of the synthesis under standard reaction conditions indicate that this method is highly reproducible in achieving Bi NP populations with low standard deviations in the mean diameters. We also investigated the mechanism of the reaction, which we determined results from the reduction of a soluble alkylammonium iodobismuthate precursor species formed in situ. Under appropriate concentration conditions of iodobismuthate anion, we demonstrate that burst nucleation of Bi NPs results from reduction of Bi3+ by the coordinated, redox non-innocent iodide ligands when a threshold temperature is exceeded. Finally, we demonstrate phase transfer and silica coating of the Bi NPs, which results in stable aqueous colloids with retention of size, morphology, and colloidal stability. The resultant, high atomic number, hydrophilic Bi NPs prepared using this synthesis method have potential for application in emerging x-ray contrast and x-ray therapeutic applications.
Zamba, Gideon K. D.; Artes, Paul H.
2018-01-01
Purpose It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). Methods In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Results Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Conclusions Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher. PMID:29356822
Szenczi-Cseh, J; Horváth, Zs; Ambrus, Á
2017-12-01
We tested the applicability of the EPIC-SOFT food picture series in the context of a Hungarian food consumption survey gathering data for exposure assessment, and investigated errors in food portion estimation resulting from visual perception and from conceptualisation-memory. Sixty-two participants in three age groups (10 to <74 years) were presented with three different portion sizes of five foods. The results were considered acceptable if the relative difference between average estimated and actual weight obtained through the perception method was ≤25%, and the relative standard deviation of the individual weight estimates was <30% after compensating for the effect of potential outliers with winsorisation. Picture series for all five food items were rated acceptable. Small portion sizes tended to be overestimated and large ones tended to be underestimated. Portions of boiled potato were consistently overestimated and portions of creamed spinach consistently underestimated. Recalling the portion sizes resulted in overestimation with larger differences (up to 60.7%).
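The acceptance rule above (relative difference of the mean ≤ 25% and relative SD < 30% after winsorisation) can be sketched as follows. The estimate data, the symmetric one-value winsorisation, and the helper names are all illustrative assumptions, not the study's code:

```python
import numpy as np

def winsorize(x, k=1):
    """Simple symmetric winsorisation: replace the k smallest and k
    largest values with their nearest remaining neighbours.
    (Returns a sorted copy, which is fine for mean/SD statistics.)"""
    x = np.sort(np.asarray(x, float))
    x[:k] = x[k]
    x[-k:] = x[-k - 1]
    return x

def portion_ok(estimates, actual, k=1):
    """Acceptance rule from the abstract: relative difference of the
    mean estimate <= 25% and relative SD < 30%, after winsorisation."""
    w = winsorize(estimates, k)
    rel_diff = abs(w.mean() - actual) / actual
    rel_sd = w.std(ddof=1) / w.mean()
    return rel_diff <= 0.25 and rel_sd < 0.30

# hypothetical weight estimates (g) for a 200 g portion; one outlier at 400 g
ok = portion_ok([180, 200, 210, 220, 190, 205, 400], actual=200)
```

Winsorisation tames the single 400 g outlier, so the picture passes both criteria; a sample centred far from the true weight would fail the 25% rule regardless.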
Maggin, Daniel M; Swaminathan, Hariharan; Rogers, Helen J; O'Keeffe, Breda V; Sugai, George; Horner, Robert H
2011-06-01
A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of treatment effect from baseline to treatment phases in standard deviation units. In this paper, the method is applied to two published examples using common single case designs (i.e., withdrawal and multiple-baseline). The results from these studies are described, and the method is compared to ten desirable criteria for single-case effect sizes. Based on the results of this application, we conclude with observations about the use of GLS as a support to visual analysis, provide recommendations for future research, and describe implications for practice. Copyright © 2011 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
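The core idea, whitening the AR(1)-correlated series, estimating the baseline-to-treatment level shift by least squares on the transformed data, and scaling it by the residual SD, can be sketched as below. This is a generic GLS sketch with a simulated withdrawal-style dataset and an assumed known lag-1 autocorrelation, not the authors' exact estimator:

```python
import numpy as np

def gls_effect_size(y, phase, rho):
    """GLS estimate of a baseline->treatment level shift under AR(1)
    errors, expressed in residual-SD units. rho is the assumed lag-1
    autocorrelation; in practice it would be estimated from the data."""
    n = len(y)
    X = np.column_stack([np.ones(n), phase])  # intercept + phase dummy
    # AR(1) whitening transform: y1* = sqrt(1-rho^2)*y1, yt* = yt - rho*y(t-1)
    P = np.eye(n)
    P[0, 0] = np.sqrt(1.0 - rho**2)
    for i in range(1, n):
        P[i, i - 1] = -rho
    yw, Xw = P @ y, P @ X
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    resid = yw - Xw @ beta
    sigma = np.sqrt(resid @ resid / (n - X.shape[1]))
    return beta[1] / sigma  # treatment shift in SD units

# simulated single-case series: AR(1) noise plus a +2 SD shift at session 30
rng = np.random.default_rng(3)
n = 60
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.4 * e[t - 1] + rng.normal(0.0, 1.0)
phase = (np.arange(n) >= 30).astype(float)
y = 10.0 + 2.0 * phase + e
d = gls_effect_size(y, phase, rho=0.4)
```

With the autocorrelation modelled, the recovered effect is close to the simulated shift of 2 SD; ignoring the autocorrelation would understate the standard error of that estimate.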
Sample size calculation in economic evaluations.
Al, M J; van Hout, B A; Michel, B C; Rutten, F F
1998-06-01
A simulation method is presented for sample size calculation in economic evaluations. As input the method requires: the expected difference and variance of costs and effects, their correlation, the significance level (alpha) and the power of the testing method and the maximum acceptable ratio of incremental effectiveness to incremental costs. The method is illustrated with data from two trials. The first compares primary coronary angioplasty with streptokinase in the treatment of acute myocardial infarction, in the second trial, lansoprazole is compared with omeprazole in the treatment of reflux oesophagitis. These case studies show how the various parameters influence the sample size. Given the large number of parameters that have to be specified in advance, the lack of knowledge about costs and their standard deviation, and the difficulty of specifying the maximum acceptable ratio of incremental effectiveness to incremental costs, the conclusion of the study is that from a technical point of view it is possible to perform a sample size calculation for an economic evaluation, but one should wonder how useful it is.
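A simulation of this kind can be sketched via the incremental net benefit, NB = λ·ΔE − ΔC, where λ is the maximum acceptable ratio (willingness to pay). The inputs below (effect and cost differences, SDs, correlation, λ) are invented for illustration, and the one-sided z-test on per-patient net benefits is a simplification of a full trial-level analysis:

```python
import numpy as np

Z_CRIT = 1.645  # one-sided alpha = 0.05

def power_net_benefit(n, dE, dC, sdE, sdC, rho, lam, sims=2000, seed=0):
    """Monte Carlo power to show incremental net benefit
    NB = lam*dE - dC > 0, for n patients (paired differences drawn
    from a bivariate normal with correlation rho)."""
    rng = np.random.default_rng(seed)
    cov = [[sdE**2, rho * sdE * sdC], [rho * sdE * sdC, sdC**2]]
    hits = 0
    for _ in range(sims):
        e, c = rng.multivariate_normal([dE, dC], cov, size=n).T
        nb = lam * e - c  # per-patient net benefit
        z = nb.mean() / (nb.std(ddof=1) / np.sqrt(n))
        hits += z > Z_CRIT
    return hits / sims

# hypothetical inputs: +0.1 effect (SD 0.3), +500 cost (SD 2000),
# correlation 0.1, willingness to pay lam = 10,000 per unit of effect
p_small = power_net_benefit(50, 0.1, 500, 0.3, 2000, 0.1, 10_000)
p_large = power_net_benefit(400, 0.1, 500, 0.3, 2000, 0.1, 10_000)
```

Sweeping n until the simulated power reaches the target (e.g. 0.80) yields the required sample size, and shows directly how each of the specified parameters moves it.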
NASA Technical Reports Server (NTRS)
Woodcock, C. E.; Strahler, A. H.
1984-01-01
Digital images derived by scanning air photos and through acquiring aircraft and spacecraft scanner data were studied. Results show that spatial structure in scenes can be measured and logically related to texture and image variance. The imagery included a South Dakota forest; a housing development in Canoga Park, California; an agricultural area in Mississippi, Louisiana, Kentucky, and Tennessee; the city of Washington, D.C.; and the Klamath National Forest. Local variance, measured as the average standard deviation of brightness values within a three-by-three moving window, reaches a peak at a resolution cell size about two-thirds to three-fourths the size of the objects within the scene. If objects are smaller than the resolution cell size of the image, this peak does not occur and local variance simply decreases with increasing resolution cell size as spatial averaging occurs. Variograms can also reveal the size, shape, and density of objects in the scene.
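The local-variance measure, the mean standard deviation inside a three-by-three moving window, and its peak at intermediate resolution can be reproduced on a synthetic scene. The scene (bright 8-pixel objects on a dark background) and the block-averaging used to simulate coarser resolution cells are illustrative assumptions:

```python
import numpy as np

def local_variance(img, w=3):
    """Mean standard deviation of brightness inside a w-by-w moving
    window: the 'local variance' texture measure of the abstract."""
    h, ww = img.shape
    sds = [img[i:i + w, j:j + w].std()
           for i in range(h - w + 1) for j in range(ww - w + 1)]
    return float(np.mean(sds))

def coarsen(img, f):
    """Simulate an f-times-coarser resolution cell by block-averaging."""
    h = (img.shape[0] // f) * f
    w = (img.shape[1] // f) * f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

# synthetic scene: bright 8x8 'objects' on a dark, slightly noisy background
rng = np.random.default_rng(0)
scene = rng.normal(50.0, 2.0, (64, 64))
for i in range(0, 64, 16):
    for j in range(0, 64, 16):
        scene[i:i + 8, j:j + 8] += 100.0

lv = {f: local_variance(coarsen(scene, f)) for f in (1, 2, 4, 8, 16)}
```

At full resolution most windows fall inside uniform regions (low local variance); once cells grow past the object size everything averages out; the peak sits in between, which is the behavior the abstract uses to infer object size.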
How accurate is the Pearson r-from-Z approximation? A Monte Carlo simulation study.
Hittner, James B; May, Kim
2012-01-01
The Pearson r-from-Z approximation estimates the sample correlation (as an effect size measure) from the ratio of two quantities: the standard normal deviate equivalent (Z-score) corresponding to a one-tailed p-value divided by the square root of the total (pooled) sample size. The formula has utility in meta-analytic work when reports of research contain minimal statistical information. Although simple to implement, the accuracy of the Pearson r-from-Z approximation has not been empirically evaluated. To address this omission, we performed a series of Monte Carlo simulations. Results indicated that in some cases the formula did accurately estimate the sample correlation. However, when sample size was very small (N = 10) and effect sizes were small to small-moderate (ds of 0.1 and 0.3), the Pearson r-from-Z approximation was very inaccurate. Detailed figures that provide guidance as to when the Pearson r-from-Z formula will likely yield valid inferences are presented.
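The approximation itself is a one-liner, r ≈ Z/√N, with Z the standard normal deviate for the reported one-tailed p-value. A minimal sketch using only the standard library (the example p-value and N are invented):

```python
import math
from statistics import NormalDist

def r_from_z(p_one_tailed, n_total):
    """Pearson r-from-Z approximation: r ~= Z / sqrt(N), where Z is the
    standard normal deviate for the one-tailed p-value and N is the
    total (pooled) sample size."""
    z = NormalDist().inv_cdf(1.0 - p_one_tailed)
    return z / math.sqrt(n_total)

# e.g. a report states only "one-tailed p = .025, N = 100"
r_est = r_from_z(0.025, 100)  # Z is about 1.96, so r is about 0.196
```

As the simulations in the abstract show, such an estimate should be treated cautiously when N is very small and the true effect is small.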
Buma, Brian; Costanza, Jennifer K; Riitters, Kurt
2017-11-21
The scale of investigation for disturbance-influenced processes plays a critical role in theoretical assumptions about stability, variance, and equilibrium, as well as conservation reserve and long-term monitoring program design. Critical consideration of scale is required for robust planning designs, especially when anticipating future disturbances whose exact locations are unknown. This research quantified disturbance proportion and pattern (as contagion) at multiple scales across North America. This pattern of scale-associated variability can guide selection of study and management extents, for example, to minimize variance (measured as standard deviation) between any landscapes within an ecoregion. We identified the proportion and pattern of forest disturbance (30 m grain size) across multiple landscape extents up to 180 km². We explored the variance in proportion of disturbed area and the pattern of that disturbance between landscapes (within an ecoregion) as a function of the landscape extent. In many ecoregions, variance between landscapes within an ecoregion was minimal at broad landscape extents (low standard deviation). Gap-dominated regions showed the least variance, while fire-dominated regions showed the largest. Intensively managed ecoregions displayed unique patterns. A majority of the ecoregions showed low variance between landscapes at some scale, indicating that an appropriate extent for incorporating natural regimes and unknown future disturbances was identified. The quantification of the scales of disturbance at the ecoregion level provides guidance for individuals interested in anticipating future disturbances which will occur in unknown spatial locations. Information on the extents required to incorporate disturbance patterns into planning is crucial for that process.
Robust characterization of small grating boxes using rotating stage Mueller matrix polarimeter
NASA Astrophysics Data System (ADS)
Foldyna, M.; De Martino, A.; Licitra, C.; Foucher, J.
2010-03-01
In this paper we demonstrate the robustness of Mueller matrix polarimetry used in a multiple-azimuth configuration. We first demonstrate the efficiency of the method for the characterization of small-pitch gratings filling 250 μm wide square boxes. We used a Mueller matrix polarimeter installed directly in the clean room; its motorized rotating stage allows access to arbitrary conical grating configurations. The projected beam spot size could be reduced to 60 × 25 μm, but for the measurements reported here this size was 100 × 100 μm. The optimal values of the parameters of a trapezoidal profile model, acquired for each azimuthal angle separately using a non-linear least-squares minimization algorithm, are shown for a typical grating. Further statistical analysis of the azimuth-dependent dimensional parameters provided realistic estimates of the confidence interval, giving direct information about the accuracy of the results. The mean values and the standard deviations were calculated for 21 different grating boxes comprising in total 399 measured spectra and fits. The results for all boxes are summarized in a table which compares the optical method to 3D-AFM. The essential conclusion of our work is that the 3D-AFM values always fall into the confidence intervals provided by the optical method, which means that we have successfully estimated the accuracy of our results without direct comparison with another, non-optical, method. Moreover, this approach may provide a way to improve the accuracy of grating profile modeling by minimizing the standard deviations evaluated from multiple-azimuth results.
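The statistical step, turning azimuth-by-azimuth fit results into a mean and a confidence interval, can be sketched as follows (illustrative only; the coverage factor t ≈ 2 is our assumption, not the paper's exact procedure):

```python
import math
from statistics import mean, stdev

def azimuth_confidence(estimates, t=2.0):
    """Mean and a rough ~95% confidence interval for a dimensional
    parameter fitted independently at several azimuthal angles."""
    m, s = mean(estimates), stdev(estimates)
    half = t * s / math.sqrt(len(estimates))  # half-width of the interval
    return m, (m - half, m + half)
```

A reference value (e.g. from 3D-AFM) falling inside this interval is the consistency check described in the abstract.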
De Reu, Paul; Smits, Luc J; Oosterbaan, Herman P; Snijders, Rosalinde J; De Reu-Cuppens, Marga J; Nijhuis, Jan G
2007-01-01
To determine fetal growth in low risk pregnancies at the beginning of the third trimester and to assess the relative importance of fetal gender and maternal parity. Dutch primary care midwifery practice. Retrospective cohort study on 3641 singleton pregnancies seen at a primary care midwifery center in the Netherlands. Parameters used for analysis were fetal abdominal circumference (AC), fetal head circumference (HC), gestational age, fetal gender and maternal parity. Regression analysis was applied to describe variation in AC and HC with gestational age. Means and standard deviations in the present population were compared with commonly used reference charts. Multiple regression analysis was applied to examine whether gender and parity should be taken into account. The fetal AC and HC increased significantly between the 27th and the 33rd week of pregnancy (AC r2=0.3652, P<0.0001; HC r2=0.3301, P<0.0001). Compared to some curves, our means and standard deviations were significantly smaller (at 30+0 weeks AC mean=258+/-13 mm; HC mean=281+/-14 mm), but corresponded well with other curves. Fetal gender was a significant determinant for both AC (P<0.0001) and HC (P<0.0001). Parity contributed significantly to AC only but the difference was small (beta=0.00464). At the beginning of the third trimester, fetal size is associated with fetal gender and, to a lesser extent, with parity. Some fetal growth charts (e.g., Chitty et al.) are more suitable for the low-risk population in the Netherlands than others.
NASA Astrophysics Data System (ADS)
Lee, Gwo-Bin; Chen, Shu-Hui; Huang, Guan-Ruey; Lin, Yen-Heng; Sung, Wang-Chou
2000-08-01
Design and fabrication of microfluidic devices on polymethylmethacrylate (PMMA) substrates using novel microfabrication methods are described. The image of the microfluidic devices is transferred from quartz master templates bearing an inverse image of the devices to plastic plates by hot embossing. The microchannels on the master templates are formed by the combination of a metal etch mask and wet chemical etching. The micromachined quartz templates can be used repeatedly to fabricate cheap, disposable plastic devices. The reproducibility of the hot embossing method is evaluated using 10 channels on different plastic plates. The relative standard deviation of the plastic channel profile from that on the quartz templates is less than 1%. In this study, the PMMA chips have been demonstrated as a micro capillary electrophoresis (μ-CE) device for DNA separation and detection. The capability of the fabricated chip for electrophoretic injection and separation is characterized via the analysis of ΦX174 DNA fragments. Results indicate that all of the 11 DNA fragments of the size marker could be identified in less than 3 minutes with relative standard deviations less than 0.4% and 8% for migration time and peak area, respectively. Moreover, with the use of a near-IR dye, fluorescence signals of the higher molecular weight fragments (> 603 bp in length) could be detected at total DNA concentrations as low as 0.1 μg/mL. In addition to ΦX174 DNA fragments, DNA sizing of a hepatitis C viral (HCV) amplicon is also achieved using microchip electrophoresis fabricated on a PMMA substrate.
Validation of PCR methods for quantitation of genetically modified plants in food.
Hübner, P; Waiblinger, H U; Pietsch, K; Brodmann, P
2001-01-01
For enforcement of the recently introduced labeling threshold for genetically modified organisms (GMOs) in food ingredients, quantitative detection methods such as quantitative competitive (QC-PCR) and real-time PCR are applied by official food control laboratories. The experiences of 3 European food control laboratories in validating such methods were compared to describe realistic performance characteristics of quantitative PCR detection methods. The limit of quantitation (LOQ) of GMO-specific, real-time PCR was experimentally determined to reach 30-50 target molecules, which is close to theoretical prediction. Starting PCR with 200 ng genomic plant DNA, the LOQ depends primarily on the genome size of the target plant and ranges from 0.02% for rice to 0.7% for wheat. The precision of quantitative PCR detection methods, expressed as relative standard deviation (RSD), varied from 10 to 30%. Using Bt176 corn containing test samples and applying Bt176 specific QC-PCR, mean values deviated from true values by -7 to 18%, with an average of 2+/-10%. Ruggedness of real-time PCR detection methods was assessed in an interlaboratory study analyzing commercial, homogeneous food samples. Roundup Ready soybean DNA contents were determined in the range of 0.3 to 36%, relative to soybean DNA, with RSDs of about 25%. Taking the precision of quantitative PCR detection methods into account, suitable sample plans and sample sizes for GMO analysis are suggested. Because quantitative GMO detection methods measure GMO contents of samples in relation to reference material (calibrants), high priority must be given to international agreements and standardization on certified reference materials.
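The genome-size dependence of the LOQ follows from simple copy-number arithmetic; a sketch (the 1 pg genome mass and the 40-copy LOQ used below are illustrative assumptions, so the numbers will not exactly reproduce the paper's 0.02-0.7% range):

```python
def loq_percent(genome_pg: float, dna_input_ng: float = 200.0,
                loq_copies: int = 40) -> float:
    """Rough LOQ (%) for a single-copy GMO target: the fraction of the
    input DNA represented by `loq_copies` haploid genomes.
    `genome_pg` is the 1C genome mass in picograms (illustrative)."""
    genomes_in_input = dna_input_ng * 1000.0 / genome_pg  # ng -> pg
    return 100.0 * loq_copies / genomes_in_input

# A hypothetical 1 pg genome: 200 ng of DNA contains 200,000 genome
# copies, so a 40-copy LOQ corresponds to 0.02%
print(loq_percent(1.0))
```

Larger genomes mean fewer genome copies per 200 ng, hence a higher percentage LOQ, which is why wheat fares so much worse than rice.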
A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY
Large-scale laboratory-and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT=[experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
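For context, the HORRAT denominator is conventionally the among-laboratory RSD predicted by the Horwitz equation; a sketch under that assumption:

```python
import math

def horwitz_prsd(concentration: float) -> float:
    """Predicted among-laboratory RSD (%) from the Horwitz equation.
    `concentration` is a dimensionless mass fraction (e.g. 1e-6 = 1 ppm)."""
    return 2 ** (1 - 0.5 * math.log10(concentration))

def horrat(found_rsd_percent: float, concentration: float) -> float:
    # HORRAT = experimentally found among-lab RSD / predicted RSD
    return found_rsd_percent / horwitz_prsd(concentration)

# At 1 ppm the Horwitz equation predicts an RSD of about 16%
print(round(horwitz_prsd(1e-6), 1))
```

HORRAT values near 1 indicate precision consistent with historical interlaboratory experience; values well above 2 flag a problem with the method or the study.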
Morikawa, Kei; Kurimoto, Noriaki; Inoue, Takeo; Mineshita, Masamichi; Miyazawa, Teruomi
2015-01-01
Endobronchial ultrasonography using a guide sheath (EBUS-GS) is an increasingly common bronchoscopic technique, but currently, no methods have been established to quantitatively evaluate EBUS images of peripheral pulmonary lesions. The purpose of this study was to evaluate whether histogram data collected from EBUS-GS images can contribute to the diagnosis of lung cancer. Histogram-based analyses focusing on the brightness of EBUS images were retrospectively conducted: 60 patients (38 lung cancer; 22 inflammatory diseases) with clear EBUS images were included. For each patient, a 400-pixel region of interest was selected, typically located at a 3- to 5-mm radius from the probe, from recorded EBUS images during bronchoscopy. Histogram height, width, height/width ratio, standard deviation, kurtosis and skewness were investigated as diagnostic indicators. Median histogram height, width, height/width ratio and standard deviation were significantly different between lung cancer and benign lesions (all p < 0.01). With a cutoff value for standard deviation of 10.5, lung cancer could be diagnosed with an accuracy of 81.7%. The other characteristics investigated were inferior to histogram standard deviation. Histogram standard deviation appears to be the most useful characteristic for diagnosing lung cancer using EBUS images. © 2015 S. Karger AG, Basel.
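The histogram characteristics named above can be computed from ROI brightness values roughly as follows (our own plain-Python reading, taking "height" as the modal count and "width" as the number of occupied brightness levels; the authors' exact definitions may differ):

```python
import math

def histogram_descriptors(pixels):
    """Descriptors of a brightness histogram for an ROI: modal height,
    occupied width, standard deviation, skewness and excess kurtosis."""
    n = len(pixels)
    counts = {}
    for v in pixels:
        counts[v] = counts.get(v, 0) + 1
    mean = sum(pixels) / n
    m2 = sum((v - mean) ** 2 for v in pixels) / n  # central moments
    m3 = sum((v - mean) ** 3 for v in pixels) / n
    m4 = sum((v - mean) ** 4 for v in pixels) / n
    sd = math.sqrt(m2)
    return {
        "height": max(counts.values()),   # count at the modal brightness
        "width": len(counts),             # number of distinct levels
        "sd": sd,
        "skewness": m3 / sd ** 3,
        "kurtosis": m4 / sd ** 4 - 3,     # excess kurtosis
    }
```

Applying a cutoff to `sd` (10.5 in the study) is then a one-line classifier.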
Role of the standard deviation in the estimation of benchmark doses with continuous data.
Gaylor, David W; Slikker, William
2004-12-01
For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
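The "one-third" rule of thumb follows from variances adding in quadrature; a small check (assuming independent normal components):

```python
import math

def observed_sd(s_a: float, s_m: float) -> float:
    """SD among single measurements per animal: animal-to-animal and
    measurement-error variation add in quadrature when independent."""
    return math.sqrt(s_a ** 2 + s_m ** 2)

# With s_m = s_a / 3 the overall SD overstates s_a by only ~5%,
# which is why the benchmark-dose bias stays small in that regime.
print(round(observed_sd(1.0, 1.0 / 3.0), 3))
```

As s_m grows relative to s_a the inflation, and hence the overestimation of the benchmark dose, increases rapidly.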
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviation of lead-l ARIMA and TFN forecast errors was generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days.
The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
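The composite idea, blending forecasts from the start of a gap with backcasts from its end, can be sketched as follows (linear weights are our simplification; the study derived its weights from the error variances of the TFN and ARIMA models):

```python
def composite_estimates(forecasts, backcasts):
    """Weighted average across a gap of length l: the forecast weight
    declines linearly with lead, so forecasts dominate near the start
    of the gap and backcasts near its end."""
    l = len(forecasts)
    out = []
    for i, (f, b) in enumerate(zip(forecasts, backcasts)):
        w = (l - i) / (l + 1)  # forecast weight, highest at the gap start
        out.append(w * f + (1 - w) * b)
    return out
```

This weighting also yields the gradual transition between estimated and measured flows noted above, since each end of the gap is anchored to the adjacent measured record.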
Technical Note: Spot characteristic stability for proton pencil beam scanning.
Chen, Chin-Cheng; Chang, Chang; Moyers, Michael F; Gao, Mingcheng; Mah, Dennis
2016-02-01
The spot characteristics for proton pencil beam scanning (PBS) were measured and analyzed over a 16 month period, which included one major site configuration update and six cyclotron interventions. The results provide a reference to establish the quality assurance (QA) frequency and tolerance for proton pencil beam scanning. A simple treatment plan was generated to produce an asymmetric 9-spot pattern distributed throughout a field of 16 × 18 cm for each of 18 proton energies (100.0-226.0 MeV). The delivered fluence distribution in air was measured using a phosphor screen based CCD camera at three planes perpendicular to the beam line axis (x-ray imaging isocenter and up/down stream 15.0 cm). The measured fluence distributions for each energy were analyzed using in-house programs which calculated the spot sizes and positional deviations of the Gaussian shaped spots. Compared to the spot characteristic data installed into the treatment planning system, the 16-month averaged deviations of the measured spot sizes at the isocenter plane were 2.30% and 1.38% in the IEC gantry x and y directions, respectively. The maximum deviation was 12.87% while the minimum deviation was 0.003%, both at the upstream plane. After the collinearity of the proton and x-ray imaging system isocenters was optimized, the positional deviations of the spots were all within 1.5 mm for all three planes. During the site configuration update, spot positions were found to deviate by 6 mm until the tuning parameters file was properly restored. For this beam delivery system, it is recommended to perform a spot size and position check at least monthly and any time after a database update or cyclotron intervention occurs. A spot size deviation tolerance of <15% can be easily met with this delivery system. Deviations of spot positions were <2 mm at any plane up/down stream 15 cm from the isocenter.
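The pass/fail logic implied by the recommended tolerances (<15% spot-size deviation, <2 mm position deviation) might be sketched as follows (function name and signature are illustrative, not the site's actual QA software):

```python
import math

def spot_qa(measured_sigma: float, planned_sigma: float,
            measured_pos: tuple, planned_pos: tuple,
            size_tol: float = 0.15, pos_tol: float = 2.0) -> bool:
    """Check one PBS spot against the tolerances quoted above:
    relative spot-size deviation and in-plane position deviation (mm)."""
    size_dev = abs(measured_sigma - planned_sigma) / planned_sigma
    pos_dev = math.dist(measured_pos, planned_pos)  # mm, in the plane
    return size_dev <= size_tol and pos_dev <= pos_tol

# 10% size deviation and a 0.7 mm offset: within tolerance
print(spot_qa(3.3, 3.0, (0.5, 0.5), (0.0, 0.0)))
```

Running such a check per spot, per energy, per plane is the monthly routine the conclusion recommends.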
NASA Astrophysics Data System (ADS)
Saxena, D.; Grossman, E. L.; Maupin, C. R.; Roark, B.; O'Dea, A.
2016-12-01
Nitrogen isotopes (15N/14N) have been extensively used to reconstruct trophic structure, anthropogenic nutrient loading, ecosystem dynamics, and nutrient cycling in terrestrial and marine systems. Extending similar efforts to deep time is critical to investigate sources and fluxes of nutrients in past oceans, and explore causes of biotic turnover. To test the fidelity of N-isotope analyses of biogenic carbonate samples by simple bulk combustion, we performed two sets of experiments involving varying proportions of reagent CaCO3 (0, 2, 35 mg) and three organic standards (3.7-47.2 µg) viz. USGS40 (δ15NAir = -4.52‰), USGS41 (δ15NAir = +47.57‰), and in-house standard Rice (δ15NAir = +1.18‰). At high N contents (15-47.2 µg), δ15N values for CaCO3-amended samples are consistently either 0.5‰ higher (USGS40, -4.5‰), equivalent (Rice, 1.2‰), or 0.5‰ lower (USGS41, 47.6‰) relative to unamended samples. The difference thus depends on the δ15N of the standard relative to air. With decreasing N content (10-15 µg), δ15N values for CaCO3-amended samples diverge from expected values, with 35 mg CaCO3 samples diverging at the highest N content and 0 mg CaCO3 samples at the lowest (10 µg). The latter matches the lower sample-size limit for accurate measurement under the experimental conditions. At very low sample size (3.7-10 µg), all unamended standards show decreasing δ15N with decreasing N content, presumably because of non-linearity in instrument electronics and ion source behavior. The δ15N values of amended USGS41 also decrease with decreasing N content, but those of amended USGS40 and Rice samples increase, with samples containing more CaCO3 (35 versus 2 mg) showing greater deviation from expected values. Potential causes for deviation in δ15N values with CaCO3 amendments include N2 contamination from tin capsules and reagent CaCO3, and incomplete combustion due to energy consumption during CaCO3 decomposition. 
While tin capsules and reagent CaCO3 provide some N background (0.07 Vs and 0.23 Vs [40 mg CaCO3] respectively), mass balance considerations suggest incomplete combustion likely caused the deviation from true values. Nevertheless, for higher N content samples reliable δ15N measurements can be made with simple bulk combustion of carbonate.
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
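As one concrete piece of this machinery, the classical cost-minimizing allocation (for ordinary means; the paper's formulas for Yuen's trimmed-means test are more elaborate) sets n1/n2 proportional to the SD ratio and inversely proportional to the square root of the unit-cost ratio:

```python
import math

def optimal_allocation_ratio(sd_ratio: float, cost_ratio: float) -> float:
    """n1/n2 that minimizes total cost for a fixed variance of the mean
    difference: (s1/s2) * sqrt(c2/c1). `cost_ratio` is c1/c2.
    A textbook result, shown here for illustration only."""
    return sd_ratio * math.sqrt(1.0 / cost_ratio)

# Group 1 twice as variable but four times as costly per subject:
# the two effects cancel and equal group sizes are optimal.
print(optimal_allocation_ratio(2.0, 4.0))
```

The same trade-off, manipulating the allocation ratio to favor either total cost or total sample size, underlies the formulas developed in the study.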
Skerl, K; Vinnicombe, S; Giannotti, E; Thomson, K; Evans, A
2015-12-01
To evaluate the influence of the region of interest (ROI) size and lesion diameter on the diagnostic performance of 2D shear wave elastography (SWE) of solid breast lesions. A study group of 206 consecutive patients (age range 21-92 years) with 210 solid breast lesions (70 benign, 140 malignant) who underwent core biopsy or surgical excision was evaluated. Lesions were divided into small (diameter <15 mm, n=112) and large lesions (diameter ≥15 mm, n=98). An ROI with a diameter of 1, 2, and 3 mm was positioned over the stiffest part of the lesion. The maximum elasticity (Emax), mean elasticity (Emean) and standard deviation (SD) for each ROI size were compared to the pathological outcome. Statistical analysis was undertaken using the chi-square test and receiver operating characteristic (ROC) analysis. The ROI size used has a significant impact on the performance of Emean and SD but not on Emax. Youden's indices show a correlation with the ROI size and lesion size: generally, the benign/malignant threshold is lower with increasing ROI size but higher with increasing lesion size. No single SWE parameter has superior performance. Lesion size and ROI size influence diagnostic performance. Copyright © 2015. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Barnhart, Paul J.; Greber, Isaac
1997-01-01
A series of experiments were performed to investigate the effects of Mach number variation on the characteristics of the unsteady shock wave/turbulent boundary layer interaction generated by a blunt fin. A single blunt-fin hemicylindrical leading-edge diameter was used in all of the experiments, which covered the Mach number range from 2.0 to 5.0. The measurements in this investigation included surface flow visualization and static and dynamic pressure measurements, both on and off the centerline of the blunt fin axis. Surface flow visualization and static pressure measurements showed that the spatial extent of the shock wave/turbulent boundary layer interaction increased with increasing Mach number. The maximum static pressure, normalized by the incoming static pressure, measured at the peak location in the separated flow region ahead of the blunt fin was found to increase with increasing Mach number. The mean and standard deviations of the fluctuating pressure signals from the dynamic pressure transducers were found to collapse to self-similar distributions as a function of the distance perpendicular to the separation line. The standard deviation of the pressure signals showed an initially peaked distribution, with the maximum standard deviation point corresponding to the location of the separation line at Mach numbers 3.0 to 5.0. At Mach 2.0 the maximum standard deviation point was found to occur significantly upstream of the separation line. The intermittency distributions of the separation shock wave motion were found to be self-similar profiles for all Mach numbers. The intermittent region length was found to increase with Mach number and decrease with interaction sweepback angle. For Mach numbers 3.0 to 5.0 the separation line was found to correspond to high intermittencies or, equivalently, to the downstream locus of the separation shock wave motion. 
The Mach 2.0 tests, however, showed that the intermittent region occurs significantly upstream of the separation line. Power spectral densities measured in the intermittent regions were found to have self-similar frequency distributions when compared as functions of a Strouhal number for all Mach numbers and interaction sweepback angles. The maximum zero-crossing frequencies were found to correspond with the peak frequencies in the power spectra measured in the intermittent region.
Characterization of difference of Gaussian filters in the detection of mammographic regions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Catarious, David M. Jr.; Baydush, Alan H.; Floyd, Carey E. Jr.
2006-11-15
In this article, we present a characterization of the effect of difference of Gaussians (DoG) filters in the detection of mammographic regions. DoG filters have been used previously in mammographic mass computer-aided detection (CAD) systems. As DoG filters are constructed from the subtraction of two bivariate Gaussian distributions, they require the specification of three parameters: the size of the filter template and the standard deviations of the constituent Gaussians. The influence of these three parameters in the detection of mammographic masses has not been characterized. In this work, we aim to determine how the parameters affect (1) the physical descriptors of the detected regions, (2) the true and false positive rates, and (3) the classification performance of the individual descriptors. To this end, 30 DoG filters are created from the combination of three template sizes and four values for each of the Gaussians' standard deviations. The filters are used to detect regions in a study database of 181 craniocaudal-view mammograms extracted from the Digital Database for Screening Mammography. To describe the physical characteristics of the identified regions, morphological and textural features are extracted from each of the detected regions. Differences in the mean values of the features caused by altering the DoG parameters are examined through statistical and empirical comparisons. The parameters' effects on the true and false positive rate are determined by examining the mean malignant sensitivities and false positives per image (FPpI). Finally, the effect on the classification performance is described by examining the variation in FPpI at the point where 81% of the malignant masses in the study database are detected.
Overall, the findings of the study indicate that increasing the standard deviations of the Gaussians used to construct a DoG filter results in a dramatic decrease in the number of regions identified, at the expense of missing a small number of malignancies. The sharp reduction in the number of identified regions allowed the identification of textural differences between large and small mammographic regions. We find that the classification performances of the features that achieve the lowest average FPpI are influenced by all three of the parameters.
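A DoG template as described, two centered, normalized bivariate Gaussians subtracted from one another, can be constructed as follows (a plain-Python sketch; the study's filters may differ in normalization details):

```python
import math

def dog_filter(size, sigma1, sigma2):
    """Difference-of-Gaussians template: subtract two centered, unit-sum
    isotropic bivariate Gaussians with standard deviations sigma1 < sigma2.
    `size` is the (odd) template width in pixels."""
    c = size // 2  # center pixel

    def gauss(sigma):
        g = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
              for x in range(size)] for y in range(size)]
        total = sum(map(sum, g))
        return [[v / total for v in row] for row in g]  # normalize to 1

    g1, g2 = gauss(sigma1), gauss(sigma2)
    return [[a - b for a, b in zip(r1, r2)] for r1, r2 in zip(g1, g2)]
```

The template sums to zero (positive center, negative surround), which is what makes it respond to blob-like regions of roughly the Gaussians' scale, the mechanism behind the sensitivity of region counts to the standard deviations reported above.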
Barker, C.E.; Pawlewicz, M.J.
1993-01-01
In coal samples, published recommendations based on statistical methods suggest 100 measurements are needed to estimate the mean random vitrinite reflectance (Rv-r) to within ±2%. Our survey of published thermal maturation studies indicates that those using dispersed organic matter (DOM) mostly have an objective of acquiring 50 reflectance measurements. This smaller objective size in DOM versus that for coal samples poses a statistical contradiction because the standard deviations of DOM reflectance distributions are typically larger, indicating that a greater sample size is needed to accurately estimate Rv-r in DOM. However, in studies of thermal maturation using DOM, even 50 measurements can be an unrealistic requirement given the small amount of vitrinite often found in such samples. Furthermore, there is generally a reduced need for assuring precision like that needed for coal applications. Therefore, a key question in thermal maturation studies using DOM is how many measurements of Rv-r are needed to adequately estimate the mean. Our empirical approach to this problem is to compute the reflectance distribution statistics: mean, standard deviation, skewness, and kurtosis in increments of 10 measurements. This study compares these intermediate computations of Rv-r statistics with a final one computed using all measurements for that sample. Vitrinite reflectance was measured on mudstone and sandstone samples taken from borehole M-25 in the Cerro Prieto, Mexico geothermal system, which was selected because the rocks have a wide range of thermal maturation and a comparable humic DOM with depth. The results of this study suggest that after only 20-30 measurements the mean Rv-r is generally known to within 5% and always to within 12% of the mean Rv-r calculated using all of the measured particles. 
Thus, even in the worst case, the precision after measuring only 20-30 particles is in good agreement with the general precision of one decimal place recommended for mean Rv-r measurements on DOM. The coefficient of variation (V = standard deviation/mean) is proposed as a statistic to indicate the reliability of the mean Rv-r estimates made at n ≥ 20. This preliminary study suggests that a V ≤ 0.2 indicates a reliable mean, whereas a larger V suggests an unreliable mean in such small samples. © 1993.
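The incremental computation described above, recomputing the distribution statistics every 10 measurements and tracking the coefficient of variation, can be sketched as follows (mean, SD and V only; skewness and kurtosis omitted for brevity):

```python
import math

def incremental_stats(measurements, step=10):
    """Mean, sample SD and coefficient of variation recomputed every
    `step` measurements, mirroring the incremental approach above."""
    out = []
    for n in range(step, len(measurements) + 1, step):
        sample = measurements[:n]
        mean = sum(sample) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
        out.append({"n": n, "mean": mean, "sd": sd, "cv": sd / mean})
    return out
```

Watching how `mean` and `cv` stabilize across increments is the practical stopping criterion the study proposes for vitrinite-poor samples.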
NASA Astrophysics Data System (ADS)
Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.
2017-11-01
The realization of astrophysical research requires the development of high-sensitivity centimeter-band parabolic space radio telescopes (SRT) with large mirrors. Structurally, an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure. Mesh structures of this size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope, with a 10 m diameter mirror, is currently being developed in Russia within the "SPECTR-R" program. The external dimensions of the telescope exceed the size of the existing thermal-vacuum chambers used to verify SRT reflecting-surface accuracy under the action of space environment factors. Numerical simulation is therefore the basis required to accept the chosen design. Such modeling should be based on experimental characterization of the basic structural materials and elements of the future reflector. This article considers computational modeling of the reflecting-surface deviations of a large deployable centimeter-band space reflector during its orbital operation. The factors that determine the deviations are analyzed, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation faults; deformations caused by the behavior of composite materials in space). A finite-element model and a set of methods are developed that allow computational modeling of the reflecting-surface deviations caused by all factors and account for deviation correction by the spacecraft orientation system. Modeling results for two operating modes (orientations relative to the Sun) of the SRT are presented.
Taghizadeh, Somayeh; Yang, Claus Chunli; R. Kanakamedala, Madhava; Morris, Bart; Vijayakumar, Srinivasan
2017-01-01
Purpose Magnetic resonance (MR) images are necessary for accurate contouring of intracranial targets, determination of gross target volume and evaluation of organs at risk during stereotactic radiosurgery (SRS) treatment planning procedures. Many centers use magnetic resonance imaging (MRI) simulators or regular diagnostic MRI machines for SRS treatment planning; while both types of machine require two stages of quality control (QC), both machine- and patient-specific, before use for SRS, no accepted guidelines for such QC currently exist. This article describes appropriate machine-specific QC procedures for SRS applications. Methods and materials We describe the adaptation of American College of Radiology (ACR)-recommended QC tests using an ACR MRI phantom for SRS treatment planning. In addition, commercial Quasar MRID3D and Quasar GRID3D phantoms were used to evaluate the effects of static magnetic field (B0) inhomogeneity, gradient nonlinearity, and a Leksell G frame (SRS frame) and its accessories on geometrical distortion in MR images. Results QC procedures found in-plane distortions in the X-direction (Maximum = 3.5 mm, Mean = 0.91 mm, Standard deviation = 0.67 mm, >2.5 mm (%) = 2), in the Y-direction (Maximum = 2.51 mm, Mean = 0.52 mm, Standard deviation = 0.39 mm, >2.5 mm (%) = 0), and in the Z-direction (Maximum = 13.1 mm, Mean = 2.38 mm, Standard deviation = 2.45 mm, >2.5 mm (%) = 34), with < 1 mm distortion at a head-sized region of interest. MR images acquired using a Leksell G frame and localization devices showed a mean absolute deviation of 2.3 mm from isocenter. The results of modified ACR tests were all within recommended limits, and baseline measurements have been defined for regular weekly QC tests. Conclusions With appropriate QC procedures in place, it is possible to routinely obtain clinically useful MR images suitable for SRS treatment planning purposes. 
MRI examination for SRS planning can benefit from the improved localization and planning possible with the superior image quality and soft tissue contrast achieved under optimal conditions. PMID:29487771
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests
Duncanson, L.; Rourke, O.; Dubayah, R.
2015-01-01
Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate from 10,000 to more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from −4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation. PMID:26598233
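The sample-size sensitivity described above can be illustrated with a small simulation. This is an illustrative sketch only: the power-law parameters, noise level and sample sizes are invented, not the paper's data; it shows how allometric exponent estimates from small samples scatter far more than those from LiDAR-scale samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_allometry(n, a=2.0, b=0.6, sigma=0.3):
    """Fit ln(height) = ln(a) + b*ln(radius) to n noisy trees; return the exponent estimate."""
    r = rng.uniform(0.5, 5.0, n)                 # crown radii (m), hypothetical range
    h = a * r**b * rng.lognormal(0.0, sigma, n)  # heights with multiplicative noise
    slope, _ = np.polyfit(np.log(r), np.log(h), 1)
    return slope

small = [fit_allometry(25) for _ in range(500)]    # destructive-harvest-scale sample
large = [fit_allometry(5000) for _ in range(500)]  # LiDAR-scale sample

# Small samples give far noisier (less reliable) exponent estimates
print(np.std(small), np.std(large))
```

The spread of the small-sample estimates is what propagates into site-level biomass error when such equations are reused.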
Santric-Milicevic, M; Vasic, V; Terzic-Supic, Z
2016-08-15
In times of austerity, the availability of econometric health knowledge assists policy-makers in understanding and balancing health expenditure with health care plans within fiscal constraints. The objective of this study is to explore whether the health workforce supply of the public health care sector, population number, and utilization of inpatient care significantly contribute to total health expenditure. The dependent variable is the total health expenditure (THE) in Serbia from the years 2003 to 2011. The independent variables are the number of health workers employed in the public health care sector, population number, and inpatient care discharges per 100 population. The statistical analyses include the quadratic interpolation method, natural logarithm and differentiation, and multiple linear regression analyses. The level of significance is set at P < 0.05. The regression model captures 90% of all variation in the observed dependent variable (adjusted R squared), and the model is significant (P < 0.001). Total health expenditure increased by 1.21 standard deviations, with an increase in health workforce growth rate by 1 standard deviation. Furthermore, this rate decreased by 1.12 standard deviations, with an increase in (negative) population growth rate by 1 standard deviation. Finally, the growth rate increased by 0.38 standard deviation, with an increase in the growth rate of inpatient care discharges per 100 population by 1 standard deviation (P < 0.001). Study results demonstrate that the government has been making an effort to strongly control health budget growth. Exploring causality relationships between health expenditure and health workforce is important for countries that are trying to consolidate their public health finances and achieve universal health coverage at the same time.
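The "per standard deviation" effects reported above are standardized regression coefficients: the change in the outcome, in standard deviations, per one-standard-deviation change in a predictor. A minimal sketch with synthetic growth rates (the coefficient values and data below are invented for illustration, not the Serbian series):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical yearly growth rates for 9 years (the study covered 2003-2011)
n = 9
workforce = rng.normal(0, 1, n)
population = rng.normal(0, 1, n)
discharges = rng.normal(0, 1, n)
the_growth = 1.2 * workforce - 1.1 * population + 0.4 * discharges + rng.normal(0, 0.1, n)

def zscore(x):
    return (x - x.mean()) / x.std()

# Standardized (beta) coefficients: regress z-scored y on z-scored predictors;
# no intercept is needed because every variable has mean zero after z-scoring
X = np.column_stack([zscore(workforce), zscore(population), zscore(discharges)])
y = zscore(the_growth)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(betas)  # signs mirror the abstract: +workforce, -population, +discharges
```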
NASA Astrophysics Data System (ADS)
Li, Xiaoliang; Luo, Lei; Li, Pengwei; Yu, Qingkui
2018-03-01
The image sensor in satellite optical communication system may generate noise due to space irradiation damage, leading to deviation for the determination of the light spot centroid. Based on the irradiation test data of CMOS devices, simulated defect spots in different sizes have been used for calculating the centroid deviation value by grey-level centroid algorithm. The impact on tracking & pointing accuracy of the system has been analyzed. The results show that both the amount and the position of irradiation-induced defect pixels contribute to spot centroid deviation. And the larger spot has less deviation. At last, considering the space radiation damage, suggestions are made for the constraints of spot size selection.
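The grey-level centroid algorithm mentioned above weights each pixel coordinate by its intensity. A minimal sketch showing how a single irradiation-induced defect (hot) pixel shifts the computed spot centroid; the 5×5 spot is a toy example, not test data:

```python
import numpy as np

def grey_level_centroid(img):
    """Intensity-weighted centroid (grey-level centroid algorithm)."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

# A symmetric spot centred at (x, y) = (2, 2) on a 5x5 sensor
spot = np.zeros((5, 5))
spot[1:4, 1:4] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
print(grey_level_centroid(spot))  # (2.0, 2.0)

# A single hot defect pixel in the corner pulls the centroid toward it
spot[0, 4] = 4.0
print(grey_level_centroid(spot))
```

A larger spot dilutes the relative weight of any one defect pixel, which is consistent with the abstract's observation that larger spots show less deviation.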
Effect of multizone refractive multifocal contact lenses on standard automated perimetry.
Madrid-Costa, David; Ruiz-Alcocer, Javier; García-Lázaro, Santiago; Albarrán-Diego, César; Ferrer-Blasco, Teresa
2012-09-01
The aim of this study was to evaluate whether the creation of 2 foci (distance and near) provided by multizone refractive multifocal contact lenses (CLs) for presbyopia correction affects the measurements on Humphrey 24-2 Swedish interactive threshold algorithm (SITA) standard automated perimetry (SAP). In this crossover study, 30 subjects were fitted in random order with either a multifocal CL or a monofocal CL. After 1 month, a Humphrey 24-2 SITA standard strategy was performed. The visual field global indices (the mean deviation [MD] and pattern standard deviation [PSD]), reliability indices, test duration, and number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% on pattern deviation probability plots were determined and compared between multifocal and monofocal CLs. Thirty eyes of 30 subjects were included in this study. There were no statistically significant differences in reliability indices or test duration. There was a statistically significant reduction in the MD with the multifocal CL compared with the monofocal CL (P=0.001). Differences were found neither in PSD nor in the number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% in the pattern deviation probability maps studied. The results of this study suggest that the multizone refractive lens produces a generalized depression in threshold sensitivity as measured by the Humphrey 24-2 SITA SAP.
Top quark forward-backward asymmetry and same-sign top quark pairs.
Berger, Edmond L; Cao, Qing-Hong; Chen, Chuan-Ren; Li, Chong Sheng; Zhang, Hao
2011-05-20
The top quark forward-backward asymmetry measured at the Tevatron collider shows a large deviation from standard model expectations. Among possible interpretations, a nonuniversal Z' model is of particular interest as it naturally predicts a top quark in the forward region of large rapidity. To reproduce the size of the asymmetry, the couplings of the Z' to standard model quarks must be large, inevitably leading to copious production of same-sign top quark pairs at the energies of the Large Hadron Collider (LHC). We explore the discovery potential for tt and ttj production in early LHC experiments at 7-8 TeV and conclude that if no tt signal is observed with 1 fb⁻¹ of integrated luminosity, then a nonuniversal Z' alone cannot explain the Tevatron forward-backward asymmetry.
Hungarian norms for the Harvard Group Scale of Hypnotic Susceptibility, Form A.
Költő, András; Gősi-Greguss, Anna C; Varga, Katalin; Bányai, Éva I
2015-01-01
Hungarian norms for the Harvard Group Scale of Hypnotic Susceptibility, Form A (HGSHS:A) are presented. The Hungarian translation of the HGSHS:A was administered under standard conditions to 434 participants (190 males, 244 females) of several professions. In addition to the traditional self-scoring, hypnotic behavior was also recorded by trained observers. Female participants proved to be more hypnotizable than males, as were psychology students and professionals compared to nonpsychologists. Hypnotizability varied across different group sizes. The normative data, including means, standard deviations, and indicators of reliability, are comparable with previously published results. The authors conclude that measuring observer scores increases the ecological validity of the scale. The Hungarian version of the HGSHS:A seems to be a reliable and valid measure of hypnotizability.
NASA Astrophysics Data System (ADS)
Mazzoleni, Paolo; Matta, Fabio; Zappa, Emanuele; Sutton, Michael A.; Cigada, Alfredo
2015-03-01
This paper discusses the effect of pre-processing image blurring on the uncertainty of two-dimensional digital image correlation (DIC) measurements for the specific case of numerically-designed speckle patterns having particles with well-defined and consistent shape, size and spacing. Such patterns are more suitable for large measurement surfaces on large-scale specimens than traditional spray-painted random patterns without well-defined particles. The methodology consists of numerical simulations where Gaussian digital filters with varying standard deviation are applied to a reference speckle pattern. To simplify the pattern application process for large areas and increase contrast to reduce measurement uncertainty, the speckle shape, mean size and on-center spacing were selected to be representative of numerically-designed patterns that can be applied on large surfaces through different techniques (e.g., spray-painting through stencils). Such 'designer patterns' are characterized by well-defined regions of non-zero frequency content and non-zero peaks, and are fundamentally different from typical spray-painted patterns whose frequency content exhibits near-zero peaks. The effect of blurring filters is examined for constant, linear, quadratic and cubic displacement fields. Maximum strains between ±250 and ±20,000 με are simulated, thus covering a relevant range for structural materials subjected to service and ultimate stresses. The robustness of the simulation procedure is verified experimentally using a physical speckle pattern subjected to constant displacements. The stability of the relation between standard deviation of the Gaussian filter and measurement uncertainty is assessed for linear displacement fields at varying image noise levels, subset size, and frequency content of the speckle pattern. It is shown that bias error as well as measurement uncertainty are minimized through Gaussian pre-filtering. 
This finding does not apply to typical spray-painted patterns without well-defined particles, for which image blurring is only beneficial in reducing bias errors.
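The pre-processing step studied above, applying a Gaussian blur of varying standard deviation to a numerically-designed speckle pattern, can be sketched as follows. The pattern geometry and the gradient-based metric are assumptions chosen for illustration; they stand in for, but do not reproduce, the paper's full DIC uncertainty analysis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic "designer" speckle: 4x4-pixel square particles on an 8-pixel pitch,
# i.e. particles with well-defined, consistent shape, size and spacing
cell = np.zeros((8, 8))
cell[2:6, 2:6] = 1.0
pattern = np.tile(cell, (16, 16))  # 128x128 pattern

for sigma in (0.0, 0.5, 1.0, 2.0):
    blurred = gaussian_filter(pattern, sigma)
    # Mean gradient magnitude: a rough proxy for the high spatial-frequency
    # content that drives interpolation bias in subpixel DIC matching
    gy, gx = np.gradient(blurred)
    print(sigma, np.hypot(gx, gy).mean())
```

Increasing the filter's standard deviation progressively attenuates the sharp edges of the particles, which is the mechanism by which pre-filtering reduces interpolation-induced bias.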
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wen, N; Lu, S; Qin, Y
Purpose: To evaluate the dosimetric uncertainty associated with Gafchromic (EBT3) films and establish an absolute dosimetry protocol for Stereotactic Radiosurgery (SRS) and Stereotactic Body Radiotherapy (SBRT). Methods: EBT3 films were irradiated at each of seven different dose levels between 1 and 15 Gy with open fields, and standard deviations of dose maps were calculated at each color channel for evaluation. A scanner non-uniform response correction map was built by registering and comparing film doses to the reference diode array-based dose map delivered with the same doses. To determine the temporal dependence of EBT3 films, the average correction factors of different dose levels as a function of time were evaluated up to four days after irradiation. An integrated film dosimetry protocol was developed for dose calibration, calibration curve fitting, dose mapping, and profile/gamma analysis. Patient-specific quality assurance (PSQA) was performed for 93 SRS/SBRT treatment plans. Results: The scanner response varied within 1% for field sizes less than 5 × 5 cm², and up to 5% for field sizes of 10 × 10 cm². The scanner correction method was able to remove visually evident, irregular detector responses found for larger field sizes. The dose response of the film changed rapidly (∼10%) in the first two hours and plateaued afterwards, with a ∼3% change between 2 and 24 hours. The mean uncertainties (mean of the standard deviations) were <0.5% over the dose range 1–15 Gy for all color channels for the OD response curves. The percentage of points passing the 3%/1mm gamma criteria based on absolute dose analysis, averaged over all tests, was 95.0 ± 4.2. Conclusion: We have developed an absolute film dosimetry protocol using EBT3 films. The overall uncertainty has been established to be approximately 1% for SRS and SBRT PSQA. The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE from the American Cancer Society.
Stenzel, O; Wilbrandt, S; Wolf, J; Schürmann, M; Kaiser, N; Ristau, D; Ehlers, H; Carstens, F; Schippel, S; Mechold, L; Rauhut, R; Kennedy, M; Bischoff, M; Nowitzki, T; Zöller, A; Hagedorn, H; Reus, H; Hegemann, T; Starke, K; Harhausen, J; Foest, R; Schumacher, J
2017-02-01
Random effects in the repeatability of refractive index and absorption edge position of tantalum pentoxide layers prepared by plasma-ion-assisted electron-beam evaporation, ion beam sputtering, and magnetron sputtering are investigated and quantified. Standard deviations in refractive index between 4×10⁻⁴ and 4×10⁻³ have been obtained. The lowest standard deviations in refractive index, close to our detection threshold, could be achieved by both ion beam sputtering and plasma-ion-assisted deposition. In relation to the corresponding mean values, the standard deviations in band-edge position and refractive index are of similar order.
Texture-based segmentation of temperate-zone woodland in panchromatic IKONOS imagery
NASA Astrophysics Data System (ADS)
Gagnon, Langis; Bugnet, Pierre; Cavayas, Francois
2003-08-01
We have performed a study to identify optimal texture parameters for woodland segmentation in a highly non-homogeneous urban area from a temperate-zone panchromatic IKONOS image. Texture images are produced with sum- and difference-histograms, which depend on two parameters: window size f and displacement step p. The four texture features yielding the best discrimination between classes are the mean, contrast, correlation and standard deviation. The f-p combinations 17-1, 17-2, 35-1 and 35-2 give the best performance, with an average classification rate of 90%.
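The four sum- and difference-histogram features named above (Unser-style mean, contrast, correlation and standard deviation) can be sketched for a single displacement. This is a hedged toy implementation over a whole image rather than a sliding window, and it estimates the features from pixel-pair moments rather than explicit histograms:

```python
import numpy as np

def sum_diff_features(img, dx=1, dy=0):
    """Unser-style texture features from sums (a+b) and differences (a-b)
    of pixel pairs separated by displacement (dx, dy)."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    a = img[: h - dy, : w - dx]
    b = img[dy:, dx:]
    s, d = a + b, a - b
    var_s, var_d = s.var(), d.var()
    variance = (var_s + var_d) / 4.0    # per-pixel variance estimate
    covariance = (var_s - var_d) / 4.0  # covariance of the pixel pair
    return {
        "mean": s.mean() / 2.0,
        "contrast": (d ** 2).mean(),
        "correlation": covariance / variance,
        "std": np.sqrt(variance),
    }

rng = np.random.default_rng(0)
print(sum_diff_features(np.tile(np.arange(32.0), (32, 1))))  # smooth ramp: correlation ~ 1
print(sum_diff_features(rng.normal(size=(64, 64))))          # white noise: correlation ~ 0
```

In the paper's setting these statistics would be computed in a local window of size f with displacement p, producing one texture image per feature.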
Optimizing a remote sensing instrument to measure atmospheric surface pressure
NASA Technical Reports Server (NTRS)
Peckham, G. E.; Gatley, C.; Flower, D. A.
1983-01-01
Atmospheric surface pressure can be remotely sensed from a satellite by an active instrument which measures return echoes from the ocean at frequencies near the 60 GHz oxygen absorption band. The instrument is optimized by selecting its frequencies of operation, transmitter powers and antenna size through a new procedure based on numerical simulation which maximizes the retrieval accuracy. The predicted standard deviation error in the retrieved surface pressure is 1 mb. In addition, the measurements can be used to retrieve water vapor, cloud liquid water and sea state, which is related to wind speed.
Pleil, Joachim D
2016-01-01
This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.
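The distinction at issue is the classic one: the standard deviation describes the spread of individual measurements, while the standard error of the mean describes the precision of the estimated group mean. A minimal sketch with hypothetical biomarker values:

```python
import numpy as np

rng = np.random.default_rng(42)
biomarker = rng.normal(50.0, 10.0, 100)  # hypothetical biomarker measurements, n = 100

sd = biomarker.std(ddof=1)           # spread of individual results (use for prediction)
sem = sd / np.sqrt(biomarker.size)   # precision of the mean (use for comparing groups)

# SD answers "what range do individual results span?"
# SEM answers "how precisely is the group mean estimated?"
print(sd, sem)
```

Because SEM shrinks with sample size while SD does not, quoting SEM where SD is meant makes measurements look far more consistent than they are.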
Park, Jong Min; Park, So-Yeon; Chun, Minsoo; Kim, Sang-Tae
2017-08-01
To investigate and improve the domestic standard of radiation therapy in the Republic of Korea, on-site audits were performed for 13 institutions. Six items were investigated by on-site visits to each radiation therapy institution, including collimator, gantry, and couch rotation isocenter check; coincidence between light and radiation fields; photon beam flatness and symmetry; electron beam flatness and symmetry; physical wedge transmission factors; and photon beam and electron beam outputs. The average deviations of mechanical collimator, gantry, and couch rotation isocenter were less than 1mm. Those of radiation isocenter were also less than 1mm. The average difference between light and radiation fields was 0.9±0.6mm for the field size of 20cm×20cm. The average values of flatness and symmetry of the photon beams were 2.9%±0.6% and 1.1%±0.7%, respectively. Those of electron beams were 2.5%±0.7% and 0.6%±1.0%, respectively. Every institution showed wedge transmission factor deviations of less than 2%, except one institution. The output deviations of both photon and electron beams were less than ±3% for every institution. Through the on-site audit program, we could effectively detect inappropriately operating linacs and provide recommendations. The standard of radiation therapy in Korea is expected to improve through such on-site audits. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Hopper, John L.
2015-01-01
How can the “strengths” of risk factors, in the sense of how well they discriminate cases from controls, be compared when they are measured on different scales such as continuous, binary, and integer? Risk estimates take into account other fitted and design-related factors, and that is how risk gradients are interpreted, so the presentation of risk gradients should do the same. Therefore, for each risk factor X0, I propose using appropriate regression techniques to derive from appropriate population data the best fitting relationship between the mean of X0 and all the other covariates fitted in the model or adjusted for by design (X1, X2, …, Xn). The odds per adjusted standard deviation (OPERA) presents the risk association for X0 in terms of the change in risk per s = standard deviation of X0 adjusted for X1, X2, …, Xn, rather than the unadjusted standard deviation of X0 itself. If the increased risk is relative risk (RR)-fold over A adjusted standard deviations, then OPERA = exp[ln(RR)/A] = RR^(1/A). This unifying approach is illustrated by considering breast cancer and published risk estimates. OPERA estimates are by definition independent and can be used to compare the predictive strengths of risk factors across diseases and populations. PMID:26520360
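The OPERA rescaling is a one-line transformation of a relative risk observed across A adjusted standard deviations onto a per-1-SD basis. A minimal sketch (the RR = 4 over A = 2 example is invented for illustration):

```python
import math

def opera(rr, a):
    """Odds PER Adjusted standard deviation: rescale a relative risk `rr`
    observed across `a` adjusted standard deviations to a per-1-SD basis,
    OPERA = exp(ln(RR)/A) = RR**(1/A)."""
    return math.exp(math.log(rr) / a)

# A factor conferring RR = 4 across 2 adjusted standard deviations
print(opera(4.0, 2.0))  # ~2.0 per adjusted standard deviation
```

Because every factor is expressed per one adjusted standard deviation, OPERA values for continuous, binary and integer-scaled factors become directly comparable.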
NASA Astrophysics Data System (ADS)
Muji Susantoro, Tri; Wikantika, Ketut; Saepuloh, Asep; Handoyo Harsolumakso, Agus
2018-05-01
Selection of vegetation indices in plant mapping is needed to provide the best information on plant conditions. The methods used in this research are standard deviation analysis and linear regression. This research tried to determine the vegetation indices best suited for mapping sugarcane conditions around oil and gas fields. The data used in this study are Landsat 8 OLI/TIRS images. The standard deviation analysis of 23 vegetation indices with 27 samples yielded the six indices with the highest standard deviations: GRVI, SR, NLI, SIPI, GEMI and LAI, with standard deviation values of 0.47, 0.43, 0.30, 0.17, 0.16 and 0.13, respectively. Regression correlation analysis of the 23 vegetation indices with 280 samples yielded six vegetation indices: NDVI, ENDVI, GDVI, VARI, LAI and SIPI, selected on the basis of regression correlations with R2 values of at least 0.8. The combined analysis of standard deviation and regression correlation yielded five vegetation indices: NDVI, ENDVI, GDVI, LAI and SIPI. The results of both methods show that a combination of the two is needed to produce a good analysis of sugarcane conditions. This was confirmed through field surveys, which showed good results for the prediction of microseepages.
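The screening step above, ranking candidate indices by their standard deviation across sample plots, can be sketched with two of the named indices. The reflectance values are invented for illustration and the index formulas (NDVI and SR) are standard definitions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical surface reflectances for 27 sugarcane sample plots
red = rng.uniform(0.03, 0.12, 27)
nir = rng.uniform(0.25, 0.50, 27)

indices = {
    "NDVI": (nir - red) / (nir + red),  # normalized difference vegetation index
    "SR": nir / red,                    # simple ratio
}

# Rank indices by standard deviation across samples: a larger spread
# means better separation of plant-condition classes
for name, values in sorted(indices.items(), key=lambda kv: kv[1].std(), reverse=True):
    print(name, round(float(values.std()), 3))
```

Note that raw standard deviations are scale-dependent (SR spans a much wider numeric range than NDVI), which is one reason the paper pairs this criterion with a regression-correlation criterion.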
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filipuzzi, M; Garrigo, E; Venencia, C
2014-06-01
Purpose: To calculate the spatial response function of various radiation detectors, to evaluate the dependence on the field size and to analyze the correction of small-field profiles by deconvolution techniques. Methods: Crossline profiles were measured on a Novalis Tx 6MV beam with a HDMLC. The configuration setup was SSD=100cm and depth=5cm. Five fields were studied (200×200 mm², 100×100 mm², 20×20 mm², 10×10 mm² and 5×5 mm²) and measurements were made with passive detectors (EBT3 radiochromic films and TLD700 thermoluminescent detectors), ionization chambers (PTW30013, PTW31003, CC04 and PTW31016) and diodes (PTW60012 and IBA SFD). The results of the passive detectors were adopted as the actual beam profile. To calculate the detector kernels, modeled by Gaussian functions, an iterative process based on a least squares criterion was used. The deconvolutions of the measured profiles were calculated with the Richardson-Lucy method. Results: The profiles of the passive detectors corresponded with a difference in the penumbra of less than 0.1mm. Both diodes resolve the profiles with an overestimation of the penumbra smaller than 0.2mm. For the other detectors, response functions were calculated and resulted in Gaussian functions with a standard deviation approximately equal to the radius of the detector in study (with a variation less than 3%). The corrected profiles resolve the penumbra with less than 1% error. Major discrepancies were observed for cases in extreme conditions (PTW31003 and 5×5 mm² field size). Conclusion: This work concludes that the response function of a radiation detector is independent of the field size, even for small radiation beams. The profiles correction, using deconvolution techniques and response functions with standard deviation equal to the radius of the detector, gives penumbra values with less than 1% difference from the real profile.
The implementation of this technique allows estimating the real profile, free from the effects of the detector used for the acquisition.
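The Richardson-Lucy correction can be sketched in one dimension: blur an idealized flat-top profile with a Gaussian kernel whose standard deviation plays the role of the detector radius, then iteratively restore it. The profile shape, pixel scale and iteration count are illustrative assumptions, not the paper's measurement setup:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel (the detector response model)."""
    radius = radius or int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def richardson_lucy(measured, kernel, iterations=200):
    """1-D Richardson-Lucy deconvolution of a measured profile."""
    estimate = np.full_like(measured, measured.mean())
    mirror = kernel[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, kernel, mode="same")
        ratio = measured / np.maximum(blurred, 1e-12)  # guard against divide-by-zero
        estimate *= np.convolve(ratio, mirror, mode="same")
    return estimate

# "True" beam profile: a 25-pixel flat top on a 201-pixel axis
true_profile = np.zeros(201)
true_profile[88:113] = 1.0

# Detector blurring: Gaussian with sigma equal to a hypothetical detector radius (pixels)
kernel = gaussian_kernel(sigma=10)
measured = np.convolve(true_profile, kernel, mode="same")

restored = richardson_lucy(measured, kernel)
print(np.abs(measured - true_profile).mean(), np.abs(restored - true_profile).mean())
```

The restored profile recovers a much steeper penumbra than the detector-blurred measurement, which is the effect the paper quantifies to better than 1%.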
The association between body fat and rotator cuff tear: the influence on rotator cuff tear sizes.
Gumina, Stefano; Candela, Vittorio; Passaretti, Daniele; Latino, Gianluca; Venditto, Teresa; Mariani, Laura; Santilli, Valter
2014-11-01
Rotator cuff tear (RCT) has a multifactorial etiology. We hypothesized that obesity may increase the risk of RCT and influence tear size. A case-control study design was used. We studied 381 consecutive patients (180 men, 201 women; mean age ± standard deviation, 65.5 ± 8.52 years; range, 43-78 years) who underwent arthroscopic rotator cuff repair. Tear size was determined intraoperatively. The control group included 220 subjects (103 men, 117 women; mean age ± standard deviation, 65.16 ± 7.24 years; range, 42-77 years) with no RCT. Body weight, height, and bicipital, tricipital, subscapularis, and suprailiac skinfolds of all participants were measured to obtain body mass index (BMI) and the percentage of body fat (%BF). For the purposes of the study, the 601 participants were divided into 2 groups by BMI (group A, BMI ≥ 25; group B, BMI < 25). The odds ratios (ORs) were calculated to investigate whether adiposity affects the risk of RCT. Data were stratified according to gender and age. Multiple linear regression analyses were applied to explore the association between obesity and tear size. The highest ORs for both men (OR, 2.49; 95% confidence interval, 1.41-3.90; P = .0037) and women (OR, 2.31; 95% confidence interval, 1.38-3.62; P = .0071) were for individuals with a BMI ≥ 30; 69% (N = 303) of group A and 48% (N = 78) of group B had RCTs. Patients with RCT had a BMI higher than that of subjects with no RCT in both groups (P = .031, group A; P = .02, group B). BMI and %BF significantly increased from patients with a small tear (BMI, 27.85; %BF, 37.63) to those with a massive RCT (BMI, 29.93; %BF, 39.43). Significant differences were found (P = .004; P = .031). Our results provide evidence that obesity, measured through BMI and %BF, is a significant risk factor for the occurrence and severity of RCT. Copyright © 2014. Published by Elsevier Inc.
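The odds ratios above come from standard 2×2-table arithmetic. A minimal sketch with invented counts (not the study's raw data), using the Wald log-OR confidence interval:

```python
import math

def odds_ratio_ci(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls, z=1.96):
    """Odds ratio with a Wald 95% confidence interval from a 2x2 table."""
    a, b, c, d = exposed_cases, exposed_controls, unexposed_cases, unexposed_controls
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: obese vs non-obese, with vs without RCT
print(odds_ratio_ci(60, 25, 40, 60))
```

An interval whose lower bound stays above 1, as in both ORs reported above, indicates a statistically significant association.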
Implant Size Availability Affects Reproduction of Distal Femoral Anatomy.
Morris, William Z; Gebhart, Jeremy J; Goldberg, Victor M; Wera, Glenn D
2016-07-01
A total knee arthroplasty system offers more distal femoral implant anterior-posterior (AP) sizes than its predecessor. The purpose of this study is to investigate the impact of increased size availability on an implant system's ability to reproduce the AP dimension of the native distal femur. We measured 200 cadaveric femora with the AP-sizing guides of Zimmer (Warsaw, IN) NexGen (8 sizes) and Zimmer Persona (12 sizes) total knee arthroplasty systems. We defined "size deviation" as the difference in the AP dimension between the anatomic size of the native femur and the closest implant size. We defined satisfactory reproduction of distal femoral dimensions as < 1 mm difference between the implant and native femur size. The NexGen system was associated with a mean 0.46 mm greater implant size deviation than Persona (p < 0.001). When using a 1 mm size deviation as a cutoff for satisfactory replication of the native distal femoral anatomy, 85/200 specimens (42.5%) were a poor fit by NexGen, but a satisfactory fit by Persona. Only 1/200 specimens (0.5%) was a poor fit by Persona, but a satisfactory fit by NexGen (p < 0.001). The novel knee system with 12 versus 8 sizes reproduces the AP dimension of the native distal femur more closely than its predecessor. Further study is needed to determine the clinical impact of these differences. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Remote auditing of radiotherapy facilities using optically stimulated luminescence dosimeters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lye, Jessica, E-mail: jessica.lye@arpansa.gov.au; Dunn, Leon; Kenny, John
Purpose: On 1 July 2012, the Australian Clinical Dosimetry Service (ACDS) released its Optically Stimulated Luminescent Dosimeter (OSLD) Level I audit, replacing the previous TLD based audit. The aim of this work is to present the results from this new service and the complete uncertainty analysis on which the audit tolerances are based. Methods: The audit release was preceded by a rigorous evaluation of the InLight® nanoDot OSLD system from Landauer (Landauer, Inc., Glenwood, IL). Energy dependence, signal fading from multiple irradiations, batch variation, reader variation, and dose response factors were identified and quantified for each individual OSLD. The detectors are mailed to the facility in small PMMA blocks, based on the design of the existing Radiological Physics Centre audit. Modeling and measurement were used to determine a factor that could convert the dose measured in the PMMA block to dose in water for the facility's reference conditions. This factor is dependent on the beam spectrum. The TPR20,10 was used as the beam quality index to determine the specific block factor for a beam being audited. The audit tolerance was defined using a rigorous uncertainty calculation. The audit outcome is then determined using a scientifically based two-tiered action level approach. Audit outcomes within two standard deviations were defined as Pass (Optimal Level), within three standard deviations as Pass (Action Level), and outside of three standard deviations as Fail (Out of Tolerance). Results: To date, the ACDS has audited 108 photon beams with TLD and 162 photon beams with OSLD. The TLD audit results had an average deviation from ACDS of 0.0% and a standard deviation of 1.8%. The OSLD audit results had an average deviation of −0.2% and a standard deviation of 1.4%. The relative combined standard uncertainty was calculated to be 1.3% (1σ).
Pass (Optimal Level) was reduced to ≤2.6% (2σ), and Fail (Out of Tolerance) was reduced to >3.9% (3σ) for the new OSLD audit. Previously, with the TLD audit, the Pass (Optimal Level) and Fail (Out of Tolerance) levels were set at ≤4.0% (2σ) and >6.0% (3σ). Conclusions: The calculated standard uncertainty of 1.3% at one standard deviation is consistent with the measured standard deviation of 1.4% from the audits, confirming the suitability of the uncertainty budget derived audit tolerances. The OSLD audit shows greater accuracy than the previous TLD audit, justifying the reduction in audit tolerances. In the TLD audit, all outcomes were Pass (Optimal Level), suggesting that the tolerances were too conservative. In the OSLD audit, 94% of the audits have resulted in Pass (Optimal Level) and 6% have resulted in Pass (Action Level). All Pass (Action Level) results have been resolved with a repeat OSLD audit or an on-site ion chamber measurement.
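The two-tiered action-level logic described above maps a measured deviation and the audit's standard uncertainty onto an outcome. A minimal sketch using the 1.3% (1σ) figure from the text, so the tier boundaries come out at the quoted 2.6% and 3.9%:

```python
def audit_outcome(deviation_pct, sigma=1.3):
    """Two-tiered action-level classification:
    within 2 sigma  -> Pass (Optimal Level)
    within 3 sigma  -> Pass (Action Level)
    beyond 3 sigma  -> Fail (Out of Tolerance)"""
    d = abs(deviation_pct)
    if d <= 2 * sigma:
        return "Pass (Optimal Level)"
    if d <= 3 * sigma:
        return "Pass (Action Level)"
    return "Fail (Out of Tolerance)"

print(audit_outcome(1.0))   # within 2.6%
print(audit_outcome(-3.0))  # between 2.6% and 3.9%
print(audit_outcome(4.5))   # beyond 3.9%
```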
SU-F-J-177: A Novel Image Analysis Technique (center Pixel Method) to Quantify End-To-End Tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wen, N; Chetty, I; Snyder, K
Purpose: To implement a novel image analysis technique, the “center pixel method”, to quantify the end-to-end test accuracy of a frameless, image-guided stereotactic radiosurgery system. Methods: The localization accuracy was determined by delivering radiation to an end-to-end prototype phantom. The phantom was scanned with 0.8 mm slice thickness. The treatment isocenter was placed at the center of the phantom. In the treatment room, CBCT images of the phantom (kVp = 77, mAs = 1022, slice thickness 1 mm) were acquired and registered to the reference CT images. 6D couch corrections were applied based on the registration results. Electronic Portal Imaging Device (EPID)-based Winston-Lutz (WL) tests were performed to quantify the targeting accuracy of the system at 15 combinations of gantry, collimator, and couch positions. The images were analyzed using two different methods. a) The classic method: the deviation was calculated by measuring the radial distance between the center of the central BB and the full width at half maximum of the radiation field. b) The center pixel method: since the imager projection offset from the treatment isocenter was known from the IsoCal calibration, the deviation was determined between the center of the BB and the central pixel of the imager panel. Results: Using the automatic registration method to localize the phantom and the classic method of measuring the deviation of the BB center, the mean and standard deviation of the radial distance were 0.44 ± 0.25, 0.47 ± 0.26, and 0.43 ± 0.13 mm for the jaw-, MLC-, and cone-defined field sizes, respectively. When the center pixel method was used, the mean and standard deviation were 0.32 ± 0.18, 0.32 ± 0.17, and 0.32 ± 0.19 mm, respectively. Conclusion: Our results demonstrate that the center pixel method accurately analyzes the WL images to evaluate the targeting accuracy of the radiosurgery system.
The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE, from the American Cancer Society.
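In both WL analysis methods above, the measured deviation reduces to a radial distance between two image points (BB centre versus field centre, or BB centre versus the offset-corrected central pixel). A hedged sketch with hypothetical coordinates in mm:

```python
import math
import statistics

def radial_deviation(bb_center, reference):
    """2-D Euclidean distance between the detected BB centre and a reference
    point: the field centre in the classic method, or the panel's central
    pixel (corrected by the known projection offset) in the center pixel method."""
    return math.hypot(bb_center[0] - reference[0], bb_center[1] - reference[1])

def summarize(deviations_mm):
    """Mean and sample standard deviation over a set of WL images."""
    return statistics.fmean(deviations_mm), statistics.stdev(deviations_mm)
```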
Ferreri, Matthew; Slagley, Jeremy; Felker, Daniel
2015-01-01
This study compared four treatment protocols to reduce airborne composite fiber particulates during simulated aircraft crash recovery operations. Four different treatments were applied to determine effectiveness in reducing airborne composite fiber particulates as compared to a "no treatment" protocol. Both "gold standard" gravimetric methods and real-time instruments were used to describe mass per volume concentration, particle size distribution, and surface area. The treatment protocols were applying water, wetted water, wax, or aqueous film-forming foam (AFFF) to both burnt and intact tickets of aircraft composite skin panels. The tickets were then cut using a small high-speed rotary tool to simulate crash recovery operations. Aerosol test chamber. None. Airborne particulate control treatments. Measures included concentration units of milligrams per cubic meter of air, particle size distribution as described by both count median diameter and mass median diameter and geometric standard deviation of particles in micrometers, and surface area concentration in units of square micrometers per cubic centimeter. Finally, a Monte Carlo simulation was run on the particle size distribution results. Comparison was made via one-way analysis of variance. A significant difference (p < 0.0001) in idealized particle size distribution was found between the water and wetted water treatments as compared to the other treatments for burnt tickets. Emergency crash recovery operations should include a treatment of the debris with water or wetted water. The resulting increase in particle size will make respiratory protection more effective in protecting the response crews.
Size exclusion deep bed filtration: Experimental and modelling uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badalyan, Alexander, E-mail: alexander.badalyan@adelaide.edu.au; You, Zhenjiang; Aji, Kaiser
A detailed uncertainty analysis associated with carboxyl-modified latex particle capture in glass-bead-formed porous media enabled verification of two theoretical stochastic models for prediction of particle retention due to size exclusion. At the beginning of this analysis it is established that size exclusion is the dominant particle capture mechanism in the present study: the calculated significant repulsive Derjaguin-Landau-Verwey-Overbeek potential between latex particles and glass beads indicates their mutual repulsion, thus fulfilling the necessary condition for size exclusion. Applying the linear uncertainty propagation method in the form of a truncated Taylor series expansion, combined standard uncertainties (CSUs) in normalised suspended particle concentrations are calculated using CSUs in experimentally determined parameters such as the inlet volumetric flowrate of suspension, particle number in suspensions, particle concentrations in inlet and outlet streams, and particle and pore throat size distributions. Weathering of glass beads in highly alkaline solutions does not appreciably change the particle size distribution and is therefore not considered an additional contributor to the weighted mean particle radius and the corresponding weighted mean standard deviation. The weighted mean particle radius and the log-normal mean pore throat radius are characterised by the highest CSUs among all experimental parameters, translating to a high CSU in the jamming ratio factor (dimensionless particle size). Normalised suspended particle concentrations calculated via the two theoretical models are characterised by higher CSUs than those for experimental data. The model accounting for the fraction of inaccessible flow as a function of latex particle radius excellently predicts normalised suspended particle concentrations over the whole range of jamming ratios.
The presented uncertainty analysis can also be used for comparison of intra- and inter-laboratory particle size exclusion data.
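The truncated-Taylor-series propagation used above combines uncorrelated standard uncertainties as u_c² = Σᵢ (∂f/∂xᵢ)² uᵢ². A generic sketch under stated assumptions: central finite differences stand in for the analytic partial derivatives, and the function and argument names are illustrative:

```python
def combined_standard_uncertainty(f, x, u, rel_step=1e-6):
    """First-order (truncated Taylor series) propagation of uncorrelated
    standard uncertainties: u_c^2 = sum_i (df/dx_i)^2 * u_i^2.

    f: scalar function of a list of parameters
    x: nominal parameter values
    u: combined standard uncertainties of each parameter
    """
    uc2 = 0.0
    for i in range(len(x)):
        step = rel_step * (abs(x[i]) or 1.0)
        xp, xm = list(x), list(x)
        xp[i] += step
        xm[i] -= step
        dfdx = (f(xp) - f(xm)) / (2.0 * step)   # central-difference derivative
        uc2 += (dfdx * u[i]) ** 2
    return uc2 ** 0.5
```

For a product such as a normalised concentration, the result matches the usual quadrature of relative uncertainties.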
Anderson, P J; Wilson, J D; Hiller, F C
1989-07-01
Accurate measurement of cigarette smoke particle size distribution is important for estimation of lung deposition. Most prior investigators have reported a mass median diameter (MMD) in the size range of 0.3 to 0.5 micron, with a small geometric standard deviation (GSD), indicating few ultrafine (less than 0.1 micron) particles. A few studies, however, have suggested the presence of ultrafine particles by reporting a smaller count median diameter (CMD). Part of this disparity may be due to the inefficiency of previous sizing methods in measuring the ultrafine size range. An electrical aerosol analyzer (EAA) was used to evaluate the size distribution of smoke from standard research cigarettes, commercial filter cigarettes, and marijuana cigarettes with different delta-9-tetrahydrocannabinol contents. Four 35-cm3, 2-s puffs were generated at 60-s intervals, rapidly diluted, and passed through a charge neutralizer and into a 240-L chamber. The size distribution for six cigarettes of each type was measured, the CMD and GSD were determined from a computer-generated log-probability plot, and the MMD was calculated. The size distribution parameters obtained were similar for all cigarettes tested, with an average CMD of 0.1 micron, an MMD of 0.38 micron, and a GSD of 2.0. The MMD found using the EAA is similar to that previously reported, but the CMD is distinctly smaller and the GSD larger, indicating the presence of many more ultrafine particles. These results may explain the disparity of CMD values found in existing data. Ultrafine particles are of toxicologic importance because their respiratory tract deposition is significantly higher than for particles 0.3 to 0.5 micron and because their large surface area facilitates adsorption and delivery of potentially toxic gases to the lung.
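For a lognormal size distribution, the CMD, GSD, and MMD quoted above are linked by the Hatch-Choate relation MMD = CMD · exp(3 ln²GSD), so the internal consistency of the reported values can be checked. A sketch; the ~0.42 µm it returns for CMD = 0.1 µm and GSD = 2.0 is reasonably close to the reported 0.38 µm:

```python
import math

def cmd_to_mmd(cmd_um, gsd):
    """Hatch-Choate conversion from count median diameter (um) to mass
    median diameter for a lognormal aerosol size distribution."""
    return cmd_um * math.exp(3.0 * math.log(gsd) ** 2)
```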
Collinearity in Least-Squares Analysis
ERIC Educational Resources Information Center
de Levie, Robert
2012-01-01
How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…
Robust Confidence Interval for a Ratio of Standard Deviations
ERIC Educational Resources Information Center
Bonett, Douglas G.
2006-01-01
Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…
Estimating maize water stress by standard deviation of canopy temperature in thermal imagery
USDA-ARS?s Scientific Manuscript database
A new crop water stress index using standard deviation of canopy temperature as an input was developed to monitor crop water status. In this study, thermal imagery was taken from maize under various levels of deficit irrigation treatments in different crop growing stages. The Expectation-Maximizatio...
Experience in use of optical theodolite for machine construction
NASA Astrophysics Data System (ADS)
Shereshevskiy, L. M.
1984-02-01
An optical theodolite, an instrument of small size and weight featuring a high-precision horizontal dial, was successfully used in the production of forging and pressing equipment at the Voronezh plant. Such a TV-1 theodolite, together with a contact-type indicating device and a mechanism for centering the machined part, is included in a turret goniometer for angular alignment and control of cutting operations. Its micrometer has 1″ (arc-second) scale divisions, and the instrument is designed to give readings with a high degree of stability and reproducibility, with the standard deviation of a single measurement not exceeding 5″. It is particularly useful in the production of parts with variable spacing and cross section of grooves or slots, including curvilinear ones. With a universal adapter plate on which guide prisms and an interchangeable gauge pin are mounted, this theodolite can also be used in the production of large bevel gears: the same instrument serves a wide range of gear sizes, diametral pitches, and tooth profiles. Built largely from standard components, this theodolite can be easily assembled at any manufacturing plant.
NASA Astrophysics Data System (ADS)
Jiang, Jingkun; Chen, Da-Ren; Biswas, Pratim
2007-07-01
A flame aerosol reactor (FLAR) was developed to synthesize nanoparticles with desired properties (crystal phase and size) that could be independently controlled. The methodology was demonstrated for TiO2 nanoparticles, and this is the first time that large sets of samples with the same size but different crystal phases (six different ratios of anatase to rutile in this work) were synthesized. The degree of TiO2 nanoparticle agglomeration was determined by comparing the primary particle size distribution measured by scanning electron microscopy (SEM) to the mobility-based particle size distribution measured by online scanning mobility particle spectrometry (SMPS). By controlling the flame aerosol reactor conditions, both spherical unagglomerated particles and highly agglomerated particles were produced. To produce monodisperse nanoparticles, a high throughput multi-stage differential mobility analyser (MDMA) was used in series with the flame aerosol reactor. Nearly monodisperse nanoparticles (geometric standard deviation less than 1.05) could be collected in sufficient mass quantities (of the order of 10 mg) in reasonable time (1 h) that could be used in other studies such as determination of functionality or biological effects as a function of size.
NASA Astrophysics Data System (ADS)
Vázquez-Tarrío, Daniel; Borgniet, Laurent; Liébault, Frédéric; Recking, Alain
2017-05-01
This paper explores the potential of unmanned aerial system (UAS) optical aerial imagery to characterize grain roughness and size distribution in a braided, gravel-bed river (Vénéon River, French Alps). With this aim in view, a Wolman field campaign (19 samples) and five UAS surveys were conducted over the Vénéon braided channel during summer 2015. The UAS consisted of a small quadcopter carrying a GoPro camera. Structure-from-Motion (SfM) photogrammetry was used to extract dense and accurate three-dimensional point clouds. Roughness descriptors (roughness heights, standard deviation of elevation) were computed from the SfM point clouds and were correlated with the median grain size of the Wolman samples. A strong relationship was found between UAS-SfM-derived grain roughness and Wolman grain size. The procedure employed has potential for the rapid and continuous characterization of grain size distribution in exposed bars of gravel-bed rivers. The workflow described in this paper has been successfully used to produce spatially continuous grain size information on exposed gravel bars and to explore textural changes following flow events.
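A roughness descriptor of the kind correlated with the Wolman grain sizes above is the standard deviation of elevation about a locally fitted plane. A minimal NumPy sketch; the point format and plane-detrending choice are assumptions for illustration, not the authors' exact workflow:

```python
import numpy as np

def patch_roughness(points):
    """Standard deviation of elevation residuals about a least-squares plane
    fitted to a patch of (x, y, z) points from an SfM point cloud."""
    pts = np.asarray(points, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]   # model z ~ a*x + b*y + c
    coeff, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coeff
    return float(residuals.std(ddof=1))
```

On a perfectly planar patch the roughness is zero; protruding grains raise it, which is what correlates with the median grain size of the Wolman samples.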
Production of Large-Particle-Size Monodisperse Latexes in Microgravity
NASA Technical Reports Server (NTRS)
Vanderhoff, J. W.; Micale, F. J.; El-Aasser, M. S.; Kornfeld, M.
1985-01-01
A latex is a suspension of very tiny (micrometer-size) plastic spheres in water, stabilized by emulsifiers. The growth of billions of these tiny plastic spheres to sizes larger than can be grown on Earth is attempted while keeping all of them exactly the same size and perfectly spherical. Thus far, on several of the Monodisperse Latex Reactor (MLR) flights, the latex spheres have been returned to Earth with standard deviations of better than 1.4%. In microgravity, the absence of buoyancy effects has allowed growth of the spheres up to 30 micrometers in diameter thus far. The MLR has now flown 5 times on the Shuttle and has produced the first commercial space product, that is, the first commercial material ever manufactured in space and marketed on Earth. Once it is demonstrated that these large-particle-size monodisperse latexes can be routinely produced in quantity and quality, they can be marketed for many types of scientific applications. They can be used in biomedical research for such things as drug carriers and tracers in the body, human and animal blood flow studies, membrane and pore-sizing in the body, and medical diagnostic tests.
PLUME-MoM 1.0: A new integral model of volcanic plumes based on the method of moments
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, M.; Neri, A.; Barsotti, S.
2015-08-01
In this paper a new integral mathematical model for volcanic plumes, named PLUME-MoM, is presented. The model describes the steady-state dynamics of a plume in a 3-D coordinate system, accounting for continuous variability in particle size distribution of the pyroclastic mixture ejected at the vent. Volcanic plumes are composed of pyroclastic particles of many different sizes ranging from a few microns up to several centimeters and more. A proper description of such a multi-particle nature is crucial when quantifying changes in grain-size distribution along the plume and, therefore, for better characterization of source conditions of ash dispersal models. The new model is based on the method of moments, which allows for a description of the pyroclastic mixture dynamics not only in the spatial domain but also in the space of parameters of the continuous size distribution of the particles. This is achieved by formulation of fundamental transport equations for the multi-particle mixture with respect to the different moments of the grain-size distribution. Different formulations, in terms of the distribution of the particle number, as well as of the mass distribution expressed in terms of the Krumbein log scale, are also derived. Comparison between the new moments-based formulation and the classical approach, based on the discretization of the mixture in N discrete phases, shows that the new model allows for the same results to be obtained with a significantly lower computational cost (particularly when a large number of discrete phases is adopted). 
Application of the new model, coupled with uncertainty quantification and global sensitivity analyses, enables the investigation of the response of four key output variables (mean and standard deviation of the grain-size distribution at the top of the plume, plume height and amount of mass lost by the plume during the ascent) to changes in the main input parameters (mean and standard deviation) characterizing the pyroclastic mixture at the base of the plume. Results show that, for the range of parameters investigated and without considering interparticle processes such as aggregation or comminution, the grain-size distribution at the top of the plume is remarkably similar to that at the base and that the plume height is only weakly affected by the parameters of the grain distribution. The adopted approach can be potentially extended to the consideration of key particle-particle effects occurring in the plume including particle aggregation and fragmentation.
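The moments that PLUME-MoM transports are weighted sums over the grain-size distribution, and the mean and standard deviation reported at the plume top follow from the first two moments. A simplified discrete sketch on the Krumbein phi scale; the discretisation is illustrative, not the model's actual quadrature:

```python
import numpy as np

def raw_moments(phi, mass_fraction, orders=(0, 1, 2)):
    """Raw moments M_k = sum_j phi_j**k * f_j of a discretized grain-size
    distribution on the Krumbein (log2) phi scale."""
    phi = np.asarray(phi, dtype=float)
    f = np.asarray(mass_fraction, dtype=float)
    f = f / f.sum()                          # normalise to unit total mass
    return [float(np.sum(phi ** k * f)) for k in orders]

def mean_and_std(phi, mass_fraction):
    """Mean and standard deviation recovered from the first two moments."""
    m0, m1, m2 = raw_moments(phi, mass_fraction)
    mean = m1 / m0
    return mean, (m2 / m0 - mean ** 2) ** 0.5
```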
Acoustic response variability in automotive vehicles
NASA Astrophysics Data System (ADS)
Hills, E.; Mace, B. R.; Ferguson, N. S.
2009-03-01
A statistical analysis of a series of measurements of the audio-frequency response of a large set of automotive vehicles is presented: a small hatchback model with both a three-door (411 vehicles) and five-door (403 vehicles) derivative and a mid-sized family five-door car (316 vehicles). The sets included vehicles of various specifications, engines, gearboxes, interior trim, wheels and tyres. The tests were performed in a hemianechoic chamber with the temperature and humidity recorded. Two tests were performed on each vehicle and the interior cabin noise measured. In the first, the excitation was acoustically induced by sets of external loudspeakers. In the second test, predominantly structure-borne noise was induced by running the vehicle at a steady speed on a rough roller. For both types of excitation, it is seen that the effects of temperature are small, indicating that manufacturing variability is larger than that due to temperature for the tests conducted. It is also observed that there are no significant outlying vehicles, i.e. there are at most only a few vehicles that consistently have the lowest or highest noise levels over the whole spectrum. For the acoustically excited tests, measured 1/3-octave noise reduction levels typically have a spread of 5 dB or so and the normalised standard deviation of the linear data is typically 0.1 or higher. Regarding the statistical distribution of the linear data, a lognormal distribution is a somewhat better fit than a Gaussian distribution for lower 1/3-octave bands, while the reverse is true at higher frequencies. For the distribution of the overall linear levels, a Gaussian distribution is generally the most representative. As a simple description of the response variability, it is sufficient for this series of measurements to assume that the acoustically induced airborne cabin noise is best described by a Gaussian distribution with a normalised standard deviation between 0.09 and 0.145. 
There is generally considerable variability in the roller-induced noise, with individual 1/3-octave levels varying by typically 15 dB or so and with the normalised standard deviation being in the range 0.2-0.35 or more. These levels are strongly affected by wheel rim and tyre constructions. For vehicles with nominally identical wheel rims and tyres, the normalised standard deviation for 1/3-octave levels in the frequency range 40-600 Hz is 0.2 or so. The distribution of the linear roller-induced noise level in each 1/3-octave frequency band is well described by a lognormal distribution as is the overall level. As a simple description of the response variability, it is sufficient for this series of measurements to assume that the roller-induced road noise is best described by a lognormal distribution with a normalised standard deviation of 0.2 or so, but that this can be significantly affected by the tyre and rim type, especially at lower frequencies.
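The normalised standard deviation quoted throughout is the coefficient of variation of the linear (not dB) band levels across the vehicle set. A minimal sketch, including the dB-to-linear conversion that precedes it; the helper names are illustrative:

```python
import numpy as np

def normalised_std(linear_levels):
    """Coefficient of variation (sigma / mean) of linear band levels
    measured across a set of nominally identical vehicles."""
    x = np.asarray(linear_levels, dtype=float)
    return float(x.std(ddof=1) / x.mean())

def db_to_linear(levels_db):
    """Convert 1/3-octave levels in dB to linear (power-proportional) units."""
    return 10.0 ** (np.asarray(levels_db, dtype=float) / 10.0)
```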
Alagar, Ananda Giri Babu; Mani, Ganesh Kadirampatti; Karunakaran, Kaviarasu
2016-01-08
Small fields (smaller than 4 × 4 cm²) are used in stereotactic and conformal treatments, where heterogeneity is normally present. Since dose calculation in both small fields and heterogeneity often involves more discrepancy, algorithms used by treatment planning systems (TPS) should be evaluated for achieving better treatment results. This report evaluates the accuracy of four model-based algorithms against measurement: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-XiO, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse. Measurements were done using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons with square field sizes ranging from 1 × 1 to 4 × 4 cm². Each heterogeneity was introduced individually at two different depths from the depth of dose maximum (Dmax), one setup nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup was measured separately and compared with that calculated by each TPS algorithm for the same setup. The percentage normalized root-mean-squared deviation (%NRMSD), which represents the whole CADD curve's deviation from the measured curve, was calculated. For air and lung heterogeneity, for both 6 and 15 MV, all algorithms show maximum deviation for the 1 × 1 cm² field size, gradually reducing as the field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller for 15 MV irrespective of setup. In all heterogeneity setups, the 1 × 1 cm² field showed maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity is nearer to Dmax than when the same heterogeneity is farther from it. Also, all algorithms show maximum deviation in lower-density materials compared with high-density materials.
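The %NRMSD figure of merit compares a whole calculated depth-dose curve against the measured one. A hedged sketch: the normalisation by the maximum of the measured curve is an assumption for illustration, and the paper may normalise differently:

```python
import numpy as np

def pct_nrmsd(measured, calculated):
    """Percentage normalised root-mean-squared deviation between a measured
    and a TPS-calculated central-axis percentage depth-dose curve, with the
    RMS deviation normalised here by the measured maximum."""
    m = np.asarray(measured, dtype=float)
    c = np.asarray(calculated, dtype=float)
    rmsd = np.sqrt(np.mean((c - m) ** 2))
    return float(100.0 * rmsd / m.max())
```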
Beam uniformity of flat top lasers
NASA Astrophysics Data System (ADS)
Chang, Chao; Cramer, Larry; Danielson, Don; Norby, James
2015-03-01
Many beams output from standard commercial lasers are multi-mode, with each mode having a different shape and width. They show an overall non-homogeneous energy distribution across the spot size, and there may be satellite structures, halos, and other deviations from beam uniformity. However, many scientific, industrial, and medical applications require a flat-top spatial energy distribution, high uniformity in the plateau region, and the complete absence of hot spots. Reliable standard methods for the evaluation of beam quality are therefore of great importance, both for correct characterization of the laser for its intended application and for tight quality control in laser manufacturing. The International Organization for Standardization (ISO) has published standard procedures and definitions for this purpose, but these have not been widely adopted by commercial laser manufacturers, largely because they can be unreliable: an unrepresentative single-pixel value can seriously distort the result. We propose a metric of beam uniformity, a way of visualizing beam profiles, procedures to automatically detect hot spots and beam structures, and application examples from our high-energy laser production.
YALE NATURAL RADIOCARBON MEASUREMENTS. PART VI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuiver, M.; Deevey, E.S.
1961-01-01
Most of the measurements made since publication of Yale V are included; some measurements, such as a series collected in Greenland, are withheld pending additional information or field work that will make better interpretations possible. In addition to radiocarbon dates of geologic and/or archaeologic interest, recent assays are given of ¹⁴C in lake waters and other lacustrine materials, now normalized for ¹³C content. The newly accepted convention is followed in expressing normalized ¹⁴C values as Δ = δ¹⁴C − (2δ¹³C + 50)(1 + δ¹⁴C/1000), where Δ is the per mil deviation of the ¹⁴C of the sample from any contemporary standard (whether organic or a carbonate) after correction of sample and/or standard for real age, for the Suess effect, for normal isotopic fractionation, and for deviations of the ¹⁴C content of the age- and pollution-corrected 19th-century wood standard from that of 95% of the NBS oxalic acid standard; δ¹⁴C is the measured deviation from 95% of the NBS standard, and δ¹³C is the deviation from the NBS limestone standard, both in per mil. These assays are variously affected by artificial ¹⁴C resulting from nuclear tests. (auth)
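The normalisation convention quoted above can be written directly as code. With the wood standard's δ¹³C of −25 per mil the correction term vanishes, which is a useful sanity check. All quantities are in per mil:

```python
def big_delta(delta_c14, delta_c13):
    """Normalised radiocarbon value per the convention quoted above:
    Delta = d14C - (2*d13C + 50) * (1 + d14C/1000), everything in per mil."""
    return delta_c14 - (2.0 * delta_c13 + 50.0) * (1.0 + delta_c14 / 1000.0)
```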
The Radiological Physics Center's standard dataset for small field size output factors.
Followill, David S; Kry, Stephen F; Qin, Lihong; Lowenstein, Jessica; Molineu, Andrea; Alvarez, Paola; Aguirre, Jose Francisco; Ibbott, Geoffrey S
2012-08-08
Delivery of accurate intensity-modulated radiation therapy (IMRT) or stereotactic radiotherapy depends on a multitude of steps in the treatment delivery process. These steps range from imaging of the patient to dose calculation to machine delivery of the treatment plan. Within the treatment planning system's (TPS) dose calculation algorithm, various unique small field dosimetry parameters are essential, such as multileaf collimator modeling and field size dependence of the output. One of the largest challenges in this process is determining accurate small field size output factors. The Radiological Physics Center (RPC), as part of its mission to ensure that institutions deliver comparable and consistent radiation doses to their patients, conducts on-site dosimetry review visits to institutions. As a part of the on-site audit, the RPC measures the small field size output factors as might be used in IMRT treatments, and compares the resulting field size dependent output factors to values calculated by the institution's treatment planning system (TPS). The RPC has gathered multiple small field size output factor datasets for X-ray energies ranging from 6 to 18 MV from Varian, Siemens, and Elekta linear accelerators. These datasets were measured at 10 cm depth and ranged from 10 × 10 cm² to 2 × 2 cm². The field sizes were defined by the MLC, and for the Varian machines the secondary jaws were maintained at 10 × 10 cm². The RPC measurements were made with a micro-ion chamber whose volume was small enough to gather a full ionization reading even for the 2 × 2 cm² field size. The RPC-measured output factors are tabulated and are reproducible with standard deviations (SD) ranging from 0.1% to 1.5%, while the institutions' calculated values had a much larger SD range, ranging up to 7.9% [corrected]. The absolute average percent differences were greater for the 2 × 2 cm² than for the other field sizes.
The RPC's measured small field output factors provide institutions with a standard dataset against which to compare their TPS calculated values. Any discrepancies noted between the standard dataset and calculated values should be investigated with careful measurements and with attention to the specific beam model.
Yanagihara, Nobuyuki; Seki, Meikan; Nakano, Masahiro; Hachisuga, Toru; Goto, Yukio
2014-06-01
Disturbance of autonomic nervous activity has been thought to play a role in the climacteric symptoms of postmenopausal women. This study was therefore designed to investigate the relationship between autonomic nervous activity and climacteric symptoms in postmenopausal Japanese women. The autonomic nervous activity of 40 Japanese women with climacteric symptoms and 40 Japanese women without climacteric symptoms was measured by power spectral analysis of heart rate variability using a standard hexagonal radar chart. The scores for climacteric symptoms were determined using the simplified menopausal index. Sympathetic excitability and irritability, as well as the standard deviation of mean R-R intervals in the supine position, were significantly (P < 0.01, 0.05, and 0.001, respectively) decreased in women with climacteric symptoms. There was a negative correlation between the standard deviation of mean R-R intervals in the supine position and the simplified menopausal index score. The lack of control for potential confounding variables was a limitation of this study. In climacteric women, the standard deviation of mean R-R intervals in the supine position is negatively correlated with the simplified menopausal index score.
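The HRV index used here, the standard deviation of R-R intervals (commonly called SDNN), and its correlation with the symptom score can be sketched as follows; the interval values and pairing of per-subject quantities are hypothetical:

```python
import statistics

def sdnn(rr_ms):
    """Standard deviation of R-R intervals (SDNN), a time-domain HRV index."""
    return statistics.stdev(rr_ms)

def pearson_r(x, y):
    """Pearson correlation, e.g. between per-subject SDNN and the simplified
    menopausal index score."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```

A negative `pearson_r` over the cohort corresponds to the reported finding: lower SDNN goes with higher symptom scores.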
Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip
2015-01-01
In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777
Community covariates of malnutrition-based mortality among older adults.
Lee, Matthew R; Berthelot, Emily R
2010-05-01
The purpose of this study was to identify community level covariates of malnutrition-based mortality among older adults. A community level framework was delineated which explains rates of malnutrition-related mortality among older adults as a function of community levels of socioeconomic disadvantage, disability, and social isolation among members of this group. County level data on malnutrition mortality of people 65 years of age and older for the period 2000-2003 were drawn from the CDC WONDER system databases. County level measures of older adult socioeconomic disadvantage, disability, and social isolation were derived from the 2000 US Census of Population and Housing. Negative binomial regression models adjusting for the size of the population at risk, racial composition, urbanism, and region were estimated to assess the relationships among these indicators. Results from negative binomial regression analysis yielded the following: a standard deviation increase in socioeconomic/physical disadvantage was associated with a 12% increase in the rate of malnutrition mortality among older adults (p < 0.001), whereas a standard deviation increase in social isolation was associated with a 5% increase in malnutrition mortality among older adults (p < 0.05). Community patterns of malnutrition based mortality among older adults are partly a function of levels of socioeconomic and physical disadvantage and social isolation among older adults. 2010 Elsevier Inc. All rights reserved.
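In a negative binomial model with a log link, the percent change in the expected rate for a one-standard-deviation covariate increase is (e^(β·SD) − 1) × 100. The coefficients below are hypothetical values chosen only to reproduce increases of roughly the reported magnitudes; they are not from the paper:

```python
import math

def pct_change_per_sd(beta_per_unit, sd):
    """Percent change in the expected count for a one-SD increase in a
    covariate, given a fitted log-link (e.g., negative binomial) coefficient."""
    return (math.exp(beta_per_unit * sd) - 1.0) * 100.0

# Hypothetical coefficients for standardized covariates (SD = 1)
print(round(pct_change_per_sd(0.1133, 1.0), 1))  # → 12.0
print(round(pct_change_per_sd(0.0488, 1.0), 1))  # → 5.0
```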
A tale of two climbers: hypothermia, death, and survival on Mount Everest.
Moore, G W Kent; Semple, John L
2012-03-01
Hypothermia is an acknowledged risk for those who venture into high altitude regions. There is, however, little quantitative information on this risk that can be used to implement mitigation strategies. Here we provide an analysis of the meteorological and hypothermic risk parameters, wind chill temperature, and facial frostbite time, during the spring 2006 Mount Everest climbing season. This season was marked by two high profile events where a solo climber was forced to spend the night in highly exposed conditions near the summit. One climber survived, while the other did not. Although this retrospective examination of two individual cases admittedly has a small sample size, and other factors undoubtedly contributed to the difference in outcomes, we show that the wind chill temperature and facial frostbite time experienced by the two climbers were dramatically different. In particular, the climber who did not survive experienced conditions that were approximately one standard deviation more severe than usual for that time of the year, while the climber who survived experienced conditions that were approximately one standard deviation less severe than usual. This suggests that the environmental conditions associated with hypothermia played an important role in the outcomes. This report confirms the importance of providing quantitative guidance to climbers regarding the risk of hypothermia on high mountains.
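Wind chill temperature is commonly computed with the Environment Canada / US National Weather Service index. The conditions plugged in below are hypothetical illustrations, not the values observed during the 2006 season:

```python
def wind_chill_c(temp_c, wind_kmh):
    """Environment Canada / NWS wind chill index (valid for T <= 10 deg C and
    wind speed >= 4.8 km/h measured at 10 m height)."""
    v = wind_kmh ** 0.16
    return 13.12 + 0.6215 * temp_c - 11.37 * v + 0.3965 * temp_c * v

# Hypothetical near-summit conditions (illustrative values only)
print(round(wind_chill_c(-30.0, 60.0), 1))  # → -50.3
```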
Verification of micro-scale photogrammetry for smooth three-dimensional object measurement
NASA Astrophysics Data System (ADS)
Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard
2017-05-01
By using sub-millimetre laser speckle pattern projection we show that photogrammetry systems are able to measure smooth three-dimensional objects with surface height deviations less than 1 μm. The projection of laser speckle patterns allows correspondences on the surface of smooth spheres to be found, and as a result, verification artefacts with low surface height deviations were measured. A combination of VDI/VDE and ISO standards was also utilised to provide a complete verification method and determine the quality parameters for the system under test. Using the proposed method applied to a photogrammetry system, a 5 mm radius sphere was measured with an expanded uncertainty of 8.5 μm for sizing errors and 16.6 μm for form errors, with a 95% confidence interval. Sphere spacing lengths between 6 mm and 10 mm were also measured by the photogrammetry system and were found to have expanded uncertainties of around 20 μm with a 95% confidence interval.
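An expanded uncertainty at roughly 95% confidence is conventionally U = k·s with coverage factor k = 2. A minimal sketch with made-up sphere-sizing errors (not the paper's data):

```python
import numpy as np

def expanded_uncertainty(errors, k=2.0):
    """Expanded uncertainty U = k * s of a set of measurement errors
    (k = 2 gives an approximately 95% coverage interval)."""
    return k * np.std(errors, ddof=1)

# Hypothetical sphere-sizing errors (mm) from repeated measurements
err = np.array([0.004, -0.003, 0.005, -0.004, 0.002, -0.005, 0.003, -0.002])
print(round(expanded_uncertainty(err) * 1000, 1))  # in micrometres
```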
HU deviation in lung and bone tissues: Characterization and a corrective strategy.
Ai, Hua A; Meier, Joseph G; Wendt, Richard E
2018-05-01
In the era of precision medicine, quantitative applications of x-ray Computed Tomography (CT) are on the rise. These require accurate measurement of the CT number, also known as the Hounsfield Unit (HU). In this study, we evaluated the effect of patient attenuation-induced beam hardening of the x-ray spectrum on the accuracy of the HU values, and a strategy to correct for the resulting deviations in the measured HU values. A CIRS electron density phantom was scanned on a Siemens Biograph mCT Flow CT scanner and a GE Discovery 710 CT scanner using standard techniques that are employed in the clinic to assess the HU deviation caused by beam hardening in different tissue types. In addition, an anthropomorphic ATOM adult male upper torso phantom was scanned on the GE Discovery 710 scanner. Various amounts of Superflab bolus material were wrapped around the phantoms to simulate different patient sizes. The mean HU values that were measured in the phantoms were evaluated as a function of the water-equivalent area (A_w), a parameter that is described in the report of AAPM Task Group 220. A strategy by which to correct the HU values was developed and tested. The variation in the HU values in the anthropomorphic ATOM phantom under different simulated body sizes, both before and after correction, was compared, with a focus on the lung and bone tissues. Significant HU deviations that depended on the simulated patient size were observed. A positive correlation between HU and A_w was observed for tissue types that have an HU of less than zero, while a negative correlation was observed for tissue types with HU values that are greater than zero. The magnitude of the difference increases as the underlying attenuation property deviates further away from that of water. In the electron density phantom study, the maximum observed HU differences between the measured and reference values in the cortical bone and lung materials were 426 and 94 HU, respectively. 
In the anthropomorphic phantom study, the HU difference was as much as -136.7 ± 8.2 HU (or -7.6% ± 0.5% of the attenuation coefficient, AC) in the spine region, and up to 37.6 ± 1.6 HU (or 17.3% ± 0.8% of AC) in the lung region between scenarios that simulated normal and obese patients. Our HU correction method reduced the HU deviations to 8.5 ± 9.1 HU (or 0.5% ± 0.5%) for bone and to -6.4 ± 1.7 HU (or -3.0% ± 0.8%) for lung. The HU differences in the soft tissue materials before and after the correction were insignificant. Visual improvement of the tissue contrast was also achieved in the data of the simulated obese patient. The effect of a patient's size on the HU values of lung and bone tissues can be significant. The accuracy of those HU values was substantially improved by the correction method that was developed for and employed in this study. © 2018 American Association of Physicists in Medicine.
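The correction strategy maps measured HU back toward a reference patient size. As a rough sketch (not the authors' exact method), one can model the measured HU of a tissue as linear in the water-equivalent area A_w and subtract the size-dependent term; all numbers below are hypothetical:

```python
import numpy as np

# Hedged sketch: model measured HU as linear in water-equivalent area A_w,
# then map each measurement back to a chosen reference A_w_ref.
def fit_hu_vs_aw(aw, hu):
    slope, intercept = np.polyfit(aw, hu, 1)
    return slope, intercept

def correct_hu(hu_measured, aw, slope, aw_ref):
    return hu_measured - slope * (aw - aw_ref)

aw = np.array([400.0, 600.0, 800.0, 1000.0])      # cm^2, hypothetical
hu_bone = np.array([900.0, 830.0, 760.0, 690.0])  # HU falls as A_w grows
s, b = fit_hu_vs_aw(aw, hu_bone)
corrected = correct_hu(hu_bone, aw, s, aw_ref=400.0)
print(corrected)  # all pulled back toward the reference-size value
```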
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sleiman, Mohamad; Chen, Sharon; Gilbert, Haley E.
A laboratory method to simulate natural exposure of roofing materials has been reported in a companion article. Here in the current article, we describe the results of an international, nine-participant interlaboratory study (ILS) conducted in accordance with ASTM Standard E691-09 to establish the precision and reproducibility of this protocol. The accelerated soiling and weathering method was applied four times by each laboratory to replicate coupons of 12 products representing a wide variety of roofing categories (single-ply membrane, factory-applied coating (on metal), bare metal, field-applied coating, asphalt shingle, modified-bitumen cap sheet, clay tile, and concrete tile). Participants reported initial and laboratory-aged values of solar reflectance and thermal emittance. Measured solar reflectances were consistent within and across eight of the nine participating laboratories. Measured thermal emittances reported by six participants exhibited comparable consistency. For solar reflectance, the accelerated aging method is both repeatable and reproducible within an acceptable range of standard deviations: the repeatability standard deviation sr ranged from 0.008 to 0.015 (relative standard deviation of 1.2–2.1%) and the reproducibility standard deviation sR ranged from 0.022 to 0.036 (relative standard deviation of 3.2–5.8%). The ILS confirmed that the accelerated aging method can be reproduced by multiple independent laboratories with acceptable precision. In conclusion, this study supports the adoption of the accelerated aging practice to speed the evaluation and performance rating of new cool roofing materials.
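For a balanced design, ASTM E691 computes the repeatability SD sr from the pooled within-lab variance and the reproducibility SD sR by adding a between-lab component estimated from the spread of the lab means. A minimal sketch with hypothetical replicate data from three labs:

```python
import numpy as np

def e691_precision(data):
    """Repeatability (sr) and reproducibility (sR) per the ASTM E691 model.
    `data` is labs x replicates (balanced design)."""
    data = np.asarray(data, float)
    p, n = data.shape
    cell_means = data.mean(axis=1)
    sr2 = np.mean(data.var(axis=1, ddof=1))          # pooled within-lab variance
    sL2 = max(cell_means.var(ddof=1) - sr2 / n, 0.0) # between-lab component
    return np.sqrt(sr2), np.sqrt(sr2 + sL2)

# Hypothetical solar-reflectance replicates from three labs
data = [[0.60, 0.61, 0.60, 0.61],
        [0.63, 0.62, 0.63, 0.62],
        [0.58, 0.59, 0.58, 0.59]]
sr, sR = e691_precision(data)
print(sr < sR)  # between-lab spread adds to the within-lab spread
```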
Kos, Gregor; Lohninger, Hans; Mizaikoff, Boris; Krska, Rudolf
2007-07-01
A sample preparation procedure for the determination of deoxynivalenol (DON) using attenuated total reflection mid-infrared spectroscopy is presented. Repeatable spectra were obtained from samples featuring a narrow particle size distribution. Samples were ground with a centrifugal mill and sieved with an analytical sieve shaker. Particle sizes of <100, 100–250, 250–500, 500–710 and 710–1000 μm were obtained. Repeatability, classification and quantification abilities for DON were compared with non-sieved samples. The 100–250 μm fraction showed the best repeatability. The relative standard deviation of spectral measurements improved from 20 to 4.4%, and 100% of sieved samples were correctly classified compared with 79% of non-sieved samples. The DON level in the analysed fractions was a good estimate of overall toxin content.
Ripple, Dean C; Montgomery, Christopher B; Hu, Zhishang
2015-02-01
Accurate counting and sizing of protein particles has been limited by discrepancies of counts obtained by different methods. To understand the bias and repeatability of techniques in common use in the biopharmaceutical community, the National Institute of Standards and Technology has conducted an interlaboratory comparison for sizing and counting subvisible particles from 1 to 25 μm. Twenty-three laboratories from industry, government, and academic institutions participated. The circulated samples consisted of a polydisperse suspension of abraded ethylene tetrafluoroethylene particles, which closely mimic the optical contrast and morphology of protein particles. For restricted data sets, agreement between data sets was reasonably good: relative standard deviations (RSDs) of approximately 25% for light obscuration counts with lower diameter limits from 1 to 5 μm, and approximately 30% for flow imaging with specified manufacturer and instrument setting. RSDs of the reported counts for unrestricted data sets were approximately 50% for both light obscuration and flow imaging. Differences between instrument manufacturers were not statistically significant for light obscuration but were significant for flow imaging. We also report a method for accounting for differences in the reported diameter for flow imaging and electrical sensing zone techniques; the method worked well for diameters greater than 15 μm. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Sand dune ridge alignment effects on surface BRF over the Libya-4 CEOS calibration site.
Govaerts, Yves M
2015-02-03
The Libya-4 desert area, located in the Great Sand Sea, is one of the most important bright desert CEOS pseudo-invariant calibration sites owing to its size and radiometric stability. This site is intensively used for radiometer drift monitoring, sensor intercalibration and as an absolute calibration reference based on simulated radiances traceable to the SI standard. The Libya-4 morphology is composed of oriented sand dunes shaped by dominant winds. The effects of sand dune spatial organization on the surface bidirectional reflectance factor are analyzed in this paper using Raytran, a 3D radiative transfer model. The topography is characterized with the 30 m resolution ASTER digital elevation model. Four different region-of-interest sizes, ranging from 10 km up to 100 km, are analyzed. Results show that sand dunes generate more backscattering than forward scattering at the surface. The mean surface reflectance averaged over different viewing and illumination angles is largely independent of the size of the selected area, though the standard deviation differs. The Sun azimuth position has an effect on the surface reflectance field, which is more pronounced for high Sun zenith angles. Such 3D azimuthal effects should be taken into account to decrease the simulated radiance uncertainty over Libya-4 below 3% for wavelengths larger than 600 nm.
Depression and Oxidative Stress: Results From a Meta-Analysis of Observational Studies
Palta, Priya; Samuel, Laura J.; Miller, Edgar R.; Szanton, Sarah L.
2014-01-01
Objective To perform a systematic review and meta-analysis that quantitatively tests and summarizes the hypothesis that depression results in elevated oxidative stress and lower antioxidant levels. Methods We performed a meta-analysis of studies that reported an association between depression and oxidative stress and/or antioxidant status markers. PubMed and EMBASE databases were searched for articles published from January 1980 through December 2012. A random-effects model, weighted by inverse variance, was performed to pool standard deviation (Cohen’s d) effect size estimates across studies for oxidative stress and antioxidant status measures, separately. Results Twenty-three studies with 4980 participants were included in the meta-analysis. Depression was most commonly measured using the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition criteria. A Cohen’s d effect size of 0.55 (95% confidence interval = 0.47–0.63) was found for the association between depression and oxidative stress, indicating a roughly 0.55 of 1-standard-deviation increase in oxidative stress among individuals with depression compared with those without depression. The results of the studies displayed significant heterogeneity (I2 = 80.0%, p < .001). A statistically significant effect was also observed for the association between depression and antioxidant status markers (Cohen’s d = −0.24, 95% confidence interval = −0.33 to −0.15). Conclusions This meta-analysis observed an association between depression and oxidative stress and antioxidant status across many different studies. Differences in measures of depression and markers of oxidative stress and antioxidant status markers could account for the observed heterogeneity. These findings suggest that well-established associations between depression and poor health outcomes may be mediated by high oxidative stress. PMID:24336428
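Inverse-variance random-effects pooling of standardized mean differences can be sketched with the DerSimonian-Laird estimator, a standard choice for the random-effects model described above. The per-study Cohen's d values and variances below are hypothetical:

```python
import numpy as np

def dersimonian_laird(d, var_d):
    """Inverse-variance random-effects pooling (DerSimonian-Laird) of
    standardized mean differences; returns pooled estimate and 95% CI."""
    d, var_d = np.asarray(d, float), np.asarray(var_d, float)
    w = 1.0 / var_d                                  # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max((q - (len(d) - 1)) / c, 0.0)          # between-study variance
    w_re = 1.0 / (var_d + tau2)
    pooled = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical per-study Cohen's d values and variances
d = [0.4, 0.6, 0.7, 0.5]
v = [0.02, 0.03, 0.04, 0.025]
est, lo, hi = dersimonian_laird(d, v)
print(lo < est < hi)
```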
Modeling and monitoring of tooth fillet crack growth in dynamic simulation of spur gear set
NASA Astrophysics Data System (ADS)
Guilbault, Raynald; Lalonde, Sébastien; Thomas, Marc
2015-05-01
This study integrates a linear elastic fracture mechanics analysis of the tooth fillet crack propagation into a nonlinear dynamic model of spur gear sets. An original formulation establishes the rigidity of sound and damaged teeth. The formula incorporates the contribution of the flexible gear body and real crack trajectories in the fillet zone. The work also develops a KI prediction formula. A validation of the equation estimates shows that the predicted KI are in close agreement with published numerical and experimental values. The representation also relies on the Paris-Erdogan equation completed with crack closure effects. The analysis considers that during dN fatigue cycles, a harmonic mean of ΔK assures optimal evaluations. The paper evaluates the influence of the mesh frequency distance from the resonances of the system. The obtained results indicate that while the dependence may demonstrate obvious nonlinearities, the crack progression rate increases with a mesh frequency augmentation. The study develops a tooth fillet crack propagation detection procedure based on residual signals (RS) prepared in the frequency domain. The proposed approach accepts any gear conditions as reference signature. The standard deviation and mean values of the RS are evaluated as gear condition descriptors. A trend tracking of their responses obtained from a moving linear regression completes the analysis. Globally, the results show that, regardless of the reference signal, both descriptors are sensitive to the tooth fillet crack and sharply react to tooth breakage. On average, the mean value detected the crack propagation after a size increase of 3.69 percent as compared to the reference condition, whereas the standard deviation required crack progressions of 12.24 percent. Moreover, the mean descriptor shows evolutions closer to the crack size progression.
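The Paris-Erdogan law referenced above, da/dN = C·(ΔK)^m, can be integrated numerically over fatigue cycles. The stress-intensity-factor expression and material constants below are generic textbook values for illustration, not the paper's gear-tooth model:

```python
import numpy as np

def paris_growth(a0, cycles, delta_k_of_a, C=1e-11, m=3.0, dN=1000):
    """Euler integration of the Paris-Erdogan law da/dN = C * (dK)^m.
    Units are illustrative: a in m, dK in MPa*sqrt(m)."""
    a, history = a0, [a0]
    for _ in range(0, cycles, dN):
        a += C * delta_k_of_a(a) ** m * dN
        history.append(a)
    return np.array(history)

# Hypothetical edge-crack SIF range: dK = 1.12 * dSigma * sqrt(pi * a)
delta_sigma = 120.0  # MPa, assumed load range
dk = lambda a: 1.12 * delta_sigma * np.sqrt(np.pi * a)
hist = paris_growth(a0=1e-3, cycles=200_000, delta_k_of_a=dk)
print(hist[-1] > hist[0])  # crack length grows monotonically
```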
Hubbard, Logan; Lipinski, Jerry; Ziemer, Benjamin; Malkasian, Shant; Sadeghi, Bahman; Javan, Hanna; Groves, Elliott M; Dertli, Brian; Molloi, Sabee
2018-01-01
Purpose To retrospectively validate a first-pass analysis (FPA) technique that combines computed tomographic (CT) angiography and dynamic CT perfusion measurement into one low-dose examination. Materials and Methods The study was approved by the animal care committee. The FPA technique was retrospectively validated in six swine (mean weight, 37.3 kg ± 7.5 [standard deviation]) between April 2015 and October 2016. Four to five intermediate-severity stenoses were generated in the left anterior descending artery (LAD), and 20 contrast material-enhanced volume scans were acquired per stenosis. All volume scans were used for maximum slope model (MSM) perfusion measurement, but only two volume scans were used for FPA perfusion measurement. Perfusion measurements in the LAD, left circumflex artery (LCx), right coronary artery, and all three coronary arteries combined were compared with microsphere perfusion measurements by using regression, root-mean-square error, root-mean-square deviation, Lin concordance correlation, and diagnostic outcomes analysis. The CT dose index and size-specific dose estimate per two-volume FPA perfusion measurement were also determined. Results FPA and MSM perfusion measurements (P_FPA and P_MSM) in all three coronary arteries combined were related to reference standard microsphere perfusion measurements (P_MICRO), as follows: P_FPA_COMBINED = 1.02 P_MICRO_COMBINED + 0.11 (r = 0.96) and P_MSM_COMBINED = 0.28 P_MICRO_COMBINED + 0.23 (r = 0.89). The CT dose index and size-specific dose estimate per two-volume FPA perfusion measurement were 10.8 and 17.8 mGy, respectively. Conclusion The FPA technique was retrospectively validated in a swine model and has the potential to be used for accurate, low-dose vessel-specific morphologic and physiologic assessment of coronary artery disease. © RSNA, 2017.
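Lin's concordance correlation coefficient, one of the agreement measures listed above, can be computed directly from means, variances, and the covariance of the two measurement sets. A minimal sketch (population variances used for simplicity; the data are made up):

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement sets."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxy = np.mean((x - mx) * (y - my))  # population covariance
    sx2, sy2 = x.var(), y.var()         # population variances
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Perfect agreement gives 1; a constant bias lowers the CCC
x = np.array([1.0, 2.0, 3.0, 4.0])
print(lin_ccc(x, x))                 # → 1.0
print(round(lin_ccc(x, x + 1.0), 3))  # → 0.714
```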
Guard, Jean; Rothrock, Michael J; Shah, Devendra H; Jones, Deana R; Gast, Richard K; Sanchez-Ingunza, Roxana; Madsen, Melissa; El-Attrache, John; Lungu, Bwalya
Phenotype microarrays were analyzed for 51 datasets derived from Salmonella enterica. The top 4 serotypes associated with poultry products and one associated with turkey, respectively Typhimurium, Enteritidis, Heidelberg, Infantis and Senftenberg, were represented. Datasets were partitioned initially into two clusters based on ranking by values at pH 4.5 (PM10 A03). Negative control wells were used to establish 90 respiratory units as the point differentiating acid-resistant from acid-sensitive strains. Thus, 24 isolates that appeared most acid-resistant were compared initially to 27 that appeared most acid-sensitive (24 × 27 format). Paired cluster analysis was also done and it included the 7 most acid-resistant and -sensitive datasets (7 × 7 format). Statistics of the ranked data were then calculated, in order: the standard deviation, the probability value from Student's t-test, and a measure of the magnitude of difference called effect size. Data were reported as significant if, by order of filtering, the following parameters were calculated: i) a standard deviation of 24 respiratory units or greater from all datasets for each chemical, ii) a probability value of less than or equal to 0.03 between clusters and iii) an effect size of at least 0.50 or greater between clusters. Results suggest that between 7.89% and 23.16% of 950 chemicals differentiated acid-resistant isolates from sensitive ones, depending on the format applied. Differences were more evident at the extremes of phenotype using the subset of data in the paired 7 × 7 format. Results thus provide a strategy for selecting compounds for additional research, which may impede the emergence of acid-resistant Salmonella enterica in food. Published by Elsevier Masson SAS.
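The three-stage filter described (overall SD ≥ 24 respiratory units, then p ≤ 0.03 between clusters, then effect size ≥ 0.50) can be sketched for a single chemical as follows; the respiratory-unit readings are hypothetical:

```python
import numpy as np
from scipy.stats import ttest_ind

def discriminating(all_values, resistant, sensitive,
                   sd_min=24.0, p_max=0.03, d_min=0.50):
    """Apply the three filters in order: overall SD, t-test p-value between
    clusters, then Cohen's d effect size."""
    if np.std(all_values, ddof=1) < sd_min:
        return False
    _, p = ttest_ind(resistant, sensitive)
    if p > p_max:
        return False
    pooled_sd = np.sqrt((np.var(resistant, ddof=1) + np.var(sensitive, ddof=1)) / 2)
    d = abs(np.mean(resistant) - np.mean(sensitive)) / pooled_sd
    return d >= d_min

# Hypothetical respiratory-unit readings for one chemical
res = np.array([150., 160., 145., 155., 165., 150., 158.])
sen = np.array([ 90., 100.,  95., 105.,  92.,  98., 101.])
print(discriminating(np.concatenate([res, sen]), res, sen))  # → True
```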
Evidence for repetitive load in the trapezius muscle during a tapping task.
Tomatis, L; Müller, C; Nakaseko, M; Läubli, T
2012-08-01
Many studies describe the trapezius muscle activation pattern during repetitive key-tapping focusing on continuous activation. The objectives of this study were to determine whether the upper trapezius is phasically active during supported key tapping, whether this activity is cross-correlated with forearm muscle activity, and whether trapezius activity depends on key characteristics. Thirteen subjects (29.7 ± 11.4 years) were tested. Surface EMG of the finger's extensor and flexor and of the trapezius muscles, as well as the key on-off signal, was recorded while the subject performed a 2-min session of key tapping at 4 Hz. The linear envelopes obtained were cut into single tapping cycles extending from one onset to the next onset signal and subsequently time-normalized. Effect size between mean range and maximal standard deviation was calculated to determine whether a burst of trapezius muscle activation was present. Cross-correlation was used to determine the time-lag of the activity bursts between forearm and trapezius muscles. For each person the mean and standard deviation of the cross-correlation coefficients between forearm muscles and trapezius were determined. Results showed a burst of activation in the trapezius muscle during most of the tapping cycles. The calculated effect size was ≥0.5 in 67% of the cases. Cross-correlation factors between forearm and trapezius muscle activity were between 0.75 and 0.98 for both extensor and flexor muscles. The cross-correlated phasic trapezius activity did not depend on key characteristics. Trapezius muscle was dynamically active during key tapping; its activity was clearly correlated with forearm muscles' activity.
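The peak cross-correlation coefficient and its time lag between two EMG envelopes can be sketched as below, using synthetic periodic envelopes rather than the study's recordings:

```python
import numpy as np

def xcorr_delay(x, y):
    """Peak normalized cross-correlation between x and y, and the delay
    (in samples) of y behind x at that peak."""
    xc = (x - np.mean(x)) / (np.std(x) * len(x))
    yc = (y - np.mean(y)) / np.std(y)
    c = np.correlate(xc, yc, mode="full")
    k = int(np.argmax(c))
    return float(c[k]), (len(y) - 1) - k

# Synthetic tapping-burst envelopes, trapezius burst delayed by 5 samples
t = np.arange(500)
forearm = np.sin(2 * np.pi * t / 100) ** 2
trapezius = np.roll(forearm, 5)
r, delay = xcorr_delay(forearm, trapezius)
print(delay)  # → 5
```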
Floodplain complexity and surface metrics: influences of scale and geomorphology
Scown, Murray W.; Thoms, Martin C.; DeJager, Nathan R.
2015-01-01
Many studies of fluvial geomorphology and landscape ecology examine a single river or landscape and thus lack generality, making it difficult to develop a general understanding of the linkages between landscape patterns and larger-scale driving variables. We examined the spatial complexity of eight floodplain surfaces in widely different geographic settings and determined how patterns measured at different scales relate to different environmental drivers. Floodplain surface complexity is defined as having highly variable surface conditions that are also highly organised in space. These two components of floodplain surface complexity were measured across multiple sampling scales from LiDAR-derived DEMs. The surface character and variability of each floodplain were measured using four surface metrics; namely, standard deviation, skewness, coefficient of variation, and standard deviation of curvature from a series of moving window analyses ranging from 50 to 1000 m in radius. The spatial organisation of each floodplain surface was measured using spatial correlograms of the four surface metrics. Surface character, variability, and spatial organisation differed among the eight floodplains; and random, fragmented, highly patchy, and simple gradient spatial patterns were exhibited, depending upon the metric and window size. Differences in surface character and variability among the floodplains became statistically stronger with increasing sampling scale (window size), as did their associations with environmental variables. Sediment yield was consistently associated with differences in surface character and variability, as were flow discharge and variability at smaller sampling scales. Floodplain width was associated with differences in the spatial organisation of surface conditions at smaller sampling scales, while valley slope was weakly associated with differences in spatial organisation at larger scales. 
A comparison of floodplain landscape patterns measured at different scales would improve our understanding of the role that different environmental variables play at different scales and in different geomorphic settings.
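Moving-window surface metrics of the kind used above can be sketched with a generic filter; the square windows and synthetic DEM below are stand-ins for the paper's circular windows and LiDAR data:

```python
import numpy as np
from scipy.ndimage import generic_filter

def surface_metrics(dem, window):
    """Moving-window standard deviation and coefficient of variation of a
    DEM (square window of `window` x `window` cells, a simplification of
    the circular moving windows used in the paper)."""
    local_sd = generic_filter(dem, np.std, size=window)
    local_mean = generic_filter(dem, np.mean, size=window)
    cv = np.divide(local_sd, local_mean, out=np.zeros_like(local_sd),
                   where=local_mean != 0)
    return local_sd, cv

rng = np.random.default_rng(1)
dem = 10.0 + rng.normal(0, 0.5, (50, 50))  # synthetic floodplain surface (m)
sd, cv = surface_metrics(dem, window=5)
print(sd.shape == dem.shape and bool(np.all(sd >= 0)))
```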
NASA Astrophysics Data System (ADS)
Wang, Lei; Moltz, Jan H.; Bornemann, Lars; Hahn, Horst K.
2012-03-01
Precise size measurement of enlarged lymph nodes is a significant indicator for diagnosing malignancy, follow-up and therapy monitoring of cancer diseases. The presence of diverse sizes and shapes, inhomogeneous enhancement and the adjacency to neighboring structures with similar intensities make the segmentation task challenging. We present a semi-automatic approach requiring minimal user interaction to segment the enlarged lymph nodes quickly and robustly. First, a stroke approximating the largest diameter of a specific lymph node is drawn manually, from which a volume of interest (VOI) is determined. Second, based on a statistical analysis of the intensities in the dilated stroke area, a region growing procedure is utilized within the VOI to create an initial segmentation of the target lymph node. Third, a rotatable spiral-scanning technique is proposed to resample the 3D boundary surface of the lymph node to a 2D boundary contour in a transformed polar image. The boundary contour is found by seeking the optimal path in the 2D polar image with a dynamic programming algorithm and is eventually transformed back to 3D. Ultimately, the boundary surface of the lymph node is determined using an interpolation scheme followed by post-processing steps. To test the robustness and efficiency of our method, a quantitative evaluation was conducted with a dataset of 315 lymph nodes acquired from 79 patients with lymphoma and melanoma. Compared to the reference segmentations, an average Dice coefficient of 0.88 with a standard deviation of 0.08, and an average absolute surface distance of 0.54 mm with a standard deviation of 0.48 mm, were achieved.
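The dynamic-programming step can be sketched as a minimal-cost left-to-right path through the polar-transformed image. The smoothness constraint (radius changes by at most one row per column) and the synthetic cost image below are simplifying assumptions, not the paper's exact formulation:

```python
import numpy as np

def optimal_boundary(cost):
    """Minimal-cost left-to-right path through a polar cost image
    (rows = radius, columns = angle), with the radius allowed to change
    by at most one row per column."""
    rows, cols = cost.shape
    acc = cost.copy()
    back = np.zeros((rows, cols), dtype=int)
    for j in range(1, cols):
        for i in range(rows):
            lo, hi = max(i - 1, 0), min(i + 2, rows)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    path = [int(np.argmin(acc[:, -1]))]
    for j in range(cols - 1, 0, -1):
        path.append(back[path[-1], j])
    return path[::-1]  # radius index per angle column

# Synthetic cost image: a cheap band around row 10 with a gentle wobble
rows, cols = 20, 36
cost = np.ones((rows, cols))
true_r = (10 + np.round(2 * np.sin(np.linspace(0, 2 * np.pi, cols)))).astype(int)
cost[true_r, np.arange(cols)] = 0.0
path = optimal_boundary(cost)
print(np.max(np.abs(np.array(path) - true_r)) <= 1)  # → True
```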
Investigation of imaging properties for submillimeter rectangular pinholes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Dan, E-mail: dxia@uchicago.edu; Moore, Stephen C.; Park, Mi-Ae
Purpose: Recently, a multipinhole collimator with inserts that have both rectangular apertures and rectangular fields of view (FOVs) has been proposed for SPECT imaging, since it can tile the projection onto the detector efficiently and the FOVs in the transverse and axial directions become separable. The purpose of this study is to investigate the imaging properties of rectangular-aperture pinholes with submillimeter aperture sizes. Methods: In this work, the authors have conducted sensitivity and FOV experiments for 18 replicates of a prototype insert fabricated in platinum/iridium (Pt/Ir) alloy with submillimeter square apertures. A sin^q(θ) fit to the experimental sensitivity has been performed for these inserts. For the FOV measurement, the authors have proposed a new formula to calculate the projection intensity of a flood image on the detector, taking into account the penumbra effect. By fitting this formula to the measured projection data, the authors obtained the acceptance angles. Results: The mean (standard deviation) of the fitted sensitivity exponents q and effective edge lengths w_e were, respectively, 10.8 (1.8) and 0.38 mm (0.02 mm), which were close to the values, 7.84 and 0.396 mm, obtained from Monte Carlo calculations using the parameters of the designed inserts. For the FOV measurement, the mean (standard deviation) of the transverse and axial acceptances were 35.0° (1.2°) and 30.5° (1.6°), which are in good agreement with the designed values (34.3° and 29.9°). Conclusions: These results showed that the physical properties of the fabricated inserts with submillimeter aperture size matched our design well.
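A sin^q(θ) sensitivity fit of the kind described can be sketched with nonlinear least squares; the synthetic measurements below are generated around the reported mean exponent q = 10.8, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def sensitivity(theta, s0, q):
    """Pinhole sensitivity model: S(theta) = s0 * sin(theta)^q."""
    return s0 * np.sin(theta) ** q

# Synthetic measurements around q = 10.8 with 1% multiplicative noise
theta = np.deg2rad(np.linspace(60, 90, 16))
rng = np.random.default_rng(2)
s = sensitivity(theta, 1.0, 10.8) * (1 + rng.normal(0, 0.01, theta.size))
(s0_fit, q_fit), _ = curve_fit(sensitivity, theta, s, p0=[1.0, 8.0])
print(round(q_fit, 1))  # close to the generating exponent
```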
The MODIS Aerosol Algorithm, Products and Validation
NASA Technical Reports Server (NTRS)
Remer, L. A.; Kaufman, Y. J.; Tanre, D.; Mattoo, S.; Chu, D. A.; Martins, J. V.; Li, R.-R.; Ichoku, C.; Levy, R. C.; Kleidman, R. G.
2003-01-01
The MODerate resolution Imaging Spectroradiometer (MODIS) aboard both NASA's Terra and Aqua satellites is making near-global daily observations of the Earth in a wide spectral range. These measurements are used to derive spectral aerosol optical thickness and aerosol size parameters over both land and ocean. The aerosol products available over land include aerosol optical thickness at three visible wavelengths, a measure of the fraction of aerosol optical thickness attributed to the fine mode, and several derived parameters including reflected spectral solar flux at top of atmosphere. Over ocean, the aerosol optical thickness is provided at seven wavelengths from 0.47 microns to 2.13 microns. In addition, quantitative aerosol size information includes effective radius of the aerosol and quantitative fraction of optical thickness attributed to the fine mode. Spectral aerosol flux, mass concentration, and number of cloud condensation nuclei round out the list of available aerosol products over the ocean. The spectral optical thickness and effective radius of the aerosol over the ocean are validated by comparison with two years of AERONET data gleaned from 133 AERONET stations. 8000 MODIS aerosol retrievals co-located with AERONET measurements confirm that one standard deviation of MODIS optical thickness retrievals falls within the predicted uncertainty of Δτ ≈ ±0.03 ± 0.05τ over ocean and Δτ ≈ ±0.05 ± 0.15τ over land. 271 MODIS aerosol retrievals co-located with AERONET inversions at island and coastal sites suggest that one standard deviation of MODIS effective radius retrievals falls within Δr_eff ≈ ±0.11 microns. The accuracy of the MODIS retrievals suggests that the product can be used to help narrow the uncertainties associated with aerosol radiative forcing of global climate.
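The quoted uncertainty envelopes have the form |Δτ| ≤ a + b·τ, with (a, b) = (0.03, 0.05) over ocean and (0.05, 0.15) over land. A small sketch of how individual retrievals could be checked against the envelope; the function name and sample values are illustrative only:

```python
import numpy as np

def within_modis_envelope(tau_modis, tau_aeronet, surface="ocean"):
    """Check whether MODIS optical-thickness retrievals fall inside the
    predicted one-sigma envelope |delta tau| <= a + b * tau, using the
    (a, b) pairs quoted in the abstract for ocean and land."""
    a, b = (0.03, 0.05) if surface == "ocean" else (0.05, 0.15)
    tau_modis = np.asarray(tau_modis, float)
    tau_aeronet = np.asarray(tau_aeronet, float)
    return np.abs(tau_modis - tau_aeronet) <= a + b * tau_aeronet

# First retrieval is off by 0.02 (inside the 0.04 envelope); second by 0.20.
ok = within_modis_envelope([0.22, 0.40], [0.20, 0.20], "ocean")
```

The validation statement in the abstract is that roughly one standard deviation (about two thirds) of retrievals satisfy this test, not every retrieval.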
Phu, Jack; Bui, Bang V; Kalloniatis, Michael; Khuu, Sieu K
2018-03-01
The number of subjects needed to establish the normative limits for visual field (VF) testing is not known. Using bootstrap resampling, we determined whether the ground truth mean, distribution limits, and standard deviation (SD) could be approximated using different set size (x) levels, in order to provide guidance for the number of healthy subjects required to obtain robust VF normative data. We analyzed the 500 Humphrey Field Analyzer (HFA) SITA-Standard results of 116 healthy subjects and 100 HFA full threshold results of 100 psychophysically experienced healthy subjects. These VFs were resampled (bootstrapped) to determine mean sensitivity, distribution limits (5th and 95th percentiles), and SD for different x and numbers of resamples. We also used the VF results of 122 glaucoma patients to determine the performance of ground truth and bootstrapped results in identifying and quantifying VF defects. An x of 150 (for SITA-Standard) and 60 (for full threshold) produced bootstrapped descriptive statistics that were no longer different from the original distribution limits and SD. Removing outliers produced similar results. Differences between original and bootstrapped limits in detecting glaucomatous defects were minimized at x = 250. Ground truth statistics of VF sensitivities could be approximated using set sizes that are significantly smaller than the original cohort. Outlier removal facilitates the use of Gaussian statistics and does not significantly affect the distribution limits. We provide guidance for choosing the cohort size for different levels of error when performing normative comparisons with glaucoma patients.
SU-F-I-47: Optimizing Protocols for Image Quality and Dose in Abdominal CT of Large Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, L; Yester, M
Purpose: Newer CT scanners are able to use scout views to adjust mA throughout the scan in order to achieve a given noise level. However, given constraints of radiologist preferences for kVp and rotation time, it may not be possible to achieve an acceptable noise level for large patients. A study was initiated to determine for which patients kVp and/or rotation time should be changed in order to achieve acceptable image quality. Methods: Patient scans were reviewed on two new Emergency Department scanners (Philips iCT) to identify patients over a large range of sizes. These iCTs were set with a limit of 500 mA to safeguard against a failure that might cause a CT scan to be (incorrectly) obtained at too-high mA. Scout views of these scans were assessed for both AP and LAT patient width and AP and LAT standard deviation in an ROI over the liver. Effective diameter and the product of the scout standard deviations over the liver were both studied as possible metrics for identifying patients who would need kVp and/or rotation time changed. The mA used for the liver in the CT was compared to these metrics for those patients whose CT scans showed acceptable image quality. Results: Both effective diameter and the product of the scout standard deviations over the liver yield similar predictions of which patients will require the kVp and/or rotation time to be changed to achieve an optimal combination of image quality and dose. Conclusion: We describe two mechanisms by which CT technologists can determine, from scout characteristics, what kVp, mA limit, and rotation time to use when DoseRight with our physicians’ preferred kVp and rotation time will not yield adequate image quality.
Gale, Catharine R; Cooper, Rachel; Craig, Leone; Elliott, Jane; Kuh, Diana; Richards, Marcus; Starr, John M; Whalley, Lawrence J; Deary, Ian J
2012-01-01
Poorer cognitive ability in youth is a risk factor for later mental health problems, but it is largely unknown whether cognitive ability, in youth or in later life, is predictive of mental wellbeing. The purpose of this study was to investigate whether cognitive ability at age 11 years, cognitive ability in later life, or lifetime cognitive change are associated with mental wellbeing in older people. We used data on 8191 men and women aged 50 to 87 years from four cohorts in the HALCyon collaborative research programme into healthy ageing: the Aberdeen Birth Cohort 1936, the Lothian Birth Cohort 1921, the National Child Development Survey, and the MRC National Survey of Health and Development. We used linear regression to examine associations between cognitive ability at age 11, cognitive ability in later life, and lifetime change in cognitive ability and mean score on the Warwick-Edinburgh Mental Wellbeing Scale, and meta-analysis to obtain an overall estimate of the effect of each. People whose cognitive ability at age 11 was a standard deviation above the mean scored 0.53 points higher on the mental wellbeing scale (95% confidence interval 0.36, 0.71). The equivalent value for cognitive ability in later life was 0.89 points (0.72, 1.07). A standard deviation improvement in cognitive ability in later life relative to childhood ability was associated with a 0.66 point (0.39, 0.93) advantage in wellbeing score. These effect sizes equate to around 0.1 of a standard deviation in mental wellbeing score. Adjustment for potential confounding and mediating variables, primarily the personality trait neuroticism, substantially attenuated these associations. Associations between cognitive ability in childhood or lifetime cognitive change and mental wellbeing in older people are slight and may be confounded by personality trait differences.
Ahmad, Zaheer; Lim, Zek; Roman, Kevin; Haw, Marcus; Anderson, Robert H; Vettukattil, Joseph
2016-02-01
Multiplanar re-formatting of full-volume three-dimensional echocardiography data sets offers new insights into the morphology of atrioventricular septal defects. We hypothesised that distortion of the alignment between the atrial and ventricular septums results in imbalanced venous return to the ventricles, with consequent proportional ventricular hypoplasia. A single observer evaluated 31 patients with atrioventricular septal defects, of whom 17 were boys, with a mean age of 52.09 months, standard deviation of 55, and a range from 2 to 264 months. Ventricular imbalance, observed in nine patients, was determined by two-dimensional assessment, and confirmed at surgical inspection in selected cases when a univentricular strategy was undertaken. Offline analysis using multiplanar re-formatting was performed. A line was drawn through the length of the ventricular septum and a second line along the plane of the atrial septum, taking the angle between these two lines as the atrioventricular septal angle. We compared the angle between 22 patients with adequately sized ventricles and those with ventricular imbalance undergoing univentricular repair. In the 22 patients undergoing biventricular repair, the septal angle was 0° in 14 patients, the other eight patients having angles ranging from 1° to 36°, with a mean angle of 7.4° and standard deviation of 11.1°. The mean angle in the nine patients with ventricular imbalance was 28.6°, with a standard deviation of 3.04° and a range from 26° to 35°. Of those undergoing univentricular repair, two patients died, with angles of 26° and 30°, respectively. The atrioventricular septal angle derived via multiplanar re-formatting gives important information regarding the degree of ventricular hypoplasia and imbalance. When this angle is above 25°, patients are likely to have ventricular imbalance requiring univentricular repair.
Selection and Classification Using a Forecast Applicant Pool.
ERIC Educational Resources Information Center
Hendrix, William H.
The document presents a forecast model of the future Air Force applicant pool. By forecasting applicants' quality (means and standard deviations of aptitude scores) and quantity (total number of applicants), a potential enlistee could be compared to the forecasted pool. The data used to develop the model consisted of means, standard deviation, and…
NASA Technical Reports Server (NTRS)
Herrman, B. D.; Uman, M. A.; Brantley, R. D.; Krider, E. P.
1976-01-01
The principle of operation of a wideband crossed-loop magnetic-field direction finder is studied by comparing the bearing determined from the NS and EW magnetic fields at various times up to 155 microsec after return stroke initiation with the TV-determined lightning channel base direction. For 40 lightning strokes in the 3 to 12 km range, the difference between the bearings found from magnetic fields sampled at times between 1 and 10 microsec and the TV channel-base data has a standard deviation of 3-4 deg. Included in this standard deviation is a 2-3 deg measurement error. For fields sampled at progressively later times, both the mean and the standard deviation of the difference between the direction-finder bearing and the TV bearing increase. Near 150 microsec, means are about 35 deg and standard deviations about 60 deg. The physical reasons for the late-time inaccuracies in the wideband direction finder and the occurrence of these effects in narrow-band VLF direction finders are considered.
Wavelength selection method with standard deviation: application to pulse oximetry.
Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija
2011-07-01
Near-infrared spectroscopy provides useful biological information after the radiation has penetrated through the tissue, within the therapeutic window. One of the significant shortcomings of the current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to their health status that, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that allows low noise sensitivity. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of standard deviation that we propose as a figure of merit in the presence of the noise introduced by the living subject. Even in the presence of diverse sources of noise, we identify four wavelength domains with standard deviation minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.
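The core of the standard-deviation-map idea, computing a per-wavelength temporal SD over repeated spectra and preferring the wavelengths least sensitive to temporal noise, might be sketched like this. The data and selection rule are a toy simplification of the authors' method:

```python
import numpy as np

def low_noise_wavelengths(spectra, wavelengths, n_select=2):
    """Build the standard-deviation map over repeated spectra (shape:
    n_measurements x n_wavelengths) and return the `n_select` wavelengths
    least sensitive to temporal noise, plus the full SD map."""
    sd = np.std(np.asarray(spectra, float), axis=0)
    order = np.argsort(sd)
    return np.asarray(wavelengths)[order[:n_select]], sd

# Illustrative pulse-oximetry band (nm) and three repeated "spectra".
wl = np.array([660, 700, 805, 940])
spectra = np.array([[1.0, 2.0, 3.0, 4.0],
                    [1.1, 2.5, 3.0, 4.4],
                    [0.9, 1.5, 3.0, 3.6]])
chosen, sd_map = low_noise_wavelengths(spectra, wl)
```

Here the 805 nm column never varies and the 660 nm column varies least among the rest, so those two are selected.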
How random is a random vector?
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2015-12-01
Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation" (the square root of the generalized variance) is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index" (a derivative of the Wilks standard deviation) is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams": tangible planar visualizations that answer the question: how random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.
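Per the abstract, the Wilks standard deviation is the square root of the generalized variance, i.e. of the determinant of the covariance matrix. The sketch below also includes one plausible reading of the "uncorrelation index" (the Wilks SD divided by the product of marginal SDs, which equals the square root of the determinant of the correlation matrix); that reading is an assumption for illustration, not the paper's exact definition:

```python
import numpy as np

def wilks_std(samples):
    """Wilks standard deviation: the square root of the generalized
    variance, i.e. sqrt(det(covariance matrix)) of the sample rows."""
    cov = np.cov(np.asarray(samples, float), rowvar=False)
    return np.sqrt(np.linalg.det(cov))

def uncorrelation_index(samples):
    """Illustrative 'uncorrelation index': Wilks SD divided by the product
    of the marginal SDs, equal to sqrt(det(correlation matrix)). Near 1
    for uncorrelated components, near 0 for strongly correlated ones.
    (Definition assumed here, not quoted from the paper.)"""
    x = np.asarray(samples, float)
    return wilks_std(x) / np.prod(x.std(axis=0, ddof=1))

rng = np.random.default_rng(1)
independent = rng.normal(size=(5000, 2))          # uncorrelated components
x = rng.normal(size=5000)
e = rng.normal(size=5000)
pair = np.column_stack([x, 0.8 * x + 0.6 * e])    # correlation ~ 0.8
```

For a 2-vector with correlation r, the index is sqrt(1 - r^2): about 1 for the independent pair and about 0.6 for the r ≈ 0.8 pair above.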
Association of auricular pressing and heart rate variability in pre-exam anxiety students.
Wu, Wocao; Chen, Junqi; Zhen, Erchuan; Huang, Huanlin; Zhang, Pei; Wang, Jiao; Ou, Yingyi; Huang, Yong
2013-03-25
A total of 30 students scoring between 12 and 20 on the Test Anxiety Scale who had been exhibiting an anxious state > 24 hours, and 30 normal control students were recruited. Indices of heart rate variability were recorded using an Actiheart electrocardiogram recorder at 10 minutes before auricular pressing, in the first half of stimulation and in the second half of stimulation. The results revealed that the standard deviation of all normal to normal intervals and the root mean square of standard deviation of normal to normal intervals were significantly increased after stimulation. The heart rate variability triangular index, very-low-frequency power, low-frequency power, and the ratio of low-frequency to high-frequency power were increased to different degrees after stimulation. Compared with normal controls, the root mean square of standard deviation of normal to normal intervals was significantly increased in anxious students following auricular pressing. These results indicated that auricular pressing can elevate heart rate variability, especially the root mean square of standard deviation of normal to normal intervals in students with pre-exam anxiety.
Bulluck, Heerajnarain; Hammond-Haley, Matthew; Weinmann, Shane; Martinez-Macias, Roberto; Hausenloy, Derek J
2017-03-01
The aim of this study was to review randomized controlled trials (RCTs) using cardiac magnetic resonance (CMR) to assess myocardial infarct (MI) size in reperfused patients with ST-segment elevation myocardial infarction (STEMI). There is limited guidance on the use of CMR in clinical cardioprotection RCTs in patients with STEMI treated by primary percutaneous coronary intervention. All RCTs in which CMR was used to quantify MI size in patients with STEMI treated with primary percutaneous coronary intervention were identified and reviewed. Sixty-two RCTs (10,570 patients, January 2006 to November 2016) were included. One-third did not report CMR vendor or scanner strength, the contrast agent and dose used, and the MI size quantification technique. Gadopentetate dimeglumine was most commonly used, followed by gadoterate meglumine and gadobutrol at 0.20 mmol/kg each, with late gadolinium enhancement acquired at 10 min; in most RCTs, MI size was quantified manually, followed by the 5-standard deviation (5-SD) threshold; dropout rates were 9% for acute CMR only and 16% for paired acute and follow-up scans. Weighted mean acute and chronic MI sizes (≤12 h, initial TIMI [Thrombolysis in Myocardial Infarction] flow grade 0 to 3) from the control arms were 21 ± 14% and 15 ± 11% of the left ventricle, respectively, and could be used for future sample-size calculations. Pre-selecting patients most likely to benefit from the cardioprotective therapy (≤6 h, initial TIMI flow grade 0 or 1) reduced sample size by one-third. Other suggested recommendations for standardizing CMR in future RCTs included gadobutrol at 0.15 mmol/kg with late gadolinium enhancement at 15 min, manual or 6-SD threshold for MI quantification, performing acute CMR at 3 to 5 days and follow-up CMR at 6 months, and adequate reporting of the acquisition and analysis of CMR. There is significant heterogeneity in RCT design using CMR in patients with STEMI.
The authors provide recommendations for standardizing the assessment of MI size using CMR in future clinical cardioprotection RCTs. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Monodisperse Latex Reactor (MLR): A materials processing space shuttle mid-deck payload
NASA Technical Reports Server (NTRS)
Kornfeld, D. M.
1985-01-01
The monodisperse latex reactor experiment has flown five times on the space shuttle, with three more flights currently planned. The objective of this project is to manufacture, in the microgravity environment of space, large-particle-size monodisperse polystyrene latexes in particle sizes larger and more uniform than can be manufactured on Earth. Historically it has been extremely difficult, if not impossible, to manufacture in quantity very high quality monodisperse latexes on Earth in particle sizes much above several micrometers in diameter, due to buoyancy and sedimentation problems during the polymerization reaction. However, the MLR project has succeeded in manufacturing in microgravity monodisperse latex particles as large as 30 micrometers in diameter with a standard deviation of 1.4 percent. It is expected that 100 micrometer particles will have been produced by the completion of the three remaining flights. These tiny, highly uniform latex microspheres have become the first material to be commercially marketed that was manufactured in space.
Pore-scale modeling of saturated permeabilities in random sphere packings.
Pan, C; Hilpert, M; Miller, C T
2001-12-01
We use two pore-scale approaches, lattice-Boltzmann (LB) and pore-network modeling, to simulate single-phase flow in simulated sphere packings that vary in porosity and sphere-size distribution. For both modeling approaches, we determine the size of the representative elementary volume with respect to the permeability. Permeabilities obtained by LB modeling agree well with Rumpf and Gupte's experiments in sphere packings for small Reynolds numbers. The LB simulations agree well with the empirical Ergun equation for intermediate but not for small Reynolds numbers. We suggest a modified form of Ergun's equation to describe both low and intermediate Reynolds number flows. The pore-network simulations agree well with predictions from the effective-medium approximation but underestimate the permeability due to the simplified representation of the porous media. Based on LB simulations in packings with log-normal sphere-size distributions, we suggest a permeability relation with respect to the porosity, as well as the mean and standard deviation of the sphere diameter.
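For reference, the classical Ergun equation that the authors propose modifying combines a viscous (Blake-Kozeny) term, which dominates at low Reynolds number, with an inertial (Burke-Plummer) term. A minimal implementation, with water properties assumed as illustrative defaults:

```python
def ergun_pressure_gradient(u, d, eps, mu=1.0e-3, rho=1000.0):
    """Classical Ergun equation for the pressure gradient (Pa/m) of flow
    through a packing of uniform spheres.

    u: superficial velocity (m/s), d: sphere diameter (m), eps: porosity,
    mu: dynamic viscosity (Pa*s), rho: fluid density (kg/m^3).
    """
    viscous = 150.0 * mu * (1 - eps) ** 2 / (eps ** 3 * d ** 2) * u
    inertial = 1.75 * rho * (1 - eps) / (eps ** 3 * d) * u ** 2
    return viscous + inertial

# At very low velocity the viscous term dominates, recovering Darcy flow
# with permeability k = eps^3 * d^2 / (150 * (1 - eps)^2).
grad = ergun_pressure_gradient(u=1e-6, d=1e-3, eps=0.4)
k_darcy = 1.0e-3 * 1e-6 / grad   # k = mu * u / (dP/L)
```

Comparing `k_darcy` against the Blake-Kozeny permeability shows why Ergun reduces to Darcy's law at small Reynolds numbers, the regime where the abstract reports the unmodified equation performs poorly.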
Kakimoto, Naoya; Chindasombatjaroen, Jira; Tomita, Seiki; Shimamoto, Hiroaki; Uchiyama, Yuka; Hasegawa, Yoko; Kishino, Mitsunobu; Murakami, Shumei; Furukawa, Souhei
2013-01-01
The purpose of this study was to investigate the usefulness of computerized tomography (CT), particularly contrast-enhanced CT, in differentiation of jaw cysts and cystic-appearing tumors. We retrospectively analyzed contrast-enhanced CT images of 90 patients with odontogenic jaw cysts or cystic-appearing tumors. The lesion size and CT values were measured and the short axis to long axis (S/L) ratio, contrast enhancement (CE) ratio, and standard deviation ratio were calculated. The lesion size and the S/L ratio of keratocystic odontogenic tumors were significantly different from those of radicular cysts and follicular cysts. There were no significant differences in the CE ratio among the lesions. Multidetector CT provided diagnostic information about the size of odontogenic cysts and cystic-appearing tumors of the jaws that was related to the lesion type, but showed no relation between CE ratio and the type of these lesions. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Afifah, M. R. Nurul; Aziz, A. Che; Roslan, M. Kamal
2015-09-01
Sediment samples were collected from the shallow marine area off Kuala Besar, Kelantan, outwards to the basin floor of the South China Sea, which consisted of quaternary bottom sediments. Sixty-five samples were analysed for their grain size distribution and statistical relationships. Basic statistical parameters, namely mean, standard deviation, skewness, and kurtosis, were calculated and used to differentiate the depositional environment of the sediments and to derive the uniformity of depositional environment, whether from the beach or river environment. The sediments of all areas varied in their sorting, ranging from very well sorted to poorly sorted, strongly negatively skewed to strongly positively skewed, and extremely leptokurtic to very platykurtic in nature. Bivariate plots between the grain-size parameters were then interpreted, and the Coarsest-Median (CM) pattern showed a trend suggesting relationships between sediments influenced by three ongoing hydrodynamic factors, namely turbidity current, littoral drift, and wave dynamics, which functioned to control the sediment distribution pattern in various ways.
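The four statistics used here (mean, standard deviation or sorting, skewness, and kurtosis on the phi scale) can be computed directly from weighted moments. A compact sketch, illustrative only; published grain-size studies often use the graphical Folk and Ward formulae rather than raw moments:

```python
import numpy as np

def grain_size_moments(phi, weights=None):
    """Moment statistics of a grain-size distribution on the phi scale:
    mean, standard deviation (sorting), skewness, and kurtosis."""
    phi = np.asarray(phi, float)
    w = np.ones_like(phi) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    mean = np.sum(w * phi)
    sd = np.sqrt(np.sum(w * (phi - mean) ** 2))
    skew = np.sum(w * (phi - mean) ** 3) / sd ** 3
    kurt = np.sum(w * (phi - mean) ** 4) / sd ** 4
    return mean, sd, skew, kurt

# Symmetric normal-like sample: skewness ~ 0 and kurtosis ~ 3 expected.
rng = np.random.default_rng(2)
m, s, sk, k = grain_size_moments(rng.normal(2.0, 0.5, size=20000))
```

Positive skewness then indicates a fine-grained tail and kurtosis above ~3 a leptokurtic (peaked) distribution, the classifications used in the abstract.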
NASA Astrophysics Data System (ADS)
Wu, Jing; Fang, Jinghuai; Cheng, Mingfei; Gong, Xiao
2016-12-01
In our work, large-scale silver NPs (nanoparticles) are successfully synthesized on zinc foils with controllable size by regulating the temperature of the displacement reaction. Our results show that when the temperature is 70 °C, the average size of silver NPs is approximately 88 nm in diameter, and they exhibit the strongest SERS activity. The gap between nanoparticles is simultaneously regulated to be as narrow as possible, which produces abundant "hot spots" and nanogaps. Crystal violet (CV) was used as the probe molecule, and the SERS signals show that the values of relative standard deviation in the intensity of the main vibration modes are less than 10%, demonstrating excellent reproducibility of the silver NPs. Furthermore, a high surface-average enhancement factor of 3.86 × 10^7 is achieved even when the concentration of CV is 10^-7 M, which is sufficient for single-molecule detection. We believe that this low-cost and rapid route could find wide application in chemical synthesis.
Multi-Parameter Scattering Sensor and Methods
NASA Technical Reports Server (NTRS)
Greenberg, Paul S. (Inventor); Fischer, David G. (Inventor)
2016-01-01
Methods, detectors and systems detect particles and/or measure particle properties. According to one embodiment, a detector for detecting particles comprises: a sensor for receiving radiation scattered by an ensemble of particles; and a processor for determining a physical parameter for the detector, or an optimal detection angle or a bound for an optimal detection angle, for measuring at least one moment or integrated moment of the ensemble of particles, the physical parameter, or detection angle, or detection angle bound being determined based on one or more of properties (a) and/or (b) and/or (c) and/or (d) or ranges for one or more of properties (a) and/or (b) and/or (c) and/or (d), wherein (a)-(d) are the following: (a) is a wavelength of light incident on the particles, (b) is a count median diameter or other characteristic size parameter of the particle size distribution, (c) is a standard deviation or other characteristic width parameter of the particle size distribution, and (d) is a refractive index of particles.
Offshore fatigue design turbulence
NASA Astrophysics Data System (ADS)
Larsen, Gunner C.
2001-07-01
Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
Estimating extreme stream temperatures by the standard deviate method
NASA Astrophysics Data System (ADS)
Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz
2006-02-01
It is now widely accepted that global climate warming is taking place on the Earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor K_E (the standard deviate). Various K_E values were explored; values of K_E larger than 8 were found physically unreasonable. It is concluded that the value of K_E should be in the range from 7 to 8. A unit error in estimating K_E translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dK_E = 1.0 (range 0.5-1.5) and an error in projected high air temperature dT_a = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dT_s = 0.8 °C.
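The standard deviate method reduces to a one-line formula, T_extreme = mean of the partial maximum series + K_E times its standard deviation, with K_E recommended in the 7-8 range. Sketched below with illustrative numbers (not data from the study):

```python
def extreme_stream_temperature(mean_tmax, sd_tmax, k_e=7.5):
    """Standard deviate estimate of the extreme stream temperature:
    T_extreme = mean(partial maximum series) + K_E * SD(series),
    with K_E recommended in the range 7 to 8."""
    return mean_tmax + k_e * sd_tmax

# Illustrative series statistics: mean 25.0 degC, SD 0.5 degC. With an SD
# near 0.5 degC, a unit change in K_E shifts the estimate by ~0.5 degC,
# matching the error sensitivity quoted in the abstract.
t_lo = extreme_stream_temperature(25.0, 0.5, k_e=7.0)
t_hi = extreme_stream_temperature(25.0, 0.5, k_e=8.0)
```

The spread between `t_lo` and `t_hi` directly illustrates how the uncertainty in K_E propagates into the projected extreme temperature.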
NASA Technical Reports Server (NTRS)
Rhoads, James E.; Rigby, Jane Rebecca; Malhotra, Sangeeta; Allam, Sahar; Carilli, Chris; Combes, Francoise; Finkelstein, Keely; Finkelstein, Steven; Frye, Brenda; Gerin, Maryvonne;
2014-01-01
We report on two regularly rotating galaxies at redshift z ≈ 2, using high-resolution spectra of the bright [C II] 158 micrometer emission line from the HIFI instrument on the Herschel Space Observatory. Both SDSS090122.37+181432.3 ("S0901") and SDSSJ120602.09+514229.5 ("the Clone") are strongly lensed and show the double-horned line profile that is typical of rotating gas disks. Using a parametric disk model to fit the emission line profiles, we find that S0901 has a rotation speed of v sin(i) ≈ 120 ± 7 km s^-1 and a gas velocity dispersion of σ_g < 23 km s^-1 (1σ). The best-fitting model for the Clone is a rotationally supported disk having v sin(i) ≈ 79 ± 11 km s^-1 and σ_g ≲ 4 km s^-1 (1σ). However, the Clone is also consistent with a family of dispersion-dominated models having σ_g = 92 ± 20 km s^-1. Our results showcase the potential of the [C II] line as a kinematic probe of high-redshift galaxy dynamics: [C II] is bright, accessible to heterodyne receivers with exquisite velocity resolution, and traces dense star-forming interstellar gas. Future [C II] line observations with ALMA would offer the further advantage of spatial resolution, allowing a clearer separation between rotation and velocity dispersion.
Lee, Ju-Yeun; Bae, Kunho; Park, Kyung-Ah; Lyu, In Jeong; Oh, Sei Yeul
2016-01-01
The aim of this study was to investigate extraocular muscle (EOM) volume and cross-sectional area using computed tomography (CT), and to determine the relationship between EOM size and the vertical angle of deviation in thyroid eye disease (TED). Twenty-nine TED patients (58 orbits) with vertical strabismus were enrolled in the study. All patients underwent complete ophthalmic examination including prism, alternate cover, and Krimsky tests. Orbital CT scans were also performed on each patient. Digital image analysis was used to quantify superior rectus (SR) and inferior rectus (IR) muscle cross-sectional areas and volumes. Measurements were compared with those of controls. The correlation between muscle size and degree of vertical angle deviation was evaluated. The mean vertical angle of deviation was 26.2 ± 4.1 prism diopters. The TED group had a greater maximum cross-sectional area and EOM volume in the SR and IR than the control group (all p<0.001). Area and volume of the IR were correlated with the angle of deviation, but the SR alone did not show a significant correlation. The maximum cross-sectional area and volume of [Right IR + Left SR − Right SR − Left IR] was strongly correlated with the vertical angle of deviation (P<0.001). Quantitative CT of the orbit with evaluation of the area and volume of EOMs may be helpful in anticipating and monitoring vertical strabismus in TED patients. PMID:26820406
NASA Astrophysics Data System (ADS)
Stier, P.; Schutgens, N. A. J.; Bian, H.; Boucher, O.; Chin, M.; Ghan, S.; Huneeus, N.; Kinne, S.; Lin, G.; Myhre, G.; Penner, J. E.; Randles, C.; Samset, B.; Schulz, M.; Yu, H.; Zhou, C.
2012-09-01
Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in the model components relevant for forcing calculations, and the associated "host-model uncertainties" are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties in aerosol forcing experiments through prescription of identical aerosol radiative properties in nine participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with a globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is -4.51 W m-2 and the inter-model standard deviation is 0.70 W m-2, corresponding to a relative standard deviation of 15%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.26 W m-2, and the standard deviation increases to 1.21 W m-2, corresponding to a significant relative standard deviation of 96%. However, the top-of-atmosphere forcing variability owing to absorption is low, with relative standard deviations of 9% (clear-sky) and 12% (all-sky). Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host model uncertainties could explain about half of the overall sulfate forcing diversity of 0.13 W m-2 in the AeroCom Direct Radiative Effect experiment. Host model errors in aerosol radiative forcing are largest in regions with uncertain host model components, such as stratocumulus cloud decks, or with poorly constrained surface albedos, such as sea ice.
Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that requires further attention.
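The relative standard deviations quoted above are simply the inter-model standard deviation divided by the magnitude of the mean forcing; a quick check reproduces the quoted percentages:

```python
def relative_sd_percent(sd, mean):
    """Relative standard deviation in percent: 100 * SD / |mean|."""
    return 100.0 * sd / abs(mean)

scattering = relative_sd_percent(0.70, -4.51)  # purely scattering case, ~15%
absorbing = relative_sd_percent(1.21, 1.26)    # partially absorbing case, ~96%
```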
Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data
NASA Astrophysics Data System (ADS)
Shulenin, V. P.
2016-10-01
Properties of robust estimates of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimate of the average Gini differences have asymptotically normal distributions and bounded influence functions, are B-robust estimates, and hence, unlike the standard deviation, are protected from the presence of outliers in the sample. Results of a comparison of scale parameter estimates are given for a Gaussian model with contamination. An adaptive variant of the modified estimate of the average Gini differences is considered.
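A minimal sketch of the two robust scale estimators discussed, with scaling constants chosen so each estimates σ for Gaussian data. The paper's "modified" Gini estimator differs in detail; these are the textbook forms.

```python
import math
import statistics

def mad_scale(xs):
    """Median absolute deviation, scaled by 1.4826 so it is a consistent
    estimator of the standard deviation under a Gaussian model."""
    med = statistics.median(xs)
    return 1.4826 * statistics.median(abs(x - med) for x in xs)

def gini_scale(xs):
    """Gini mean difference (mean of |x_i - x_j| over all pairs), scaled
    by sqrt(pi)/2 so it also estimates the Gaussian standard deviation."""
    n = len(xs)
    pairs = n * (n - 1) / 2
    gmd = sum(abs(xs[i] - xs[j]) for i in range(n) for j in range(i + 1, n)) / pairs
    return gmd * math.sqrt(math.pi) / 2.0

# A single outlier inflates the classical SD but barely moves the MAD.
data = [9.8, 9.9, 10.0, 10.1, 10.2, 100.0]
```

On this contaminated sample the MAD stays near the clean-data scale, the classical standard deviation explodes, and the (unmodified) Gini estimator sits in between, illustrating the bounded-influence property the abstract describes.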
40 CFR 63.7751 - What reports must I submit and when?
Code of Federal Regulations, 2010 CFR
2010-07-01
... deviations from any emissions limitations (including operating limit), work practice standards, or operation and maintenance requirements, a statement that there were no deviations from the emissions limitations...-of-control during the reporting period. (7) For each deviation from an emissions limitation...
Mostafaei, A; Sedgipour, M R; Sadeghi-Bazargani, H
2009-12-01
The purpose of this study was to compare changes in the visual field (VF) during laser in situ keratomileusis (LASIK) versus photorefractive keratectomy (PRK). This randomized, double-blind study involved 54 eyes of 27 myopic patients, with LASIK and PRK performed on contralateral eyes of each patient. Using the Humphrey 30-2 SITA standard, the mean defect (MD) and pattern standard deviation (PSD) were evaluated preoperatively and three months after surgery. At the same examinations, optical zone size and pupillary and corneal diameters were also evaluated. There was no clinically significant difference in PSD and MD measurements between eyes treated with LASIK or PRK in any zone pre- or postoperatively. VF may not be affected by corneal changes induced by LASIK or PRK three months after surgery.
Vocal singing by prelingually-deafened children with cochlear implants.
Xu, Li; Zhou, Ning; Chen, Xiuwu; Li, Yongxin; Schultz, Heather M; Zhao, Xiaoyan; Han, Demin
2009-09-01
The coarse pitch information delivered by cochlear implants might hinder the development of singing in prelingually-deafened pediatric users. In the present study, seven prelingually-deafened children with cochlear implants (5.4-12.3 years old) each sang the song most familiar to him or her. The control group consisted of 14 normal-hearing children (4.1-8.0 years old). The fundamental frequency (F0) of each note in the recorded songs was extracted. The following five metrics were computed against the reference music scores: (1) F0 contour direction of the adjacent notes, (2) F0 compression ratio of the entire song, (3) mean deviation of the normalized F0 across the notes, (4) mean deviation of the pitch intervals, and (5) standard deviation of the note duration differences. Children with cochlear implants showed significantly poorer performance in the pitch-based assessments than the normal-hearing children. No significant differences were seen between the two groups in the rhythm-based measure. Prelingually-deafened children with cochlear implants have significant deficits in singing due to their inability to manipulate pitch in the correct directions and to produce accurate pitch height. Future studies with a larger sample size are warranted in order to account for the large variability in singing performance.
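Two of the five metrics can be sketched in a few lines. The exact definitions used in the study may differ; these implementations (and the function names) are illustrative assumptions:

```python
def contour_direction_score(sung_f0, score_f0):
    """Metric 1 (sketch): fraction of adjacent-note pairs whose sung F0
    moves in the same direction (up or down) as the reference score."""
    hits, pairs = 0, 0
    for i in range(len(score_f0) - 1):
        d_score = score_f0[i + 1] - score_f0[i]
        d_sung = sung_f0[i + 1] - sung_f0[i]
        if d_score != 0:
            pairs += 1
            hits += (d_score > 0) == (d_sung > 0)
    return hits / pairs

def f0_compression_ratio(sung_f0, score_f0):
    """Metric 2 (sketch): sung F0 range divided by the score's F0 range;
    values below 1 indicate a compressed pitch range."""
    return (max(sung_f0) - min(sung_f0)) / (max(score_f0) - min(score_f0))
```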
NASA Astrophysics Data System (ADS)
Taheriniya, Shabnam; Parhizgar, Sara Sadat; Sari, Amir Hossein
2018-06-01
To study the alumina template pore size distribution as a function of the Al thin film grain size distribution, porous alumina templates were prepared by anodizing sputtered aluminum thin films. To control the grain size, the aluminum samples were sputtered at rates of 0.5, 1, and 2 Å/s with the substrate temperature at 25, 75, or 125 °C. All samples were anodized for 120 s in a 1 M sulfuric acid solution kept at 1 °C while a 15 V potential was applied. The standard deviation for samples deposited at room temperature at different rates is roughly 2 nm in both the thin film and the porous template, but it rises to approximately 4 nm with increasing substrate temperature. Samples with average grain sizes of 13, 14, 18.5, and 21 nm produce alumina templates with average pore sizes of 8.5, 10, 15, and 16 nm, respectively, which shows that the average grain size limits the average pore diameter in the resulting template. Lateral correlation length and grain boundary effects also affect the pore formation process and pore size distribution by limiting the initial current density.
Yildiz, Elvin H; Fan, Vincent C; Banday, Hina; Ramanathan, Lakshmi V; Bitra, Ratna K; Garry, Eileen; Asbell, Penny A
2009-07-01
To evaluate the repeatability and accuracy of a new tear osmometer that measures the osmolality of 0.5-microL (500-nanoliter) samples. Four standardized solutions were tested with 0.5-microL samples for repeatability of measurements and comparability to the standardized technique. Two known standard salt solutions (290 mOsm/kg H2O, 304 mOsm/kg H2O), a normal artificial tear matrix sample (306 mOsm/kg H2O), and an abnormal artificial tear matrix sample (336 mOsm/kg H2O) were repeatedly tested (n = 20 each) for osmolality with use of the Advanced Instruments Model 3100 Tear Osmometer (0.5-microL sample size) and the FDA-approved Advanced Instruments Model 3D2 Clinical Osmometer (250-microL sample size). The respective precision data (mean and standard deviation) were: 291.8 +/- 4.4, 305.6 +/- 2.4, 305.1 +/- 2.3, and 336.4 +/- 2.2 mOsm/kg H2O. The percent recoveries for the 290 mOsm/kg H2O standard solution, the 304 mOsm/kg H2O reference solution, the normal value-assigned 306 mOsm/kg H2O sample, and the abnormal value-assigned 336 mOsm/kg H2O sample were 100.3%, 100.2%, 99.8%, and 100.3%, respectively. The repeatability data are in accordance with data obtained on clinical osmometers with use of larger sample sizes. All 4 samples tested on the tear osmometer have osmolality values that correlate well with the clinical instrument method. The tear osmometer is a suitable instrument for testing the osmolality of microliter-sized samples, such as tears, and therefore may be useful in diagnosing, monitoring, and classifying tear abnormalities such as the severity of dry eye disease.
Calibration of helical tomotherapy machine using EPR/alanine dosimetry.
Perichon, Nicolas; Garcia, Tristan; François, Pascal; Lourenço, Valérie; Lesven, Caroline; Bordy, Jean-Marc
2011-03-01
Current codes of practice for clinical reference dosimetry of high-energy photon beams in conventional radiotherapy recommend using a 10 x 10 cm2 square field, with the detector at a reference depth of 10 cm in water and 100 cm source-to-surface distance (SSD) (AAPM TG-51) or 100 cm source-to-axis distance (SAD) (IAEA TRS-398). However, the maximum field size of a helical tomotherapy (HT) machine is 40 x 5 cm2, defined at 85 cm SAD. These nonstandard conditions prevent a direct implementation of these protocols. The purpose of this study is twofold: to check the absorbed dose to water and dose rate calibration of a tomotherapy unit, and to check the accuracy of the tomotherapy treatment planning system (TPS) calculations for a specific test case. Both rely on electron paramagnetic resonance (EPR) with alanine as a transfer dosimeter between the Laboratoire National Henri Becquerel (LNHB) 60Co gamma-ray reference beam and the Institut Curie's HT beam. Irradiations performed in the LNHB reference 60Co gamma-ray beam allowed setting up the calibration method, which was then implemented and tested at the LNHB 6 MV linac x-ray beam, resulting in a deviation of 1.6% (at a 1% standard uncertainty) relative to the reference value determined with the standard IAEA TRS-398 protocol. The HT beam dose rate estimation shows a difference of 2% from the value stated by the manufacturer, at a 2% standard uncertainty. A 4% deviation between the measured dose and the tomotherapy TPS calculation was found, originating from an inadequate representation of the phantom CT-scan values and, consequently, of the mass densities within the phantom.
Once the mass densities were corrected, with Monte Carlo N-Particle simulations used to validate the process, the difference between corrected TPS calculations and alanine-measured dose values was found to be around 2% (with 2% standard uncertainty on TPS doses and 1.5% standard uncertainty on EPR measurements). Beam dose rate estimation results were found to be in good agreement with the reference value given by the manufacturer at 2% standard uncertainty. Moreover, the dose determination method was set up with a deviation of around 2% (at a 2% standard uncertainty).
Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets
ERIC Educational Resources Information Center
Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad
2017-01-01
Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…
Screen Twice, Cut Once: Assessing the Predictive Validity of Teacher Selection Tools
ERIC Educational Resources Information Center
Goldhaber, Dan; Grout, Cyrus; Huntington-Klein, Nick
2015-01-01
It is well documented that teachers can have profound effects on student outcomes. Empirical estimates find that a one standard deviation increase in teacher quality raises student test achievement by 10 to 25 percent of a standard deviation. More recent evidence shows that the effectiveness of teachers can affect long-term student outcomes, such…
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within-subject standard deviation.…
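The factor 2.77 comes from 1.96·√2: the difference between two measurements, each with within-subject standard deviation s, has standard deviation s·√2, and 95% of Gaussian differences fall within ±1.96 of that. A short sketch:

```python
import math

def repeatability(within_subject_sd):
    """Repeatability coefficient: 95% of absolute differences between two
    repeated measurements on the same person fall below this value."""
    return 2.77 * within_subject_sd

# 2.77 is just 1.96 * sqrt(2), rounded to two decimals.
assert abs(1.96 * math.sqrt(2.0) - 2.77) < 0.01
```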
Parabolic trough receiver heat loss and optical efficiency round robin 2015/2016
NASA Astrophysics Data System (ADS)
Pernpeintner, Johannes; Schiricke, Björn; Sallaberry, Fabienne; de Jalón, Alberto García; López-Martín, Rafael; Valenzuela, Loreto; de Luca, Antonio; Georg, Andreas
2017-06-01
A round robin for parabolic trough receiver heat loss and optical efficiency in the laboratory was performed between five institutions using five receivers in 2015/2016. Heat loss testing was performed at three cartridge heater test benches and one Joule heating test bench in the temperature range between 100 °C and 550 °C. Optical efficiency testing was performed with two spectrometric test benches and one calorimetric test bench. Heat loss results showed standard deviations on the order of 6% to 12% for most temperatures and receivers, and a standard deviation of 17% for one receiver at 100 °C. Optical efficiency results, normalized across laboratories, showed standard deviations of 0.3% to 1.3% depending on the receiver.
Benign positional vertigo and hyperuricaemia.
Adam, A M
2005-07-01
To find out if there is any association between serum uric acid level and positional vertigo. A prospective, case-controlled study at a private neurological clinic of all patients presenting with vertigo. Ninety patients were seen in this period, 78 males and 19 females. Mean age was 47 +/- 3 years (at the 95% confidence level) with a standard deviation of 12.4. Their mean uric acid level was 442 +/- 16 μmol/l (at the 95% confidence level) with a standard deviation of 79.6 μmol/l, as compared to 291 +/- 17 μmol/l (at the 95% confidence level) with a standard deviation of 79.7 μmol/l in the control group. The P-value was less than 0.001. There is a significant association between high uric acid and benign positional vertigo.
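The ±3-year figure is consistent with a large-sample 95% confidence interval computed from the reported standard deviation, 1.96 × SD/√n. A quick check, assuming n = 90 as stated:

```python
import math

def ci95_halfwidth(sd, n):
    """Half-width of a large-sample 95% confidence interval for a mean."""
    return 1.96 * sd / math.sqrt(n)

age_halfwidth = ci95_halfwidth(12.4, 90)   # ~2.6 years, quoted as +/- 3
uric_halfwidth = ci95_halfwidth(79.6, 90)  # ~16.4 umol/l, quoted as +/- 16
```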
NASA Technical Reports Server (NTRS)
Clark, P. E.; Andre, C. G.; Adler, I.; Weidner, J.; Podwysocki, M.
1976-01-01
The positive correlation between Al/Si X-ray fluorescence intensity ratios determined during the Apollo 15 lunar mission and a broad-spectrum visible albedo of the moon is quantitatively established. Linear regression analysis performed on 246 one-degree geographic cells of X-ray fluorescence intensity and visible albedo data points produced a statistically significant correlation coefficient of 0.78. Three distinct distributions of data were identified: (1) within one standard deviation of the regression line, (2) greater than one standard deviation below the line, and (3) greater than one standard deviation above the line. The latter two distributions were found to occupy distinct geographic areas in the Palus Somni region.
Screening Samples for Arsenic by Inductively Coupled Plasma-Mass Spectrometry for Treaty Samples
2014-02-01
[Tabulated replicate results: mean recoveries of roughly 105% to 119%, with standard deviations and relative standard deviations (RSD) ranging from about 2.6% to 15.9%.]
Kurland, Brenda F; Muzi, Mark; Peterson, Lanell M; Doot, Robert K; Wangerin, Kristen A; Mankoff, David A; Linden, Hannah M; Kinahan, Paul E
2016-02-01
Uptake time (interval between tracer injection and image acquisition) affects the SUV measured for tumors in (18)F-FDG PET images. With dissimilar uptake times, changes in tumor SUVs will be under- or overestimated. This study examined the influence of uptake time on tumor response assessment using a virtual clinical trials approach. Tumor kinetic parameters were estimated from dynamic (18)F-FDG PET scans of breast cancer patients and used to simulate time-activity curves for 45-120 min after injection. Five-minute uptake time frames followed 4 scenarios: the first was a standardized static uptake time (the SUV from 60 to 65 min was selected for all scans), the second was uptake times sampled from an academic PET facility with strict adherence to standardization protocols, the third was a distribution similar to scenario 2 but with greater deviation from standards, and the fourth was a mixture of hurried scans (45- to 65-min start of image acquisition) and frequent delays (58- to 115-min uptake time). The proportion of out-of-range scans (<50 or >70 min, or >15-min difference between paired scans) was 0%, 20%, 44%, and 64% for scenarios 1, 2, 3, and 4, respectively. A published SUV correction based on local linearity of uptake-time dependence was applied in a separate analysis. Influence of uptake-time variation was assessed as sensitivity for detecting response (probability of observing a decrease of ≥30% in (18)F-FDG PET SUV given a true decrease of 40%) and specificity (probability of observing an absolute change of <30% given no true change). Sensitivity was 96% for scenario 1, and ranged from 73% for scenario 4 (95% confidence interval, 70%-76%) to 92% (90%-93%) for scenario 2. Specificity for all scenarios was at least 91%. Single-arm phase II trials required an 8%-115% greater sample size for scenarios 2-4 than for scenario 1. 
If uptake time is known, SUV correction methods may raise sensitivity to 87%-95% and reduce the sample size increase to less than 27%. Uptake-time deviations from standardized protocols occur frequently, potentially decreasing the performance of (18)F-FDG PET response biomarkers. Correcting SUV for uptake time improves sensitivity, but algorithm refinement is needed. Stricter uptake-time control and effective correction algorithms could improve power and decrease costs for clinical trials using (18)F-FDG PET endpoints. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
A deviation display method for visualising data in mobile gamma-ray spectrometry.
Kock, Peder; Finck, Robert R; Nilsson, Jonas M C; Ostlund, Karl; Samuelsson, Christer
2010-09-01
A real time visualisation method, to be used in mobile gamma-spectrometric search operations using standard detector systems, is presented. The new method, called the deviation display, uses a modified waterfall display to present relative changes in spectral data over energy and time. Using unshielded (137)Cs and (241)Am point sources and different natural background environments, the behaviour of the deviation displays is demonstrated and analysed for two standard detector types (NaI(Tl) and HPGe). The deviation display enhances positive significant changes while suppressing natural background fluctuations. After an initialization time of about 10 min, this technique leads to a homogeneous display dominated by the background colour, where even small changes in spectral data are easy to discover. As this paper shows, the deviation display method works well for all tested gamma energies and natural background radiation levels, and with both tested detector systems.
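The core idea, the relative change of each energy channel against a running background estimate, can be sketched as follows. This is a simplification of the published method; the Poisson scaling is an assumption of this sketch.

```python
def deviation_row(counts, background):
    """One row of a deviation display: the latest spectrum's deviation
    from the running background estimate, channel by channel, in Poisson
    standard deviations.  Background fluctuations hover near zero, so a
    genuine source produces a clearly positive band."""
    row = []
    for c, b in zip(counts, background):
        sigma = max(b, 1.0) ** 0.5  # Poisson sigma, floored for empty channels
        row.append((c - b) / sigma)
    return row

# A channel matching its background scores 0; a 5-sigma excess scores 5.
example = deviation_row([100, 150], [100.0, 100.0])
```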
The Effect of Viewing Eccentricity on Enumeration
Palomares, Melanie; Smith, Paul R.; Pitts, Carole Holley; Carter, Breana M.
2011-01-01
Visual acuity and contrast sensitivity progressively diminish with increasing viewing eccentricity. Here we evaluated how visual enumeration is affected by visual eccentricity, and whether subitizing capacity, the accurate enumeration of a small number (∼3) of items, decreases with more eccentric viewing. Participants enumerated gratings whose (1) stimulus size was constant across eccentricity, and (2) whose stimulus size scaled by a cortical magnification factor across eccentricity. While we found that enumeration accuracy and precision decreased with increasing eccentricity, cortical magnification scaling of size neutralized the deleterious effects of increasing eccentricity. We found that size scaling did not affect subitizing capacities, which were nearly constant across all eccentricities. We also found that size scaling modulated the variation coefficients, a normalized metric of enumeration precision, defined as the standard deviation divided by the mean response. Our results show that the inaccuracy and imprecision associated with increasing viewing eccentricity is due to limitations in spatial resolution. Moreover, our results also support the notion that the precise number system is restricted to small numerosities (represented by the subitizing limit), while the approximate number system extends across both small and large numerosities (indexed by variation coefficients) at large eccentricities. PMID:21695212
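The variation coefficient used here is defined directly in the abstract, the standard deviation divided by the mean response:

```python
import statistics

def variation_coefficient(responses):
    """Normalized precision metric: SD of the responses over their mean."""
    return statistics.stdev(responses) / statistics.mean(responses)
```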
Manikandan, A.; Biplab, Sarkar; David, Perianayagam A.; Holla, R.; Vivek, T. R.; Sujatha, N.
2011-01-01
For high dose rate (HDR) brachytherapy, independent treatment verification is needed to ensure that the treatment is performed as per prescription. This study demonstrates dosimetric quality assurance of the HDR brachytherapy using a commercially available two-dimensional ion chamber array called IMatriXX, which has a detector separation of 0.7619 cm. The reference isodose length, step size, and source dwell positional accuracy were verified. A total of 24 dwell positions, which were verified for positional accuracy gave a total error (systematic and random) of –0.45 mm, with a standard deviation of 1.01 mm and maximum error of 1.8 mm. Using a step size of 5 mm, reference isodose length (the length of 100% isodose line) was verified for single and multiple catheters of same and different source loadings. An error ≤1 mm was measured in 57% of tests analyzed. Step size verification for 2, 3, 4, and 5 cm was performed and 70% of the step size errors were below 1 mm, with maximum of 1.2 mm. The step size ≤1 cm could not be verified by the IMatriXX as it could not resolve the peaks in dose profile. PMID:21897562
Castro-Sánchez, Adelaida María; Matarán-Peñarrocha, Guillermo A; Sánchez-Labraca, Nuria; Quesada-Rubio, José Manuel; Granero-Molina, José; Moreno-Lorenzo, Carmen
2011-01-01
Fibromyalgia is a prevalent musculoskeletal disorder associated with widespread mechanical tenderness, fatigue, non-refreshing sleep, depressed mood and pervasive dysfunction of the autonomic nervous system: tachycardia, postural intolerance, Raynaud's phenomenon and diarrhoea. To determine the effects of craniosacral therapy on sensitive tender points and heart rate variability in patients with fibromyalgia. A randomized controlled trial. Ninety-two patients with fibromyalgia were randomly assigned to an intervention group or placebo group. Patients received treatments for 20 weeks. The intervention group underwent a craniosacral therapy protocol and the placebo group received sham treatment with disconnected magnetotherapy equipment. Pain intensity levels were determined by evaluating tender points, and heart rate variability was recorded by 24-hour Holter monitoring. After 20 weeks of treatment, the intervention group showed significant reduction in pain at 13 of the 18 tender points (P < 0.05). Significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement versus baseline values were observed in the intervention group but not in the placebo group. At two months and one year post therapy, the intervention group showed significant differences versus baseline in tender points at left occiput, left-side lower cervical, left epicondyle and left greater trochanter and significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement. Craniosacral therapy improved medium-term pain symptoms in patients with fibromyalgia.
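The two heart rate variability measures described, the "temporal standard deviation of RR segments" and its root-mean-square counterpart, read like the standard SDNN and RMSSD statistics; under that assumption they can be computed as:

```python
import math
import statistics

def sdnn(rr_ms):
    """Standard deviation of the RR intervals (ms), commonly called SDNN."""
    return statistics.stdev(rr_ms)

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms),
    commonly called RMSSD."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```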
Code of Federal Regulations, 2010 CFR
2010-07-01
... which are distinct from the standard deviation process and specific to the requirements of the Federal... agency request a deviation from the provisions of this part? 102-38.30 Section 102-38.30 Public Contracts... executive agency request a deviation from the provisions of this part? Refer to §§ 102-2.60 through 102-2...
Characterizing the forest fragmentation of Canada's national parks.
Soverel, Nicholas O; Coops, Nicholas C; White, Joanne C; Wulder, Michael A
2010-05-01
Characterizing the amount and configuration of forests can provide insights into habitat quality, biodiversity, and land use. The establishment of protected areas can be a mechanism for maintaining large, contiguous areas of forests, and the loss and fragmentation of forest habitat is a potential threat to Canada's national park system. Using the Earth Observation for Sustainable Development of Forests (EOSD) land cover product (EOSD LC 2000), we characterize the circa 2000 forest patterns in 26 of Canada's national parks and compare these to forest patterns in the ecological units surrounding these parks, referred to as the greater park ecosystem (GPE). Five landscape pattern metrics were analyzed: number of forest patches, mean forest patch size (hectare), standard deviation of forest patch size (hectare), mean forest patch perimeter-to-area ratio (meters per hectare), and edge density of forest patches (meters per hectare). An assumption is often made that forests within park boundaries are less fragmented than the surrounding GPE, as indicated by fewer forest patches, a larger mean forest patch size, less variability in forest patch size, a lower perimeter-to-area ratio, and lower forest edge density. Of the 26 national parks we analyzed, 58% had significantly fewer patches, 46% had a significantly larger mean forest patch size (23% were not significantly different), and 46% had a significantly smaller standard deviation of forest patch size (31% were not significantly different), relative to their GPEs. For forest patch perimeter-to-area ratio and forest edge density, equal proportions of parks had values that were significantly larger or smaller than their respective GPEs and no clear trend emerged. 
In summary, all the national parks we analyzed, with the exception of the Georgian Bay Islands, were found to be significantly different from their corresponding GPE for at least one of the five metrics assessed, and 50% of the 26 parks were significantly different from their respective GPEs for all of the metrics assessed. The EOSD LC 2000 provides a heretofore unavailable dataset for characterizing broad trends in forest fragmentation in Canada's national parks and in their surrounding GPEs. The interpretation of forest fragmentation metrics must be guided by the underlying land cover context, as many forested ecosystems in Canada are naturally fragmented due to wetlands and topography. Furthermore, interpretation must also consider the management context, as some parks are designed to preserve fragmented habitats. An analysis of forest pattern such as that described herein provides a baseline, from which changes in fragmentation patterns over time could be monitored, enabled by earth observation data.
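Most of the patch metrics above can be computed from a binary forest raster with a single connected-components pass. This sketch uses 4-connectivity and unit cells; the EOSD analysis used dedicated landscape-metric tooling, so treat this as an illustration of the definitions only.

```python
from collections import deque
import statistics

def forest_patch_metrics(grid, cell_size=1.0):
    """Patch metrics for a binary forest map (1 = forest), 4-connectivity.

    Returns (number of patches, mean patch area, SD of patch area,
    total forest edge length)."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    areas, edge = [], 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                area = 0                      # flood-fill one patch
                q = deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and grid[ny][nx]:
                            if not seen[ny][nx]:
                                seen[ny][nx] = True
                                q.append((ny, nx))
                        else:
                            edge += 1         # forest/non-forest boundary
                areas.append(area * cell_size * cell_size)
    n = len(areas)
    mean = sum(areas) / n if n else 0.0
    sd = statistics.stdev(areas) if n > 1 else 0.0
    return n, mean, sd, edge * cell_size
```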
López-Pina, José Antonio; Sánchez-Meca, Julio; López-López, José Antonio; Marín-Martínez, Fulgencio; Núñez-Núñez, Rosa Ma; Rosa-Alcázar, Ana I; Gómez-Conesa, Antonia; Ferrer-Requena, Josefa
2015-01-01
The Yale-Brown Obsessive-Compulsive Scale for children and adolescents (CY-BOCS) is a frequently applied test to assess obsessive-compulsive symptoms. We conducted a reliability generalization meta-analysis on the CY-BOCS to estimate the average reliability, search for reliability moderators, and propose a predictive model that researchers and clinicians can use to estimate the expected reliability of CY-BOCS scores. A total of 47 studies reporting a reliability coefficient with the data at hand were included in the meta-analysis. The results showed good reliability and large variability associated with the standard deviation of total scores and sample size.
NASA Technical Reports Server (NTRS)
Wetzel, Peter J.; Chang, Jy-Tai
1988-01-01
Observations of surface heterogeneity of soil moisture from scales of meters to hundreds of kilometers are discussed, and a relationship between grid element size and soil moisture variability is presented. An evapotranspiration model is presented which accounts for the variability of soil moisture, standing surface water, and vegetation internal and stomatal resistance to moisture flow from the soil. The mean values and standard deviations of these parameters are required as input to the model. Tests of this model against field observations are reported, and extensive sensitivity tests are presented which explore the importance of including subgrid-scale variability in an evapotranspiration model.
Breakdown of the coherence effects and Fermi liquid behavior in YbAl3 nanoparticles
NASA Astrophysics Data System (ADS)
Echevarria-Bonet, C.; Rojas, D. P.; Espeso, J. I.; Rodríguez Fernández, J.; Rodríguez Fernández, L.; Bauer, E.; Burdin, S.; Magalhães, S. G.; Fernández Barquín, L.
2018-04-01
A change in the Kondo lattice behavior of bulk YbAl3 has been observed when the alloy is shaped into nanoparticles (≈12 nm). Measurements of the electrical resistivity show inhibited coherence effects and deviation from standard Fermi liquid behavior (T² dependence). These results are interpreted as being due to the disruption of the periodicity of the array of Kondo ions provoked by the size reduction process. Additionally, the ensemble of randomly placed nanoparticles triggers an extra source of electronic scattering at very low temperatures (≈15 K) due to quantum interference effects.
Xiao, Meng; Kong, Fanrong; Jin, Ping; Wang, Qinning; Xiao, Kelin; Jeoffreys, Neisha; James, Gregory
2012-01-01
PCR ribotyping is the most commonly used Clostridium difficile genotyping method, but its utility is limited by lack of standardization. In this study, we analyzed four published whole genomes and tested an international collection of 21 well-characterized C. difficile ribotype 027 isolates as the basis for comparison of two capillary gel electrophoresis (CGE)-based ribotyping methods. There were unexpected differences between the 16S-23S rRNA intergenic spacer region (ISR) allelic profiles of the four ribotype 027 genomes, but six bands were identified in all four and a seventh in three genomes. All seven bands and another, not identified in any of the whole genomes, were found in all 21 isolates. We compared sequencer-based CGE (SCGE) with three different primer pairs to the Qiagen QIAxcel CGE (QCGE) platform. Deviations from individual reference/consensus band sizes were smaller for SCGE (0 to 0.2 bp) than for QCGE (4.2 to 9.5 bp). Compared with QCGE, SCGE more readily distinguished bands of similar length (more discriminatory), detected bands of larger size and lower intensity (more sensitive), and assigned band sizes more accurately and reproducibly, making it more suitable for standardization. Specifically, QCGE failed to identify the largest ISR amplicon. Based on several criteria, we recommend the primer set 16S-USA/23S-USA for use in a proposed standard SCGE method. Similar differences between SCGE and QCGE were found on testing of 14 isolates of four other C. difficile ribotypes. Based on our results, ISR profiles based on accurate sequencer-based band lengths would be preferable to agarose gel-based banding patterns for the assignment of ribotypes. PMID:22692737
Warrick, J.A.; Rubin, D.M.; Ruggiero, P.; Harney, J.N.; Draut, A.E.; Buscombe, D.
2009-01-01
A new application of the autocorrelation grain size analysis technique for mixed to coarse sediment settings has been investigated. Photographs of sand- to boulder-sized sediment along the Elwha River delta beach were taken from approximately 1.2 m above the ground surface, and detailed grain size measurements were made from 32 of these sites for calibration and validation. Digital photographs were found to provide accurate estimates of the long and intermediate axes of the surface sediment (r² > 0.98), but poor estimates of the short axes (r² = 0.68), suggesting that these short axes were naturally oriented in the vertical dimension. The autocorrelation method was successfully applied, resulting in a total irreducible error of 14% over a range of mean grain sizes of 1 to 200 mm. Compared with reported edge- and object-detection results, the autocorrelation method presented here has lower error and can be applied to a much broader range of mean grain sizes without altering the physical set-up of the camera (~200-fold versus ~6-fold). The approach is considerably less sensitive to lighting conditions than object-detection methods, although autocorrelation estimates do improve when measures are taken to shade sediments from direct sunlight. The effects of wet and dry conditions are also evaluated and discussed. The technique provides an estimate of grain size sorting from the easily calculated autocorrelation standard error, which is correlated with the graphical standard deviation at an r² of 0.69. The technique is transferable to other sites when calibrated with linear corrections based on photo-based measurements, as shown by excellent grain-size analysis results (r² = 0.97, irreducible error = 16%) from samples from the mixed grain size beaches of Kachemak Bay, Alaska. Thus, a method has been developed to measure the mean grain size and sorting properties of coarse sediments. © 2009 John Wiley & Sons, Ltd.
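The autocorrelation technique rests on how quickly an image decorrelates with spatial lag: coarse grains stay correlated over larger pixel offsets than fine grains. A minimal sketch of the correlogram computation, for illustration only; the paper's calibration relating decay rate to millimetre grain size is not reproduced here:

```python
import numpy as np

def correlogram(image, max_lag):
    """Normalized autocorrelation of a 2-D grayscale image at integer
    horizontal pixel lags 1..max_lag. Coarser sediment textures
    decorrelate more slowly (values stay nearer 1 at small lags)."""
    img = (image - image.mean()) / image.std()
    r = []
    for lag in range(1, max_lag + 1):
        a, b = img[:, :-lag], img[:, lag:]      # image vs. shifted copy
        r.append(float((a * b).mean()))
    return r
```

In the published method, the shape of this decay curve is matched against curves from calibration photographs of known grain size.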
Limpert, Eckhard; Stahel, Werner A.
2011-01-01
Background The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by x̄ ± SD, or with the standard error of the mean, x̄ ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Methodology/Principal Findings Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are in general far more important than additive ones, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of a new sign, ×/ ("times-divide"), and corresponding notation. Analogous to x̄ ± SD, it connects the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s* in the form x̄* ×/ s*, which is advantageous and recommended. Conclusions/Significance The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life. PMID:21779325
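The x̄* ×/ s* summary is obtained by taking logs, summarizing, and exponentiating back. A minimal sketch (function name illustrative, not from the paper):

```python
import math
import statistics

def multiplicative_summary(data):
    """Geometric mean (mu*) and multiplicative standard deviation (s*).
    For log-normal data, about 68% of values fall in [mu*/s*, mu* x s*]
    and about 95% in [mu*/s***2, mu* x s***2]."""
    logs = [math.log(x) for x in data]
    mu_star = math.exp(statistics.mean(logs))   # geometric mean
    s_star = math.exp(statistics.stdev(logs))   # multiplicative SD, always >= 1
    return mu_star, s_star
```

Note that s* is dimensionless and multiplicative: the interval x̄* ×/ s* means "from x̄*/s* up to x̄*·s*", which is asymmetric on the original scale.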
NASA Technical Reports Server (NTRS)
Spera, David A.
2008-01-01
Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally-stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets including 585 test points. Mean deviation was found to be -0.4 percent and standard deviation was 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, mean deviation from 54 data points was -1.3 percent and standard deviation was 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.
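The error summary used above, the mean deviation and standard deviation of calculated-versus-measured values in percent, can be sketched as follows. The percent-deviation definition is an assumption, since the abstract does not spell out the exact formula:

```python
import statistics

def deviation_stats(calculated, measured):
    """Percent deviations of calculated vs. measured values:
    returns (mean deviation, standard deviation), both in percent."""
    devs = [100.0 * (c - m) / m for c, m in zip(calculated, measured)]
    return statistics.mean(devs), statistics.stdev(devs)
```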
Li, Hui
2009-03-01
To construct standardized growth data and curves based on weight, length/height, and head circumference for Chinese children under 7 years of age. Random cluster sampling was used. The fourth national growth survey of children under 7 years in nine cities of China (Beijing, Harbin, Xi'an, Shanghai, Nanjing, Wuhan, Fuzhou, Guangzhou and Kunming) was performed in 2005, and from this survey, data from 69 760 healthy urban boys and girls were used to set up the database for weight-for-age, height-for-age (length was measured for children under 3 years) and head circumference-for-age. Anthropometric data were collected using rigorous methods and standardized procedures across study sites. The LMS method, based on the Box-Cox normal transformation and cubic spline smoothing, was chosen for fitting the raw data according to the study design and data features, and standardized values of any percentile and standard deviation were obtained from the fitted L, M and S parameters. Length-for-age and height-for-age standards were constructed by fitting the same model, but the final curves reflected the 0.7 cm average difference between these two measurements. A set of systematic diagnostic tools was used to detect possible biases in estimated percentile or standard deviation curves, including the chi-square test, which was used to evaluate goodness of fit. The 3rd, 10th, 25th, 50th, 75th, 90th and 97th smoothed percentiles and the -3, -2, -1, 0, +1, +2 and +3 SD values and curves of weight-for-age, length/height-for-age and head circumference-for-age were derived for boys and girls aged 0-7 years. The Chinese growth charts were slightly higher than the WHO child growth standards. The newly established growth charts represent the growth level of healthy, well-nourished Chinese children. The sample was very large and national in scope, the data were of high quality, and the smoothing method is internationally accepted.
The new Chinese growth charts are recommended as the Chinese child growth standards in 21st century used in China.
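The LMS (Cole) method mentioned above defines each age-specific curve by three parameters: the Box-Cox power L, the median M, and the coefficient of variation S. A hedged sketch of the two standard conversions; the parameter values in the example are invented for illustration, not taken from the Chinese reference data:

```python
import math

def lms_value(L, M, S, z):
    """Measurement at z-score z for given age-specific LMS parameters
    (so z = -2..+2 traces the -2 SD..+2 SD curves)."""
    if L == 0:
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

def lms_z(L, M, S, x):
    """Z-score of a measurement x under LMS parameters."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)
```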
Vavalle, Nicholas A; Jelen, Benjamin C; Moreno, Daniel P; Stitzel, Joel D; Gayzik, F Scott
2013-01-01
Objective evaluation methods of time history signals are used to quantify how well simulated human body responses match experimental data. As the use of simulations grows in the field of biomechanics, there is a need to establish standard approaches for comparisons. There are 2 aims of this study. The first is to apply 3 objective evaluation methods found in the literature to a set of data from a human body finite element model. The second is to compare the results of each method, examining how they are correlated to each other and the relative strengths and weaknesses of the algorithms. In this study, the methods proposed by Sprague and Geers (magnitude and phase error, SGM and SGP), Rhule et al. (cumulative standard deviation, CSD), and Gehre et al. (CORrelation and Analysis, or CORA: size, phase, shape, and corridor ratings) were compared. A 40 kph frontal sled test presented by Shaw et al. was simulated using the Global Human Body Models Consortium midsized male full-body finite element model (v. 3.5). Mean and standard deviation experimental data (n = 5) from Shaw et al. were used as the benchmark. Simulated data were output from the model at the appropriate anatomical locations for kinematic comparison. Force data were output at the seat belts, seat pan, knee, and foot restraints. Objective comparisons from 53 time history data channels were compared to the experimental results. To compare the different methods, all objective comparison metrics were cross-plotted and linear regressions were calculated. The following ratings were found to be statistically significantly correlated (P < .01): SGM and CORA size, R² = 0.73; SGP and CORA shape, R² = 0.82; and CSD and CORA's corridor factor, R² = 0.59. Relative strengths of the correlated ratings were then investigated. For example, though correlated to CORA size, SGM carries a sign to indicate whether the simulated response is greater than or less than the benchmark signal.
A further analysis of the advantages and drawbacks of each method is discussed. The results demonstrate that a single metric is insufficient to provide a complete assessment of how well the simulated results match the experiments. The CORA method provided the most comprehensive evaluation of the signal. Regardless of the method selected, one primary recommendation of this work is that for any comparison, the results should be reported to provide separate assessments of a signal's match to experimental variance, magnitude, phase, and shape. Future work planned includes implementing any forthcoming International Organization for Standardization standards for objective evaluations. Supplemental materials are available for this article. Go to the publisher's online edition of Traffic Injury Prevention to view the supplemental file.
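The Sprague and Geers magnitude and phase errors compared above can be sketched for discretely sampled, equally spaced signals. This is the standard formulation; the study's actual implementation details (integration scheme, windowing) are not given in the abstract:

```python
import math

def sprague_geers(benchmark, simulated):
    """Sprague & Geers magnitude (SGM) and phase (SGP) errors for two
    equally sampled time histories. SGM is signed: positive means the
    simulated response is larger than the benchmark."""
    bb = sum(b * b for b in benchmark)
    mm = sum(m * m for m in simulated)
    bm = sum(b * m for b, m in zip(benchmark, simulated))
    sgm = math.sqrt(mm / bb) - 1.0
    # clamp guards against floating-point values just outside [-1, 1]
    sgp = math.acos(max(-1.0, min(1.0, bm / math.sqrt(bb * mm)))) / math.pi
    return sgm, sgp
```

A combined score is often reported as C = sqrt(SGM² + SGP²); zero for both components means a perfect match.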
Ultrafast image-based dynamic light scattering for nanoparticle sizing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Wu; Zhang, Jie; Liu, Lili
An ultrafast sizing method for nanoparticles, called UIDLS (Ultrafast Image-based Dynamic Light Scattering), is proposed. The method makes use of the intensity fluctuation of light scattered from nanoparticles in Brownian motion, similar to the conventional DLS method. The difference in the experimental system is that the light scattered by the nanoparticles is received by an image sensor instead of a photomultiplier tube. A novel data processing algorithm is proposed to directly obtain the correlation coefficient between two images separated by a certain time interval (from microseconds to milliseconds) by employing a two-dimensional image correlation algorithm. This coefficient has been proved to be a monotonic function of the particle diameter. Samples of standard latex particles (79/100/352/482/948 nm) were measured to validate the proposed method. Measurement accuracy higher than 90% was found, with standard deviations less than 3%. A nanosilver sample with a nominal size of 20 ± 2 nm and a polymethyl methacrylate emulsion of unknown size were also tested using the UIDLS method. The measured results were 23.2 ± 3.0 nm and 246.1 ± 6.3 nm, respectively, substantially consistent with transmission electron microscope results. Since the acquisition time for two successive images has been reduced to less than 1 ms and the data processing time to about 10 ms, the total measuring time can be dramatically reduced from hundreds of seconds to tens of milliseconds, which provides the potential for real-time and in situ nanoparticle sizing.
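The core of the UIDLS processing, the correlation coefficient between two frames at a given delay, can be sketched as a zero-mean normalized 2-D correlation. The mapping from this coefficient to particle diameter depends on the authors' calibration and is not reproduced here:

```python
import numpy as np

def frame_correlation(img1, img2):
    """Pearson correlation coefficient between two scattering images.
    It decays with inter-frame delay as particles diffuse, and faster
    for smaller (faster-diffusing) particles."""
    a = img1.astype(float).ravel()
    b = img2.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```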
Network Structure as a Modulator of Disturbance Impacts in Streams
NASA Astrophysics Data System (ADS)
Warner, S.; Tullos, D. D.
2017-12-01
This study examines how river network structure affects the propagation of geomorphic and anthropogenic disturbances through streams. Geomorphic processes such as debris flows can alter channel morphology and modify habitat for aquatic biota. Anthropogenic disturbances such as road construction can interact with the geomorphology and hydrology of forested watersheds to change sediment and water inputs to streams. It was hypothesized that the network structure of streams within forested watersheds would influence the location and magnitude of the impacts of debris flows and road construction on sediment size and channel width. Longitudinal surveys were conducted every 50 meters for 11 kilometers of third-to-fifth order streams in the H.J. Andrews Experimental Forest in the Western Cascade Range of Oregon. Particle counts and channel geometry measurements were collected to characterize the geomorphic impacts of road crossings and debris flows as disturbances. Sediment size distributions and width measurements were plotted against the distance of survey locations through the network to identify variations in longitudinal trends of channel characteristics. Thresholds for the background variation in sediment size and channel width, based on the standard deviations of sample points, were developed for sampled stream segments characterized by location as well as geomorphic and land use history. Survey locations were classified as "disturbed" when they deviated beyond the reference thresholds in expected sediment sizes and channel widths, as well as flow-connected proximity to debris flows and road crossings. River network structure was quantified by drainage density and centrality of nodes upstream of survey locations. Drainage density and node centrality were compared between survey locations with similar channel characteristic classifications. 
Cluster analysis was used to assess the significance of survey location, proximity of survey location to debris flows and road crossings, drainage density and node centrality in predicting sediment size and channel width classifications for locations within the watershed. Results contribute to the understanding of susceptibility and responses of streams supporting critical habitat for aquatic species to debris flows and forest road disturbances.
Seay, Joseph F.; Gregorczyk, Karen N.; Hasselquist, Leif
2016-01-01
The influences of load carriage and inclination on spatiotemporal gait parameters were examined during treadmill and overground walking. Ten soldiers walked on a treadmill and overground with three load conditions (0 kg, 20 kg, 40 kg) during level, uphill (6% grade) and downhill (-6% grade) inclinations at a self-selected speed, which was held constant across conditions. Mean values and standard deviations for double support percentage, stride length and step rate were compared across conditions. Double support percentage increased with load and with the inclination change from uphill to level walking, with a 0.4% of stance greater increase in the 20 kg condition than in the 0 kg condition. As inclination changed from uphill to downhill, step rate increased more overground (4.3 ± 3.5 steps/min) than during treadmill walking (1.7 ± 2.3 steps/min). For the 40 kg condition, the standard deviations were larger than for the 0 kg condition for both step rate and double support percentage. There was no change between modes for step rate standard deviation. For overground compared to treadmill walking, the standard deviations for stride length and double support percentage increased and decreased, respectively. Changes in load of up to 40 kg, inclination of 6% grade away from level (i.e., uphill or downhill) and mode (treadmill vs. overground) produced small, yet statistically significant changes in spatiotemporal parameters. Variability, as assessed by standard deviation, was not systematically lower during treadmill walking than during overground walking. Due to the small magnitude of the changes, treadmill walking appears to replicate the spatiotemporal parameters of overground walking. PMID:28149338
Hopper, John L
2015-11-15
How can the "strengths" of risk factors, in the sense of how well they discriminate cases from controls, be compared when they are measured on different scales such as continuous, binary, and integer? Given that risk estimates take into account other fitted and design-related factors (and that is how risk gradients are interpreted), so should their presentation. Therefore, for each risk factor X0, I propose using appropriate regression techniques to derive, from appropriate population data, the best-fitting relationship between the mean of X0 and all the other covariates fitted in the model or adjusted for by design (X1, X2, … , Xn). The odds per adjusted standard deviation (OPERA) presents the risk association for X0 in terms of the change in risk per s = standard deviation of X0 adjusted for X1, X2, … , Xn, rather than the unadjusted standard deviation of X0 itself. If the increased risk is relative risk (RR)-fold over A adjusted standard deviations, then OPERA = exp[ln(RR)/A] = RR^(1/A). This unifying approach is illustrated by considering breast cancer and published risk estimates. OPERA estimates are by definition independent and can be used to compare the predictive strengths of risk factors across diseases and populations. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
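The OPERA relation quoted above is a one-line transformation. A sketch using the abstract's own formula, OPERA = exp[ln(RR)/A]:

```python
import math

def opera(rr, a):
    """Odds per adjusted standard deviation: if risk is rr-fold over
    a adjusted standard deviations, OPERA = exp(ln(rr)/a) = rr**(1/a)."""
    return math.exp(math.log(rr) / a)
```

For example, a 4-fold relative risk spread over 2 adjusted standard deviations corresponds to an OPERA of 2 per adjusted SD.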
Maassen, Gerard H
2010-08-01
In this Journal, Lewis and colleagues introduced a new Reliable Change Index (RCI(WSD)), which incorporated the within-subject standard deviation (WSD) of a repeated measurement design as the standard error. In this note, two opposite errors in using WSD this way are demonstrated. First, being the standard error of measurement of only a single assessment makes WSD too small when practice effects are absent. Then, too many individuals will be designated reliably changed. Second, WSD can grow unlimitedly to the extent that differential practice effects occur. This can even make RCI(WSD) unable to detect any reliable change.
Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O
2004-07-30
The two-test two-population model, originally formulated by Hui and Walter for estimation of test accuracy and prevalence, assumes conditionally independent tests, constant accuracy across populations and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g. a child-care centre, a village in Africa, or a cattle herd) are sampled, or if the sample size is large relative to the population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and estimating prevalence based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously to obtain a 'joint' testing strategy that has either higher overall sensitivity or higher specificity than either test considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real and one simulated data sets, and compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.
Gender Differences in Numeracy in Indonesia: Evidence from a Longitudinal Dataset
ERIC Educational Resources Information Center
Suryadarma, Daniel
2015-01-01
This paper uses a rich longitudinal dataset to measure the evolution of the gender differences in numeracy among school-age children in Indonesia. Girls outperformed boys by 0.08 standard deviations when the sample was around 11 years old. Seven years later, the gap has widened to 0.19 standard deviations, equivalent to around 18 months of…
A Survey Data Response to the Teaching of Utility Curves and Risk Aversion
ERIC Educational Resources Information Center
Hobbs, Jeffrey; Sharma, Vivek
2011-01-01
In many finance and economics courses as well as in practice, the concept of risk aversion is reduced to the standard deviation of returns, whereby risk-averse investors prefer to minimize their portfolios' standard deviations. In reality, the concept of risk aversion is richer and more interesting than this, and can easily be conveyed through…
On the Linear Relation between the Mean and the Standard Deviation of a Response Time Distribution
ERIC Educational Resources Information Center
Wagenmakers, Eric-Jan; Brown, Scott
2007-01-01
Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different…
Yarazavi, Mina; Noroozian, Ebrahim
2018-02-13
A novel sol-gel coating on a stainless-steel fiber was developed for the first time for the headspace solid-phase microextraction and determination of α-bisabolol by gas chromatography with flame ionization detection. The parameters influencing the efficiency of the solid-phase microextraction process, such as extraction time and temperature, pH, and ionic strength, were optimized by the experimental design method. Under optimized conditions, the linear range was between 0.0027 and 100 μg/mL. The relative standard deviations determined at the 0.01 and 1.0 μg/mL concentration levels (n = 3) were as follows: intraday, 3.4 and 3.3%; interday, 5.0 and 4.3%; and fiber-to-fiber, 6.0 and 3.5%, respectively. The relative recovery values were 90.3 and 101.4% at the 0.01 and 1.0 μg/mL spiking levels, respectively. The proposed method was successfully applied to various real samples containing α-bisabolol. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
[Comparative quality measurements part 3: funnel plots].
Kottner, Jan; Lahmann, Nils
2014-02-01
Comparative quality measurements between organisations or institutions are common. Quality measures need to be standardised and risk-adjusted, and random error must also be taken adequately into account. Rankings that ignore precision lead to flawed interpretations and encourage "gaming". Applying confidence intervals is one way to take chance variation into account. Funnel plots are modified control charts based on Statistical Process Control (SPC) theory. The quality measure is plotted against its sample size, and warning and control limits 2 or 3 standard deviations from the centre line are added. With increasing group size the precision increases, so the control limits form a funnel. Data points within the control limits are considered to show common-cause variation; data points outside them indicate special-cause variation, without the distraction of spurious rankings. Funnel plots thus offer data-based information for evaluating institutional performance within quality management contexts.
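For a proportion-type quality indicator, the 2- and 3-standard-deviation funnel limits at group size n can be sketched from the binomial standard error around the overall centre line. This is a minimal sketch; exact-binomial limits and overdispersion adjustments, which practical funnel plots often need, are omitted:

```python
import math

def funnel_limits(p_bar, n, z=3.0):
    """Lower and upper z-sigma control limits for a proportion-type
    indicator at group size n, around centre line p_bar, clamped to
    the valid range [0, 1]. z=2 gives warning limits, z=3 control limits."""
    se = math.sqrt(p_bar * (1.0 - p_bar) / n)
    return max(0.0, p_bar - z * se), min(1.0, p_bar + z * se)
```

Evaluating these limits over a range of n traces the funnel; each institution is then plotted as a point (n, observed proportion).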
NASA Technical Reports Server (NTRS)
Cohen, Warren B.; Spies, Thomas A.
1992-01-01
Relationships between spectral and texture variables derived from SPOT HRV 10 m panchromatic and Landsat TM 30 m multispectral data and 16 forest stand structural attributes are evaluated to determine the utility of satellite data for analysis of hemlock forests west of the Cascade Mountains crest in Oregon and Washington, USA. Texture of the HRV data was found to be strongly related to many of the stand attributes evaluated, whereas TM texture was weakly related to all attributes. Data analysis based on regression models indicates that both TM and HRV imagery should yield equally accurate estimates of forest age class and stand structure. It is concluded that the satellite data are a valuable source for estimation of the standard deviation of tree sizes, mean size and density of trees in the upper canopy layers, a structural complexity index, and stand age.
Constraints on large extra dimensions from the MINOS Experiment
Adamson, P.
2016-12-16
We report new constraints on the size of large extra dimensions from data collected by the MINOS experiment between 2005 and 2012. Our analysis employs a model in which sterile neutrinos arise as Kaluza-Klein states in large extra dimensions and thus modify the neutrino oscillation probabilities due to mixing between active and sterile neutrino states. Using Fermilab's Neutrinos at the Main Injector beam exposure of 10.56 × 10²⁰ protons on target, we combine muon neutrino charged current and neutral current data sets from the Near and Far Detectors and observe no evidence for deviations from standard three-flavor neutrino oscillations. The ratios of reconstructed energy spectra in the two detectors constrain the size of large extra dimensions to be smaller than 0.45 μm at 90% C.L. in the limit of a vanishing lightest active neutrino mass. Finally, stronger limits are obtained for nonvanishing masses.
Constraints on large extra dimensions from the MINOS experiment
NASA Astrophysics Data System (ADS)
Adamson, P.; Anghel, I.; Aurisano, A.; Barr, G.; Bishai, M.; Blake, A.; Bock, G. J.; Bogert, D.; Cao, S. V.; Carroll, T. J.; Castromonte, C. M.; Chen, R.; Childress, S.; Coelho, J. A. B.; Corwin, L.; Cronin-Hennessy, D.; de Jong, J. K.; de Rijck, S.; Devan, A. V.; Devenish, N. E.; Diwan, M. V.; Escobar, C. O.; Evans, J. J.; Falk, E.; Feldman, G. J.; Flanagan, W.; Frohne, M. V.; Gabrielyan, M.; Gallagher, H. R.; Germani, S.; Gomes, R. A.; Goodman, M. C.; Gouffon, P.; Graf, N.; Gran, R.; Grzelak, K.; Habig, A.; Hahn, S. R.; Hartnell, J.; Hatcher, R.; Holin, A.; Huang, J.; Hylen, J.; Irwin, G. M.; Isvan, Z.; James, C.; Jensen, D.; Kafka, T.; Kasahara, S. M. S.; Koizumi, G.; Kordosky, M.; Kreymer, A.; Lang, K.; Ling, J.; Litchfield, P. J.; Lucas, P.; Mann, W. A.; Marshak, M. L.; Mayer, N.; McGivern, C.; Medeiros, M. M.; Mehdiyev, R.; Meier, J. R.; Messier, M. D.; Miller, W. H.; Mishra, S. R.; Moed Sher, S.; Moore, C. D.; Mualem, L.; Musser, J.; Naples, D.; Nelson, J. K.; Newman, H. B.; Nichol, R. J.; Nowak, J. A.; O'Connor, J.; Orchanian, M.; Pahlka, R. B.; Paley, J.; Patterson, R. B.; Pawloski, G.; Perch, A.; Pfützner, M. M.; Phan, D. D.; Phan-Budd, S.; Plunkett, R. K.; Poonthottathil, N.; Qiu, X.; Radovic, A.; Rebel, B.; Rosenfeld, C.; Rubin, H. A.; Sail, P.; Sanchez, M. C.; Schneps, J.; Schreckenberger, A.; Schreiner, P.; Sharma, R.; Sousa, A.; Tagg, N.; Talaga, R. L.; Thomas, J.; Thomson, M. A.; Tian, X.; Timmons, A.; Todd, J.; Tognini, S. C.; Toner, R.; Torretta, D.; Tzanakos, G.; Urheim, J.; Vahle, P.; Viren, B.; Weber, A.; Webb, R. C.; White, C.; Whitehead, L.; Whitehead, L. H.; Wojcicki, S. G.; Zwaska, R.; Minos Collaboration
2016-12-01
We report new constraints on the size of large extra dimensions from data collected by the MINOS experiment between 2005 and 2012. Our analysis employs a model in which sterile neutrinos arise as Kaluza-Klein states in large extra dimensions and thus modify the neutrino oscillation probabilities due to mixing between active and sterile neutrino states. Using Fermilab's Neutrinos at the Main Injector beam exposure of 10.56 × 10^20 protons on target, we combine muon neutrino charged current and neutral current data sets from the Near and Far Detectors and observe no evidence for deviations from standard three-flavor neutrino oscillations. The ratios of reconstructed energy spectra in the two detectors constrain the size of large extra dimensions to be smaller than 0.45 μm at 90% C.L. in the limit of a vanishing lightest active neutrino mass. Stronger limits are obtained for nonvanishing masses.
Mani, Ganesh Kadirampatti; Karunakaran, Kaviarasu
2016-01-01
Small fields (4×4 cm2 and below) are used in stereotactic and conformal treatments, where tissue heterogeneity is normally present. Because dose calculation in small fields and heterogeneous media is prone to larger discrepancies, the algorithms used by treatment planning systems (TPS) should be evaluated to achieve better treatment results. This report evaluates the accuracy of four model-based algorithms against measurement: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-Xio, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse. Measurements are made using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons at square field sizes ranging from 1 to 4 cm2. Each heterogeneity is introduced individually at two different depths relative to the depth of dose maximum (Dmax), one setup nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup is measured separately and compared with the TPS algorithm calculation for the same setup. The percentage normalized root mean squared deviation (%NRMSD), which represents the deviation of the whole CADD curve from the measurement, is calculated. For air and lung heterogeneity, at both 6 and 15 MV, all algorithms show maximum deviation at the 1×1 cm2 field size, and the deviation gradually decreases as field size increases, except for AAA. For aluminum and bone, all algorithms deviate less at 15 MV irrespective of setup. In all heterogeneity setups, the 1×1 cm2 field shows maximum deviation, except in the 6 MV bone setup. For all algorithms, irrespective of energy and field size, dose deviation is higher when a heterogeneity is nearer to Dmax than when the same heterogeneity is farther from it. All algorithms also show larger deviations in lower-density materials than in higher-density materials.
PACS numbers: 87.53.Bn, 87.53.kn, 87.56.bd, 87.55.Kd, 87.56.jf PMID:26894345
Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette
2018-03-01
In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is the multiple-biomarker trial, which aims to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine whether the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further, but small, improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Bidisperse and polydisperse suspension rheology at large solid fraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pednekar, Sidhant; Chun, Jaehun; Morris, Jeffrey F.
At the same solid volume fraction, bidisperse and polydisperse suspensions display lower viscosities, and weaker normal stress response, compared to monodisperse suspensions. The reduction of viscosity associated with size distribution can be explained by an increase of the maximum flowable, or jamming, solid fraction. In this work, concentrated or "dense" suspensions are simulated under strong shearing, where thermal motion and repulsive forces are negligible, but we allow for particle contact with a mild frictional interaction with an interparticle friction coefficient of 0.2. Aspects of bidisperse suspension rheology are first revisited to establish that the approach reproduces established trends; the study of bidisperse suspensions at size ratios of large to small particle radii of 2 to 4 shows that a minimum in the viscosity occurs for zeta slightly above 0.5, where zeta=phi_{large}/phi is the fraction of the total solid volume occupied by the large particles. The simple shear flows of polydisperse suspensions with truncated normal and log normal size distributions, and of bidisperse suspensions that are statistically equivalent to these polydisperse cases up to the third moment of the size distribution, are simulated and the rheologies are extracted. Prior work shows that such distributions with equivalent low-order moments have similar phi_{m}, and the rheological behaviors of the normal, log normal and bidisperse cases are shown to be in close agreement for a wide range of standard deviation in particle size, with standard correlations that are functionally dependent on phi/phi_{m} providing excellent agreement with the rheology found in simulation. The close agreement of both viscosity and normal stress response between bi- and polydisperse suspensions demonstrates the controlling influence of the maximum packing fraction in noncolloidal suspensions.
Microstructural investigations and the stress distribution according to particle size are also presented.
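The dependence of viscosity on phi/phi_m described above can be illustrated with the Maron-Pierce correlation, one standard correlation of that functional form; the phi_m values in the usage example below are hypothetical illustrations of monodisperse versus bidisperse packing, not values taken from the paper.

```python
def relative_viscosity(phi, phi_m):
    """Maron-Pierce correlation: eta_r = (1 - phi/phi_m)^-2.

    Widening the particle size distribution raises the jamming
    fraction phi_m, which lowers the viscosity at fixed solid
    fraction phi, as described in the abstract.
    """
    if phi >= phi_m:
        raise ValueError("suspension is jammed: phi >= phi_m")
    return (1.0 - phi / phi_m) ** -2

# Same solid fraction phi = 0.55; hypothetical jamming fractions:
print(relative_viscosity(0.55, 0.64))  # monodisperse-like packing
print(relative_viscosity(0.55, 0.70))  # bidisperse-like packing: lower viscosity
```

The design choice here is to treat phi_m as the single controlling parameter, consistent with the paper's finding that size-distribution effects collapse onto phi/phi_m.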
McKenna, D; Kadidlo, D; Sumstad, D; McCullough, J
2003-01-01
Errors and accidents, or deviations from standard operating procedures, other policy, or regulations, must be documented and reviewed, with corrective actions taken to assure quality performance in a cellular therapy laboratory. Though expectations and guidance for deviation management exist, a description of the framework for the development of such a program is lacking in the literature. Here we describe our deviation management program, which uses a Microsoft Access database and Microsoft Excel to analyze deviations and notable events, facilitating quality assurance (QA) functions and ongoing process improvement. Data are stored in a Microsoft Access database with an assignment to one of six deviation type categories. Deviation events are evaluated for potential impact on patient and product, and impact scores for each are determined using a 0 to 4 grading scale. An immediate investigation occurs, and corrective actions are taken to prevent future similar events from taking place. Additionally, deviation data are collectively analyzed on a quarterly basis using Microsoft Excel to identify recurring events or developing trends. Between January 1, 2001 and December 31, 2001, over 2500 products were processed at our laboratory. During this time period, 335 deviations and notable events occurred, affecting 385 products and/or patients. Deviations within the 'technical error' category were most common (37%). Thirteen percent of deviations had a patient and/or a product impact score > or = 2, a score indicating, at a minimum, potentially affected patient outcome or moderate effect upon product quality. Real-time analysis and quarterly review of deviations using our deviation management program allows for identification and correction of deviations. Monitoring of deviation trends allows for process improvement and overall successful functioning of the QA program in the cell therapy laboratory.
Our deviation management program could serve as a model for other laboratories in need of such a program.
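The record-and-review workflow described above can be sketched in a few lines. This is a minimal in-memory sketch, not the authors' Microsoft Access implementation; the category names and log entries are hypothetical.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Deviation:
    category: str        # one of the six deviation type categories
    patient_impact: int  # impact score on the 0-4 grading scale
    product_impact: int  # impact score on the 0-4 grading scale


def quarterly_summary(deviations, threshold=2):
    """Tally deviations by category and count events whose patient or
    product impact score reaches the review threshold (>= 2 in the paper)."""
    by_category = Counter(d.category for d in deviations)
    high_impact = sum(
        1 for d in deviations
        if max(d.patient_impact, d.product_impact) >= threshold
    )
    return by_category, high_impact


# Hypothetical log entries, not the laboratory's data:
log = [
    Deviation("technical error", 0, 2),
    Deviation("technical error", 1, 0),
    Deviation("documentation", 0, 0),
]
counts, high = quarterly_summary(log)
print(counts.most_common(1), high)  # → [('technical error', 2)] 1
```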
Ku-band radar threshold analysis
NASA Technical Reports Server (NTRS)
Weber, C. L.; Polydoros, A.
1979-01-01
The statistics of the CFAR threshold for the Ku-band radar were determined. Exact analytical results were developed for both the mean and standard deviation in the designated search mode. The mean value is compared to the results of a previously reported simulation. The analytical results are more optimistic than the simulation results, for which no explanation is offered. The normalized standard deviation is shown to be very sensitive to signal-to-noise ratio and very insensitive to the noise correlation present in the range gates of the designated search mode. The substantial variation in the CFAR threshold is dominant at large values of SNR, where the normalized standard deviation is greater than 0.3. Whether this significantly affects the resulting probability of detection is a matter that deserves additional attention.
Hart, John
2011-03-01
This study describes a model for statistically analyzing follow-up numeric-based chiropractic spinal assessments for an individual patient based on his or her own baseline. Ten mastoid fossa temperature differential readings (MFTD) obtained from a chiropractic patient were used in the study. The first eight readings served as baseline and were compared to post-adjustment readings. One of the two post-adjustment MFTD readings fell outside two standard deviations of the baseline mean and therefore theoretically represents improvement according to pattern analysis theory. This study showed how standard deviation analysis may be used to identify future outliers for an individual patient based on his or her own baseline data. Copyright © 2011 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
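The two-standard-deviation rule described in this abstract can be sketched in a few lines; the MFTD readings below are hypothetical illustrations, not the study's data.

```python
from statistics import mean, stdev


def flag_outliers(baseline, followups, k=2.0):
    """Flag follow-up readings that fall outside k standard deviations
    of the baseline mean (k = 2 per the pattern analysis described)."""
    m = mean(baseline)
    s = stdev(baseline)  # sample standard deviation of the baseline readings
    lo, hi = m - k * s, m + k * s
    return [(x, not (lo <= x <= hi)) for x in followups]


# Hypothetical MFTD readings, not the patient data from the study:
baseline = [0.6, 0.8, 0.7, 0.9, 0.7, 0.8, 0.6, 0.7]
post = [0.75, 1.4]
print(flag_outliers(baseline, post))  # → [(0.75, False), (1.4, True)]
```

A flagged follow-up reading would, under the pattern analysis theory cited, represent a change from the patient's own baseline.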
Sim, Julius; Lewis, Martyn
2012-03-01
To investigate methods to determine the size of a pilot study to inform a power calculation for a randomized controlled trial (RCT) using an interval/ratio outcome measure. Calculations based on confidence intervals (CIs) for the sample standard deviation (SD). Based on CIs for the sample SD, methods are demonstrated whereby (1) the observed SD can be adjusted to secure the desired level of statistical power in the main study with a specified level of confidence; (2) the sample for the main study, if calculated using the observed SD, can be adjusted, again to obtain the desired level of statistical power in the main study; (3) the power of the main study can be calculated for the situation in which the SD in the pilot study proves to be an underestimate of the true SD; and (4) an "efficient" pilot size can be determined to minimize the combined size of the pilot and main RCT. Trialists should calculate the appropriate size of a pilot study, just as they should the size of the main RCT, taking into account the twin needs to demonstrate efficiency in terms of recruitment and to produce precise estimates of treatment effect. Copyright © 2012 Elsevier Inc. All rights reserved.
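Method (1) above, adjusting the observed pilot SD upward using a confidence interval for the sample SD, can be sketched as follows. The Wilson-Hilferty chi-square approximation and the 80% confidence level are illustrative choices, not taken from the paper.

```python
from math import sqrt
from statistics import NormalDist


def chi2_quantile(p, k):
    """Wilson-Hilferty approximation to the chi-square p-quantile with k d.f."""
    z = NormalDist().inv_cdf(p)
    return k * (1 - 2 / (9 * k) + z * sqrt(2 / (9 * k))) ** 3


def inflated_sd(s, n, confidence=0.80):
    """Upper one-sided confidence limit for sigma, given a pilot SD s
    observed on n subjects.

    Using this inflated SD in the main-trial power calculation guards,
    with the stated confidence, against the pilot having underestimated
    the true SD.
    """
    df = n - 1
    return s * sqrt(df / chi2_quantile(1 - confidence, df))


print(inflated_sd(10.0, 25))  # pilot SD of 10 from n = 25 is inflated upward
```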
1980-03-14
[OCR-garbled program listing fragment. Recoverable content: the program takes as inputs, each entered at a numbered program line, the probability of element failure (Sigmar), the standard deviation of the relative error of the weights (Sigmap), the standard deviation of the phase error, the number of elements, and the weight structures in the x and y coordinates (Q).]
Takarabe, S; Yabuuchi, H; Morishita, J
2012-06-01
To investigate the usefulness of the standard deviation of pixel values in a whole mammary glands region and the percentage of a high-density mammary glands region relative to the whole mammary glands region as features for classifying mammograms into four categories based on the ACR BI-RADS breast composition. We used 36 digital mediolateral oblique view mammograms (18 patients) approved by our IRB. These images were classified into the four breast composition categories by an experienced breast radiologist, and the results of this classification were regarded as the gold standard. First, the whole mammary region in a breast was divided into two regions, a high-density mammary glands region and a low/iso-density mammary glands region, using a threshold value obtained from the pixel values corresponding to the pectoral muscle region. Then the percentage of the high-density mammary glands region relative to the whole mammary glands region was calculated. In addition, as a new method, the standard deviation of pixel values in the whole mammary glands region was calculated as an index of the intermingling of mammary glands and fat. Finally, all mammograms were classified using the combination of the high-density percentage and the standard deviation of each image. The agreement rate between our proposed method and the gold standard was 86% (31/36). This result signified that our method has the potential to classify mammograms. The combination of the standard deviation of pixel values in the whole mammary glands region and the percentage of the high-density mammary glands region relative to the whole mammary glands region was usable as features to classify mammograms based on the ACR BI-RADS breast composition. © 2012 American Association of Physicists in Medicine.
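The two features used above can be computed as sketched below, assuming the gland-region pixel values and the pectoral-muscle-derived threshold have already been extracted; the function and variable names are illustrative, not from the paper.

```python
def density_features(gland_pixels, threshold):
    """Compute the two classification features from mammary-gland pixel values:
    (1) the fraction of high-density pixels (value >= threshold), and
    (2) the population standard deviation of all gland pixels, an index of
    how intermingled glands and fat are."""
    n = len(gland_pixels)
    high_fraction = sum(1 for p in gland_pixels if p >= threshold) / n
    mean = sum(gland_pixels) / n
    variance = sum((p - mean) ** 2 for p in gland_pixels) / n
    return high_fraction, variance ** 0.5


# Toy pixel values and threshold (hypothetical, for illustration only):
print(density_features([1, 1, 3, 3], threshold=3))
```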
Manjunatha, B M; Al-Bulushi, S; Pratap, N
2014-04-01
Follicular wave emergence was synchronized by treating camels with GnRH when a dominant follicle (DF) was present in the ovaries. Animals were scanned twice a day from day 0 (the day of GnRH treatment) to day 10 to characterize emergence and deviation of follicles during development of the follicular wave. Follicle deviation in individual animals was determined by the graphical method. Single DFs were found in 16 camels, double DFs in 9, and triple DFs in two. The incidence of codominant (double and triple) DFs was 41%. The interval from GnRH treatment to wave emergence, the interval from wave emergence to deviation, and the diameter and growth rate of the F1 follicle before and after deviation did not differ between animals with single and double DFs. The size difference between the future DF(s) and the largest subordinate follicle (SF) was apparent from the day of wave emergence in single and double DFs. Overall, the intervals from GnRH treatment to wave emergence and from wave emergence to the beginning of follicle deviation were 70.6 ± 1.4 and 58.6 ± 2.7 h, respectively. The mean sizes of the DF and the largest SF at the beginning of deviation were 7.4 ± 0.2 and 6.3 ± 0.1 mm, respectively. In conclusion, the characteristics of follicle deviation are similar between animals that developed single or double DFs. © 2013 Blackwell Verlag GmbH.
An Evaluation of the Gap Sizes of 3-Unit Fixed Dental Prostheses Milled from Sintering Metal Blocks.
Jung, Jae-Kwan
2017-01-01
This study assessed the clinical acceptability of sintering metal-fabricated 3-unit fixed dental prostheses (FDPs) based on gap sizes. Ten specimens were prepared on research models by milling sintering metal blocks (SMB group) or by the lost-wax technique (LWC group). Gap sizes were assessed at 12 points per abutment (premolar and molar), 24 points per specimen (480 points in total across 20 specimens). The measured points were categorized as marginal, axial wall, and occlusal for assessment in a silicone replica. The silicone replica was cut through the mesiodistal and buccolingual center. The four sections were magnified at 160×, and the thickness of the light-body silicone was measured to determine the gap size; gap size means were then compared. For the premolar part, the mean (standard deviation) gap size was nonsignificantly (p = 0.139) smaller in the SMB group (68.6 ± 35.6 μm) than in the LWC group (69.6 ± 16.9 μm). The mean molar gap was nonsignificantly smaller (p = 0.852) in the LWC (73.9 ± 25.6 μm) than in the SMB (78.1 ± 37.4 μm) group. The gap sizes were similar between the two groups. Because the gap sizes were within the previously proposed clinically accepted limit, FDPs prepared by sintered metal block milling are clinically acceptable.
PMID:28246605
The effect of microstructure on the performance of Li-ion porous electrodes
NASA Astrophysics Data System (ADS)
Chung, Ding-Wen
By combining X-ray tomography data and computer-generated porous electrodes, the impact of microstructure on the energy and power density of lithium-ion batteries is analyzed. Specifically, for commercial LiMn2O4 electrodes, results indicate that a broad particle size distribution of active material delivers up to two times higher energy density than monodisperse-sized particles at low discharge rates, while a monodisperse particle size distribution delivers the highest energy and power density at high discharge rates. The limits of traditionally used microstructural properties such as tortuosity, reactive area density, particle surface roughness, and morphological anisotropy were tested against the degree of particle size polydispersity, thus enabling the identification of improved porous architectures. The effects of critical battery processing parameters, such as layer compaction and carbon black, were also rationalized in the context of electrode performance. While a monodisperse particle size distribution exhibits the lowest possible tortuosity and three times higher surface area per unit volume with respect to an electrode composed of a polydisperse particle size distribution, comparable performance can be achieved by polydisperse particle size distributions with degrees of polydispersity less than 0.2 of the particle size standard deviation. The use of non-spherical particles raises the tortuosity by as much as three hundred percent, which considerably lowers the power performance. However, favorably aligned particles can maximize power performance, particularly for high discharge rate applications.
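The degree of polydispersity used above, the standard deviation of particle size relative to its mean, can be computed directly; the particle radii in the example are hypothetical.

```python
from statistics import mean, pstdev


def polydispersity(radii):
    """Degree of polydispersity: standard deviation of particle size
    divided by the mean particle size. Values below ~0.2 gave
    monodisperse-like performance in the study described above."""
    return pstdev(radii) / mean(radii)


# Hypothetical particle radii (arbitrary units):
print(polydispersity([0.8, 1.0, 1.2]))  # below the 0.2 threshold
```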
40 CFR 63.7951 - What reports must I submit and when?
Code of Federal Regulations, 2010 CFR
2010-07-01
... the information in § 63.10(d)(5)(i). (5) If there were no deviations from any emissions limitations... that there were no deviations from the emissions limitations, work practice standards, or operation and...) For each deviation from an emissions limitation (including an operating limit) that occurs at an...
Xin-Ye, Ni; Ren, Lei; Yan, Hui; Yin, Fang-Fang
2016-12-01
This study aimed to assess the sensitivity of the Delta4 system to ordinary-field multileaf collimator misalignments, system misalignments, random misalignments, and misalignments caused by gravity acting on the multileaf collimator in stereotactic body radiation therapy. (1) Two field sizes were set: 2.00 cm (X) × 6.00 cm (Y) and 7.00 cm (X) × 6.00 cm (Y). The X1 and X2 leaves of the multileaf collimator were simultaneously opened. (2) Three cases of stereotactic body radiation therapy of spinal tumor were used. The dose to the planning target volume was 1800 cGy in 3 fractions. The 4 simulated misalignment types were: (1) the X1 and X2 leaves of the multileaf collimator simultaneously opened, (2) only the X1 side opened (unilateral leaf opening), (3) the X1 and X2 leaves randomly opened, and (4) a simulated gravity effect, in which the X1 and X2 leaves shifted in the same direction. The difference between the corresponding 3-dimensional dose distribution measured by Delta4 and the dose distribution of the original plan made in the treatment planning system was analyzed with γ index criteria of 3.0 mm/3.0%, 2.5 mm/2.5%, 2.0 mm/2.0%, 1.5 mm/1.5%, and 1.0 mm/1.0%. (1) In the 2.00 cm (X) × 6.00 cm (Y) field, the γ pass rate of the original plan was 100% with 2.5 mm/2.5% as the statistical standard. The pass rate decreased to 95.9% and 89.4% when the X1 and X2 leaves of the multileaf collimator were opened within 0.3 and 0.5 mm, respectively. In the 7.00 cm (X) × 6.00 cm (Y) field, with 1.5 mm/1.5% as the statistical standard, the pass rate of the original plan was 96.5%. After the X1 and X2 leaves were opened within 0.3 mm, the pass rate decreased below 95%; the pass rate remained above 90% within the 3 mm opening.
(2) For the spinal tumor cases, the change in the planning target volume V18 under the various modes calculated using the treatment planning system was within 1%. However, the maximum dose deviation of the spinal cord was high. With a gravity shift of -0.25 mm, the maximum spinal cord dose deviation changed minimally, increasing by 6.8% relative to the original. At the largest opening of 1.00 mm, the deviation increased by 47.7% relative to the original. Moreover, the pass rate of the original plan determined by Delta4 was 100% with 3 mm/3% as the statistical standard. The pass rate was 97.5% for the 0.25 mm opening and higher than 95% for the 0.5 mm opening A, the 0.25 mm opening A, the whole gravity series, and the 0.20 mm random opening. The pass rate was higher than 90% with 2.0 mm/2.0% as the statistical standard for the original plan and the 0.25 mm gravity shift. The difference in pass rates among the -0.25 mm gravity shift, the 0.25 mm opening A, the 0.20 mm random opening, and the original plan was not statistically significant, as calculated using SPSS 11.0 software with P > .05. Different Delta4 analysis standards were examined for different field sizes to improve the detection sensitivity to multileaf collimator position errors on the basis of a 90% pass rate. In stereotactic body radiation therapy of spinal tumor, the 2.0 mm/2.0% standard can reveal dosimetric differences caused by minor multileaf collimator position errors that the 3.0 mm/3.0% statistical standard misses. However, some misaligned positions that caused a high dose to the spinal cord could not be detected. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Baasch, Benjamin; Müller, Hendrik; von Dobeneck, Tilo; Oberle, Ferdinand K. J.
2017-05-01
The electric conductivity and magnetic susceptibility of sediments are fundamental parameters in environmental geophysics. Both can be derived from marine electromagnetic profiling, a novel, fast and non-invasive seafloor mapping technique. Here we present statistical evidence that electric conductivity and magnetic susceptibility can help to determine physical grain-size characteristics (size, sorting and mud content) of marine surficial sediments. Electromagnetic data acquired with the bottom-towed electromagnetic profiler MARUM NERIDIS III were analysed and compared with grain-size data from 33 samples across the NW Iberian continental shelf. A negative correlation between mean grain size and conductivity (R=-0.79), as well as between mean grain size and susceptibility (R=-0.78), was found. Simple and multiple linear regression analyses were carried out to predict mean grain size, mud content and the standard deviation of the grain-size distribution from conductivity and susceptibility. The comparison of both methods showed that multiple linear regression models predict the grain-size distribution characteristics better than the simple models. This exemplary study demonstrates that electromagnetic benthic profiling is capable of estimating mean grain size, sorting and mud content of marine surficial sediments at a very high significance level. Transfer functions can be calibrated using grain-size data from a few reference samples and extrapolated along shelf-wide survey lines. This study suggests that electromagnetic benthic profiling should play a larger role in coastal zone management, seafloor contamination and sediment provenance studies in continental shelf systems worldwide.
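A two-predictor multiple linear regression of the kind used above, predicting a grain-size characteristic from conductivity and susceptibility, reduces to solving the 3×3 normal equations. The sketch below uses only the standard library and synthetic data generated from a known linear model, not the NW Iberian samples.

```python
def fit_two_predictor_ols(x1, x2, y):
    """Ordinary least squares for y = b0 + b1*x1 + b2*x2.

    Builds the normal equations X'X beta = X'y for the design matrix
    [1, x1, x2] and solves the 3x3 system by Gaussian elimination
    with partial pivoting.
    """
    n = len(y)
    cols = [[1.0] * n, list(x1), list(x2)]
    A = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(3)]
         for i in range(3)]
    rhs = [sum(c * yi for c, yi in zip(col, y)) for col in cols]
    # Forward elimination.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            rhs[r] -= f * rhs[i]
    # Back substitution.
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        beta[i] = (rhs[i] - sum(A[i][j] * beta[j] for j in range(i + 1, 3))) / A[i][i]
    return beta


# Synthetic data from y = 2 + 3*x1 - 1*x2 (illustrative only):
conductivity = [1.0, 2.0, 3.0, 4.0]
susceptibility = [0.0, 1.0, 0.0, 1.0]
grain_size = [5.0, 7.0, 11.0, 13.0]
print(fit_two_predictor_ols(conductivity, susceptibility, grain_size))
```

With noise-free synthetic data the fit recovers the generating coefficients, which is a convenient self-check before applying the same transfer-function idea to real calibration samples.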
Zhang, You; Yin, Fang-Fang; Ren, Lei
2015-08-01
Lung cancer treatment is susceptible to treatment errors caused by interfractional anatomical and respirational variations of the patient. On-board treatment dose verification is especially critical for the lung stereotactic body radiation therapy due to its high fractional dose. This study investigates the feasibility of using cone-beam (CB)CT images estimated by a motion modeling and free-form deformation (MM-FD) technique for on-board dose verification. Both digital and physical phantom studies were performed. Various interfractional variations featuring patient motion pattern change, tumor size change, and tumor average position change were simulated from planning CT to on-board images. The doses calculated on the planning CT (planned doses), the on-board CBCT estimated by MM-FD (MM-FD doses), and the on-board CBCT reconstructed by the conventional Feldkamp-Davis-Kress (FDK) algorithm (FDK doses) were compared to the on-board dose calculated on the "gold-standard" on-board images (gold-standard doses). The absolute deviations of minimum dose (ΔDmin), maximum dose (ΔDmax), and mean dose (ΔDmean), and the absolute deviations of prescription dose coverage (ΔV100%) were evaluated for the planning target volume (PTV). In addition, 4D on-board treatment dose accumulations were performed using 4D-CBCT images estimated by MM-FD in the physical phantom study. The accumulated doses were compared to those measured using optically stimulated luminescence (OSL) detectors and radiochromic films. Compared with the planned doses and the FDK doses, the MM-FD doses matched much better with the gold-standard doses. For the digital phantom study, the average (± standard deviation) ΔDmin, ΔDmax, ΔDmean, and ΔV100% (values normalized by the prescription dose or the total PTV) between the planned and the gold-standard PTV doses were 32.9% (±28.6%), 3.0% (±2.9%), 3.8% (±4.0%), and 15.4% (±12.4%), respectively. 
The corresponding values of FDK PTV doses were 1.6% (±1.9%), 1.2% (±0.6%), 2.2% (±0.8%), and 17.4% (±15.3%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.3% (±0.2%), 0.9% (±0.6%), 0.6% (±0.4%), and 1.0% (±0.8%), respectively. Similarly, for the physical phantom study, the average ΔDmin, ΔDmax, ΔDmean, and ΔV100% of planned PTV doses were 38.1% (±30.8%), 3.5% (±5.1%), 3.0% (±2.6%), and 8.8% (±8.0%), respectively. The corresponding values of FDK PTV doses were 5.8% (±4.5%), 1.6% (±1.6%), 2.0% (±0.9%), and 9.3% (±10.5%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.4% (±0.8%), 0.8% (±1.0%), 0.5% (±0.4%), and 0.8% (±0.8%), respectively. For the 4D dose accumulation study, the average (± standard deviation) absolute dose deviation (normalized by local doses) between the accumulated doses and the OSL measured doses was 3.3% (±2.7%). The average gamma index (3%/3 mm) between the accumulated doses and the radiochromic film measured doses was 94.5% (±2.5%). MM-FD estimated 4D-CBCT enables accurate on-board dose calculation and accumulation for lung radiation therapy. It can potentially be valuable for treatment quality assessment and adaptive radiation therapy.
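The gamma index used for the film comparison above can be illustrated with a simplified 1-D global gamma analysis. A clinical 3%/3 mm analysis runs over 2-D or 3-D dose grids with interpolation; this sketch only shows the core search over combined distance and dose differences, with hypothetical profiles.

```python
def gamma_pass_rate(measured, calculated, spacing, dose_tol, dist_tol):
    """Simplified 1-D global gamma analysis.

    For each measured point, search all calculated points for the minimum
    combined (distance/dist_tol)^2 + (dose difference/dose criterion)^2;
    the point passes if that minimum is <= 1. The dose criterion is
    dose_tol (a fraction, e.g. 0.03) times the maximum measured dose
    (global normalization). spacing and dist_tol share the same units.
    """
    dmax = max(measured)
    passed = 0
    for i, dm in enumerate(measured):
        gamma_sq = min(
            ((i - j) * spacing / dist_tol) ** 2
            + ((dc - dm) / (dose_tol * dmax)) ** 2
            for j, dc in enumerate(calculated)
        )
        if gamma_sq <= 1.0:
            passed += 1
    return passed / len(measured)


# Identical profiles pass everywhere (hypothetical dose values):
print(gamma_pass_rate([1.0, 2.0, 3.0], [1.0, 2.0, 3.0], 1.0, 0.03, 3.0))  # → 1.0
```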
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, N; DiCostanzo, D; Fullenkamp, M
2015-06-15
Purpose: To determine appropriate couch tolerance values for modern radiotherapy linac R&V systems with indexed patient setup. Methods: Treatment table tolerance values have been the most difficult to lower, due to many factors including variations in patient positioning and differences in table tops between machines. We recently installed nine linacs with similar tables and started indexing every patient in our clinic. In this study we queried our R&V database and analyzed the deviation of couch position values from the values acquired at verification simulation for all patients treated with indexed positioning. Means and standard deviations of daily setup deviations were computed in the longitudinal, lateral and vertical directions for 343 patient plans. The mean, median and standard error of the standard deviations across the whole patient population, and for some disease sites, were computed to determine tolerance values. Results: The plot of our couch deviation values showed a Gaussian distribution, with some small deviations corresponding to setup uncertainties on non-imaging days and SRS/SRT/SBRT patients, as well as some large deviations which were spot checked and found to correspond to indexing errors that were overridden. Setting our tolerance values based on the median + 1 standard error resulted in tolerance values of 1 cm lateral and longitudinal, and 0.5 cm vertical, for all non-SRS/SRT/SBRT cases. Re-analyzing the data, we found that about 92% of the treated fractions would be within these tolerance values (ignoring the mis-indexed patients). We also analyzed data for disease-site-based subpopulations and found no difference in the tolerance values that needed to be used.
Conclusion: With the use of automation, auto-setup and other workflow efficiency tools being introduced into the radiotherapy workflow, it is essential to set table tolerances that allow safe treatments but flag setup errors that need to be reassessed before treatment.
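The median-plus-one-standard-error rule used above to set tolerances can be sketched as follows; the per-plan setup SDs in the example are hypothetical, not the clinic's data.

```python
from math import sqrt
from statistics import median, stdev


def couch_tolerance(per_plan_sds):
    """Tolerance = median of the per-plan setup standard deviations
    plus one standard error of those SDs across the population,
    following the median + 1 standard-error rule described above."""
    se = stdev(per_plan_sds) / sqrt(len(per_plan_sds))
    return median(per_plan_sds) + se


# Hypothetical per-plan longitudinal setup SDs in cm:
sds = [0.4, 0.5, 0.6, 0.8, 1.0, 0.7, 0.5]
print(round(couch_tolerance(sds), 2))  # → 0.68
```

In practice the resulting value would be rounded to a clinically convenient tolerance (e.g. the 1 cm and 0.5 cm values reported above).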
The pathway to RCTs: how many roads are there? Examining the homogeneity of RCT justification.
Chow, Jeffrey Tin Yu; Lam, Kevin; Naeem, Abdul; Akanda, Zarique Z; Si, Francie Fengqin; Hodge, William
2017-02-02
Randomized controlled trials (RCTs) form the foundational background of modern medical practice. They are considered the highest quality of evidence, and their results help inform decisions concerning drug development and use, preventive therapies, and screening programs. However, the inputs that justify an RCT to be conducted have not been studied. We reviewed the MEDLINE and EMBASE databases across six specialties (Ophthalmology, Otorhinolaryngology (ENT), General Surgery, Psychiatry, Obstetrics-Gynecology (OB-GYN), and Internal Medicine) and randomly chose 25 RCTs from each specialty except for Otorhinolaryngology (20 studies) and Internal Medicine (28 studies). For each RCT, we recorded information relating to the justification for conducting RCTs such as average study size cited, number of studies cited, and types of studies cited. The justification varied widely both within and between specialties. For Ophthalmology and OB-GYN, the average study sizes cited were around 1100 patients, whereas they were around 500 patients for Psychiatry and General Surgery. Between specialties, the average number of studies cited ranged from around 4.5 for ENT to around 10 for Ophthalmology, but the standard deviations were large, indicating that there was even more discrepancy within each specialty. When standardizing by the sample size of the RCT, some of the discrepancies between and within specialties can be explained, but not all. On average, Ophthalmology papers cited review articles the most (2.96 studies per RCT) compared to less than 1.5 studies per RCT for all other specialties. The justifications for RCTs vary widely both within and between specialties, and the justification for conducting RCTs is not standardized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaboud, M.; Aad, G.; Abbott, B.
Here, a search for the associated production of the Higgs boson with a top quark pair ($$t\\bar{t}$$H) is reported. The search is performed in multilepton final states using a data set corresponding to an integrated luminosity of 36.1 fb⁻¹ of proton-proton collision data recorded by the ATLAS experiment at a center-of-mass energy $$\\sqrt{s}$$ = 13 TeV at the Large Hadron Collider. Higgs boson decays to WW*, ττ, and ZZ* are targeted. Seven final states, categorized by the number and flavor of charged-lepton candidates, are examined for the presence of the Standard Model Higgs boson with a mass of 125 GeV and a pair of top quarks. An excess of events over the expected background from Standard Model processes is found with an observed significance of 4.1 standard deviations, compared to an expectation of 2.8 standard deviations. The best fit for the $$t\\bar{t}$$H production cross section is σ($$t\\bar{t}$$H) = $${790}_{-210}^{+230}$$ fb, in agreement with the Standard Model prediction of $${507}_{-50}^{+35}$$ fb. The combination of this result with other $$t\\bar{t}$$H searches from the ATLAS experiment, using the Higgs boson decay modes to $$b\\bar{b}$$, γγ, and ZZ* → 4ℓ, has an observed significance of 4.2 standard deviations, compared to an expectation of 3.8 standard deviations. This provides evidence for the $$t\\bar{t}$$H production mode.
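A significance quoted in standard deviations, as in this abstract, corresponds to a one-sided tail probability of the standard normal distribution. A minimal sketch of that standard conversion (the function name `sigma_to_pvalue` is our own; the 4.1σ and 2.8σ inputs are the values quoted above):

```python
import math

def sigma_to_pvalue(z):
    """One-sided tail probability of a standard normal at z standard deviations."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# The observed 4.1 sigma and expected 2.8 sigma from the abstract:
p_observed = sigma_to_pvalue(4.1)  # roughly 2e-5
p_expected = sigma_to_pvalue(2.8)  # roughly 3e-3
```

The conventional 5σ "discovery" threshold corresponds to a tail probability of about 3 × 10⁻⁷, which is why a 4.1σ excess is reported as "evidence" rather than observation.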
2018-04-09
NASA Astrophysics Data System (ADS)
Aaboud, M.; Aad, G.; Abbott, B.; Abdinov, O.; Abeloos, B.; Abidi, S. H.; Abouzeid, O. S.; Abraham, N. L.; Abramowicz, H.; Abreu, H.; Abulaiti, Y.; Acharya, B. S.; Adachi, S.; Adamczyk, L.; Adelman, J.; Adersberger, M.; Adye, T.; Affolder, A. A.; Afik, Y.; Agheorghiesei, C.; Aguilar-Saavedra, J. A.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akatsuka, S.; Åkesson, T. P. A.; Akilli, E.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albicocco, P.; Alconada Verzini, M. J.; Alderweireldt, S. C.; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexopoulos, T.; Alhroob, M.; Ali, B.; Aliev, M.; Alimonti, G.; Alison, J.; Alkire, S. P.; Allaire, C.; Allbrooke, B. M. M.; Allen, B. W.; Allport, P. P.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Alshehri, A. A.; Alstaty, M. I.; Alvarez Gonzalez, B.; Álvarez Piqueras, D.; Alviggi, M. G.; Amadio, B. T.; Amaral Coutinho, Y.; Ambroz, L.; Amelung, C.; Amidei, D.; Amor Dos Santos, S. P.; Amoroso, S.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, J. K.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Angelidakis, S.; Angelozzi, I.; Angerami, A.; Anisenkov, A. V.; Annovi, A.; Antel, C.; Antonelli, M.; Antonov, A.; Antrim, D. J.; Anulli, F.; Aoki, M.; Aperio Bella, L.; Arabidze, G.; Arai, Y.; Araque, J. P.; Araujo Ferraz, V.; Arce, A. T. H.; Ardell, R. E.; Arduh, F. A.; Arguin, J.-F.; Argyropoulos, S.; Armbruster, A. J.; Armitage, L. J.; Arnaez, O.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Artz, S.; Asai, S.; Asbah, N.; Ashkenazi, A.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkin, R. J.; Atkinson, M.; Atlay, N. B.; Augsten, K.; Avolio, G.; Avramidou, R.; Axen, B.; Ayoub, M. K.; Azuelos, G.; Baas, A. E.; Baca, M. J.; Bachacou, H.; Bachas, K.; Backes, M.; Bagnaia, P.; Bahmani, M.; Bahrasemani, H.; Baines, J. T.; Bajic, M.; Baker, O. K.; Bakker, P. J.; Bakshi Gupta, D.; Baldin, E. M.; Balek, P.; Balli, F.; Balunas, W. 
K.; Banas, E.; Bandyopadhyay, A.; Banerjee, Sw.; Bannoura, A. A. E.; Barak, L.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisits, M.-S.; Barkeloo, J. T.; Barklow, T.; Barlow, N.; Barnea, R.; Barnes, S. L.; Barnett, B. M.; Barnett, R. M.; Barnovska-Blenessy, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barranco Navarro, L.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Bartoldus, R.; Barton, A. E.; Bartos, P.; Basalaev, A.; Bassalat, A.; Bates, R. L.; Batista, S. J.; Batley, J. R.; Battaglia, M.; Bauce, M.; Bauer, F.; Bauer, K. T.; Bawa, H. S.; Beacham, J. B.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Bechtle, P.; Beck, H. P.; Beck, H. C.; Becker, K.; Becker, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bednyakov, V. A.; Bedognetti, M.; Bee, C. P.; Beermann, T. A.; Begalli, M.; Begel, M.; Behera, A.; Behr, J. K.; Bell, A. S.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Belyaev, N. L.; Benary, O.; Benchekroun, D.; Bender, M.; Benekos, N.; Benhammou, Y.; Benhar Noccioli, E.; Benitez, J.; Benjamin, D. P.; Benoit, M.; Bensinger, J. R.; Bentvelsen, S.; Beresford, L.; Beretta, M.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Bergsten, L. J.; Beringer, J.; Berlendis, S.; Bernard, N. R.; Bernardi, G.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertram, I. A.; Bertsche, C.; Besjes, G. J.; Bessidskaia Bylund, O.; Bessner, M.; Besson, N.; Bethani, A.; Bethke, S.; Betti, A.; Bevan, A. J.; Beyer, J.; Bianchi, R. M.; Biebel, O.; Biedermann, D.; Bielski, R.; Bierwagen, K.; Biesuz, N. V.; Biglietti, M.; Billoud, T. R. V.; Bindi, M.; Bingul, A.; Bini, C.; Biondi, S.; Bisanz, T.; Bittrich, C.; Bjergaard, D. M.; Black, J. E.; Black, K. M.; Blair, R. E.; Blazek, T.; Bloch, I.; Blocker, C.; Blue, A.; Blumenschein, U.; Blunier, Dr.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boerner, D.; Bogavac, D.; Bogdanchikov, A. 
G.; Bohm, C.; Boisvert, V.; Bokan, P.; Bold, T.; Boldyrev, A. S.; Bolz, A. E.; Bomben, M.; Bona, M.; Bonilla, J. S.; Boonekamp, M.; Borisov, A.; Borissov, G.; Bortfeldt, J.; Bortoletto, D.; Bortolotto, V.; Boscherini, D.; Bosman, M.; Bossio Sola, J. D.; Boudreau, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Boutle, S. K.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bozson, A. J.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Braren, F.; Bratzler, U.; Brau, B.; Brau, J. E.; Breaden Madden, W. D.; Brendlinger, K.; Brennan, A. J.; Brenner, L.; Brenner, R.; Bressler, S.; Briglin, D. L.; Bristow, T. M.; Britton, D.; Britzger, D.; Brock, I.; Brock, R.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brost, E.; Broughton, J. H.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruni, A.; Bruni, G.; Bruni, L. S.; Bruno, S.; Brunt, Bh; Bruschi, M.; Bruscino, N.; Bryant, P.; Bryngemark, L.; Buanes, T.; Buat, Q.; Buchholz, P.; Buckley, A. G.; Budagov, I. A.; Buehrer, F.; Bugge, M. K.; Bulekov, O.; Bullock, D.; Burch, T. J.; Burdin, S.; Burgard, C. D.; Burger, A. M.; Burghgrave, B.; Burka, K.; Burke, S.; Burmeister, I.; Burr, J. T. P.; Büscher, D.; Büscher, V.; Buschmann, E.; Bussey, P.; Butler, J. M.; Buttar, C. M.; Butterworth, J. M.; Butti, P.; Buttinger, W.; Buzatu, A.; Buzykaev, A. R.; Cabras, G.; Cabrera Urbán, S.; Caforio, D.; Cai, H.; Cairo, V. M. M.; Cakir, O.; Calace, N.; Calafiura, P.; Calandri, A.; Calderini, G.; Calfayan, P.; Callea, G.; Caloba, L. P.; Calvente Lopez, S.; Calvet, D.; Calvet, S.; Calvet, T. P.; Camacho Toro, R.; Camarda, S.; Camarri, P.; Cameron, D.; Caminal Armadans, R.; Camincher, C.; Campana, S.; Campanelli, M.; Camplani, A.; Campoverde, A.; Canale, V.; Cano Bret, M.; Cantero, J.; Cao, T.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capua, M.; Carbone, R. M.; Cardarelli, R.; Cardillo, F.; Carli, I.; Carli, T.; Carlino, G.; Carlson, B. T.; Carminati, L.; Carney, R. M. D.; Caron, S.; Carquin, E.; Carrá, S.; Carrillo-Montoya, G. 
D.; Casadei, D.; Casado, M. P.; Casha, A. F.; Casolino, M.; Casper, D. W.; Castelijn, R.; Castillo Gimenez, V.; Castro, N. F.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Caudron, J.; Cavaliere, V.; Cavallaro, E.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Celebi, E.; Ceradini, F.; Cerda Alberich, L.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chan, S. K.; Chan, W. S.; Chan, Y. L.; Chang, P.; Chapman, J. D.; Charlton, D. G.; Chau, C. C.; Chavez Barajas, C. A.; Che, S.; Chegwidden, A.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, C.; Chen, H.; Chen, J.; Chen, J.; Chen, S.; Chen, S.; Chen, X.; Chen, Y.; Cheng, H. C.; Cheng, H. J.; Cheplakov, A.; Cheremushkina, E.; Cherkaoui El Moursli, R.; Cheu, E.; Cheung, K.; Chevalier, L.; Chiarella, V.; Chiarelli, G.; Chiodini, G.; Chisholm, A. S.; Chitan, A.; Chiu, Y. H.; Chizhov, M. V.; Choi, K.; Chomont, A. R.; Chouridou, S.; Chow, Y. S.; Christodoulou, V.; Chu, M. C.; Chudoba, J.; Chuinard, A. J.; Chwastowski, J. J.; Chytka, L.; Cinca, D.; Cindro, V.; Cioarǎ, I. A.; Ciocio, A.; Cirotto, F.; Citron, Z. H.; Citterio, M.; Clark, A.; Clark, M. R.; Clark, P. J.; Clarke, R. N.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Colasurdo, L.; Cole, B.; Colijn, A. P.; Collot, J.; Conde Muiño, P.; Coniavitis, E.; Connell, S. H.; Connelly, I. A.; Constantinescu, S.; Conti, G.; Conventi, F.; Cooper-Sarkar, A. M.; Cormier, F.; Cormier, K. J. R.; Corradi, M.; Corrigan, E. E.; Corriveau, F.; Cortes-Gonzalez, A.; Costa, M. J.; Costanzo, D.; Cottin, G.; Cowan, G.; Cox, B. E.; Cranmer, K.; Crawley, S. J.; Creager, R. A.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Cristinziani, M.; Croft, V.; Crosetti, G.; Cueto, A.; Cuhadar Donszelmann, T.; Cukierman, A. R.; Cummings, J.; Curatolo, M.; Cúth, J.; Czekierda, S.; Czodrowski, P.; D'Amen, G.; D'Auria, S.; D'Eramo, L.; D'Onofrio, M.; da Cunha Sargedas de Sousa, M. 
J.; da Via, C.; Dabrowski, W.; Dado, T.; Dahbi, S.; Dai, T.; Dale, O.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Dandoy, J. R.; Daneri, M. F.; Dang, N. P.; Dann, N. S.; Danninger, M.; Dano Hoffmann, M.; Dao, V.; Darbo, G.; Darmora, S.; Dattagupta, A.; Daubney, T.; Davey, W.; David, C.; Davidek, T.; Davis, D. R.; Davison, P.; Dawe, E.; Dawson, I.; de, K.; de Asmundis, R.; de Benedetti, A.; de Castro, S.; de Cecco, S.; de Groot, N.; de Jong, P.; de la Torre, H.; de Lorenzi, F.; de Maria, A.; de Pedis, D.; de Salvo, A.; de Sanctis, U.; de Santo, A.; de Vasconcelos Corga, K.; de Vivie de Regie, J. B.; Debenedetti, C.; Dedovich, D. V.; Dehghanian, N.; Deigaard, I.; Del Gaudio, M.; Del Peso, J.; Delgove, D.; Deliot, F.; Delitzsch, C. M.; Dell'Acqua, A.; Dell'Asta, L.; Della Pietra, M.; Della Volpe, D.; Delmastro, M.; Delporte, C.; Delsart, P. A.; Demarco, D. A.; Demers, S.; Demichev, M.; Denisov, S. P.; Denysiuk, D.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deterre, C.; Dette, K.; Devesa, M. R.; Deviveiros, P. O.; Dewhurst, A.; Dhaliwal, S.; di Bello, F. A.; di Ciaccio, A.; di Ciaccio, L.; di Clemente, W. K.; di Donato, C.; di Girolamo, A.; di Micco, B.; di Nardo, R.; di Petrillo, K. F.; di Simone, A.; di Sipio, R.; di Valentino, D.; Diaconu, C.; Diamond, M.; Dias, F. A.; Diaz, M. A.; Dickinson, J.; Diehl, E. B.; Dietrich, J.; Díez Cornell, S.; Dimitrievska, A.; Dingfelder, J.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; Djuvsland, J. I.; Do Vale, M. A. B.; Dobre, M.; Dodsworth, D.; Doglioni, C.; Dolejsi, J.; Dolezal, Z.; Donadelli, M.; Donati, S.; Donini, J.; Dopke, J.; Doria, A.; Dova, M. T.; Doyle, A. T.; Drechsler, E.; Dreyer, E.; Dris, M.; Du, Y.; Duarte-Campderros, J.; Dubinin, F.; Dubreuil, A.; Duchovni, E.; Duckeck, G.; Ducourthial, A.; Ducu, O. A.; Duda, D.; Dudarev, A.; Dudder, A. Chr.; Duffield, E. M.; Duflot, L.; Dührssen, M.; Dulsen, C.; Dumancic, M.; Dumitriu, A. E.; Duncan, A. 
K.; Dunford, M.; Duperrin, A.; Duran Yildiz, H.; Düren, M.; Durglishvili, A.; Duschinger, D.; Dutta, B.; Duvnjak, D.; Dyndal, M.; Dziedzic, B. S.; Eckardt, C.; Ecker, K. M.; Edgar, R. C.; Eifert, T.; Eigen, G.; Einsweiler, K.; Ekelof, T.; El Kacimi, M.; El Kosseifi, R.; Ellajosyula, V.; Ellert, M.; Ellinghaus, F.; Elliot, A. A.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Ennis, J. S.; Epland, M. B.; Erdmann, J.; Ereditato, A.; Errede, S.; Escalier, M.; Escobar, C.; Esposito, B.; Estrada Pastor, O.; Etienvre, A. I.; Etzion, E.; Evans, H.; Ezhilov, A.; Ezzi, M.; Fabbri, F.; Fabbri, L.; Fabiani, V.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Faltova, J.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farina, E. M.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Faucci Giannelli, M.; Favareto, A.; Fawcett, W. J.; Fayard, L.; Fedin, O. L.; Fedorko, W.; Feickert, M.; Feigl, S.; Feligioni, L.; Feng, C.; Feng, E. J.; Feng, M.; Fenton, M. J.; Fenyuk, A. B.; Feremenga, L.; Fernandez Martinez, P.; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferreira de Lima, D. E.; Ferrer, A.; Ferrere, D.; Ferretti, C.; Fiedler, F.; Filipčič, A.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Fischer, C.; Fischer, J.; Fisher, W. C.; Flaschel, N.; Fleck, I.; Fleischmann, P.; Fletcher, R. R. M.; Flick, T.; Flierl, B. M.; Flores, L. M.; Flores Castillo, L. R.; Fomin, N.; Forcolin, G. T.; Formica, A.; Förster, F. A.; Forti, A.; Foster, A. G.; Fournier, D.; Fox, H.; Fracchia, S.; Francavilla, P.; Franchini, M.; Franchino, S.; Francis, D.; Franconi, L.; Franklin, M.; Frate, M.; Fraternali, M.; Freeborn, D.; Fressard-Batraneanu, S. M.; Freund, B.; Freund, W. S.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fusayasu, T.; Fuster, J.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gach, G. P.; Gadatsch, S.; Gadomski, S.; Gagliardi, G.; Gagnon, L. 
G.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Ganguly, S.; Gao, Y.; Gao, Y. S.; Garay Walls, F. M.; García, C.; García Navarro, J. E.; García Pascual, J. A.; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Gasnikova, K.; Gaudiello, A.; Gaudio, G.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Gee, C. N. P.; Geisen, J.; Geisen, M.; Geisler, M. P.; Gellerstedt, K.; Gemme, C.; Genest, M. H.; Geng, C.; Gentile, S.; Gentsos, C.; George, S.; Gerbaudo, D.; Geßner, G.; Ghasemi, S.; Ghneimat, M.; Giacobbe, B.; Giagu, S.; Giangiacomi, N.; Giannetti, P.; Gibson, S. M.; Gignac, M.; Gilchriese, M.; Gillberg, D.; Gilles, G.; Gingrich, D. M.; Giordani, M. P.; Giorgi, F. M.; Giraud, P. F.; Giromini, P.; Giugliarelli, G.; Giugni, D.; Giuli, F.; Giulini, M.; Gkaitatzis, S.; Gkialas, I.; Gkougkousis, E. L.; Gkountoumis, P.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glaysher, P. C. F.; Glazov, A.; Goblirsch-Kolb, M.; Godlewski, J.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gonçalo, R.; Goncalves Gama, R.; Gonella, G.; Gonella, L.; Gongadze, A.; Gonnella, F.; Gonski, J. L.; González de La Hoz, S.; Gonzalez-Sevilla, S.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorini, B.; Gorini, E.; Gorišek, A.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Gottardo, C. A.; Goudet, C. R.; Goujdami, D.; Goussiou, A. G.; Govender, N.; Goy, C.; Gozani, E.; Grabowska-Bold, I.; Gradin, P. O. J.; Graham, E. C.; Gramling, J.; Gramstad, E.; Grancagnolo, S.; Gratchev, V.; Gravila, P. M.; Gray, C.; Gray, H. M.; Greenwood, Z. D.; Grefe, C.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Grevtsov, K.; Griffiths, J.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grivaz, J.-F.; Groh, S.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Grout, Z. 
J.; Grummer, A.; Guan, L.; Guan, W.; Guenther, J.; Guerguichon, A.; Guescini, F.; Guest, D.; Gueta, O.; Gugel, R.; Gui, B.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Guo, J.; Guo, W.; Guo, Y.; Gupta, R.; Gurbuz, S.; Gustavino, G.; Gutelman, B. J.; Gutierrez, P.; Gutierrez Ortiz, N. G.; Gutschow, C.; Guyot, C.; Guzik, M. P.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. K.; Haddad, N.; Hadef, A.; Hageböck, S.; Hagihara, M.; Hakobyan, H.; Haleem, M.; Haley, J.; Halladjian, G.; Hallewell, G. D.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamilton, A.; Hamity, G. N.; Han, K.; Han, L.; Han, S.; Hanagaki, K.; Hance, M.; Handl, D. M.; Haney, B.; Hankache, R.; Hanke, P.; Hansen, E.; Hansen, J. B.; Hansen, J. D.; Hansen, M. C.; Hansen, P. H.; Hara, K.; Hard, A. S.; Harenberg, T.; Harkusha, S.; Harrison, P. F.; Hartmann, N. M.; Hasegawa, Y.; Hasib, A.; Hassani, S.; Haug, S.; Hauser, R.; Hauswald, L.; Havener, L. B.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hayden, D.; Hays, C. P.; Hays, J. M.; Hayward, H. S.; Haywood, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heer, S.; Heidegger, K. K.; Heim, S.; Heim, T.; Heinemann, B.; Heinrich, J. J.; Heinrich, L.; Heinz, C.; Hejbal, J.; Helary, L.; Held, A.; Hellman, S.; Helsens, C.; Henderson, R. C. W.; Heng, Y.; Henkelmann, S.; Henriques Correia, A. M.; Herbert, G. H.; Herde, H.; Herget, V.; Hernández Jiménez, Y.; Herr, H.; Herten, G.; Hertenberger, R.; Hervas, L.; Herwig, T. C.; Hesketh, G. G.; Hessey, N. P.; Hetherly, J. W.; Higashino, S.; Higón-Rodriguez, E.; Hildebrand, K.; Hill, E.; Hill, J. C.; Hiller, K. H.; Hillier, S. J.; Hils, M.; Hinchliffe, I.; Hirose, M.; Hirschbuehl, D.; Hiti, B.; Hladik, O.; Hlaluku, D. R.; Hoad, X.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hoecker, A.; Hoeferkamp, M. R.; Hoenig, F.; Hohn, D.; Hohov, D.; Holmes, T. R.; Holzbock, M.; Homann, M.; Honda, S.; Honda, T.; Hong, T. M.; Hooberman, B. H.; Hopkins, W. H.; Horii, Y.; Horton, A. J.; Horyn, L. 
A.; Hostachy, J.-Y.; Hostiuc, A.; Hou, S.; Hoummada, A.; Howarth, J.; Hoya, J.; Hrabovsky, M.; Hrdinka, J.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hrynevich, A.; Hsu, P. J.; Hsu, S.-C.; Hu, Q.; Hu, S.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Huhtinen, M.; Hunter, R. F. H.; Huo, P.; Hupe, A. M.; Huseynov, N.; Huston, J.; Huth, J.; Hyneman, R.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Idrissi, Z.; Iengo, P.; Igonkina, O.; Iguchi, R.; Iizawa, T.; Ikegami, Y.; Ikeno, M.; Iliadis, D.; Ilic, N.; Iltzsche, F.; Introzzi, G.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Isacson, M. F.; Ishijima, N.; Ishino, M.; Ishitsuka, M.; Issever, C.; Istin, S.; Ito, F.; Iturbe Ponce, J. M.; Iuppa, R.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jabbar, S.; Jackson, P.; Jacobs, R. M.; Jain, V.; Jakel, G.; Jakobi, K. B.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jamin, D. O.; Jana, D. K.; Jansky, R.; Janssen, J.; Janus, M.; Janus, P. A.; Jarlskog, G.; Javadov, N.; Javå¯Rek, T.; Javurkova, M.; Jeanneau, F.; Jeanty, L.; Jejelava, J.; Jelinskas, A.; Jenni, P.; Jeske, C.; Jézéquel, S.; Ji, H.; Jia, J.; Jiang, H.; Jiang, Y.; Jiang, Z.; Jiggins, S.; Jimenez Pena, J.; Jin, S.; Jinaru, A.; Jinnouchi, O.; Jivan, H.; Johansson, P.; Johns, K. A.; Johnson, C. A.; Johnson, W. J.; Jon-And, K.; Jones, R. W. L.; Jones, S. D.; Jones, S.; Jones, T. J.; Jongmanns, J.; Jorge, P. M.; Jovicevic, J.; Ju, X.; Junggeburth, J. J.; Juste Rozas, A.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kahn, S. J.; Kaji, T.; Kajomovitz, E.; Kalderon, C. W.; Kaluza, A.; Kama, S.; Kamenshchikov, A.; Kanjir, L.; Kano, Y.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kaplan, L. S.; Kar, D.; Karakostas, K.; Karastathis, N.; Kareem, M. J.; Karentzos, E.; Karpov, S. N.; Karpova, Z. M.; Kartvelishvili, V.; Karyukhin, A. N.; Kasahara, K.; Kashif, L.; Kass, R. 
D.; Kastanas, A.; Kataoka, Y.; Kato, C.; Katre, A.; Katzy, J.; Kawade, K.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kay, E. F.; Kazanin, V. F.; Keeler, R.; Kehoe, R.; Keller, J. S.; Kellermann, E.; Kempster, J. J.; Kendrick, J.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Keyes, R. A.; Khader, M.; Khalil-Zada, F.; Khanov, A.; Kharlamov, A. G.; Kharlamova, T.; Khodinov, A.; Khoo, T. J.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kido, S.; Kiehn, M.; Kilby, C. R.; Kim, H. Y.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kind, O. M.; King, B. T.; Kirchmeier, D.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kitali, V.; Kivernyk, O.; Kladiva, E.; Klapdor-Kleingrothaus, T.; Klein, M. H.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klingl, T.; Klioutchnikova, T.; Klitzner, F. F.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, A.; Kobayashi, D.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koffas, T.; Koffeman, E.; Köhler, N. M.; Koi, T.; Kolb, M.; Koletsou, I.; Kondo, T.; Kondrashova, N.; Köneke, K.; König, A. C.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Konya, B.; Kopeliansky, R.; Koperny, S.; Korcyl, K.; Kordas, K.; Korn, A.; Korolkov, I.; Korolkova, E. V.; Kortner, O.; Kortner, S.; Kosek, T.; Kostyukhin, V. V.; Kotwal, A.; Koulouris, A.; Kourkoumeli-Charalampidi, A.; Kourkoumelis, C.; Kourlitis, E.; Kouskoura, V.; Kowalewska, A. B.; Kowalewski, R.; Kowalski, T. Z.; Kozakai, C.; Kozanecki, W.; Kozhin, A. S.; Kramarenko, V. A.; Kramberger, G.; Krasnopevtsev, D.; Krasny, M. W.; Krasznahorkay, A.; Krauss, D.; Kremer, J. A.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, P.; Krizka, K.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Krumnack, N.; Kruse, M. C.; Kubota, T.; Kuday, S.; Kuechler, J. T.; Kuehn, S.; Kugel, A.; Kuger, F.; Kuhl, T.; Kukhtin, V.; Kukla, R.; Kulchitsky, Y.; Kuleshov, S.; Kulinich, Y. 
P.; Kuna, M.; Kunigo, T.; Kupco, A.; Kupfer, T.; Kuprash, O.; Kurashige, H.; Kurchaninov, L. L.; Kurochkin, Y. A.; Kurth, M. G.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; Kwan, T.; La Rosa, A.; La Rosa Navarro, J. L.; La Rotonda, L.; La Ruffa, F.; Lacasta, C.; Lacava, F.; Lacey, J.; Lack, D. P. J.; Lacker, H.; Lacour, D.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lai, S.; Lammers, S.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lanfermann, M. C.; Lang, V. S.; Lange, J. C.; Langenberg, R. J.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Lapertosa, A.; Laplace, S.; Laporte, J. F.; Lari, T.; Lasagni Manghi, F.; Lassnig, M.; Lau, T. S.; Laudrain, A.; Law, A. T.; Laycock, P.; Lazzaroni, M.; Le, B.; Le Dortz, O.; Le Guirriec, E.; Le Quilleuc, E. P.; Leblanc, M.; Lecompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, G. R.; Lee, S. C.; Lee, L.; Lefebvre, B.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehmann Miotto, G.; Leight, W. A.; Leisos, A.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Leney, K. J. C.; Lenz, T.; Lenzi, B.; Leone, R.; Leone, S.; Leonidopoulos, C.; Lerner, G.; Leroy, C.; Les, R.; Lesage, A. A. J.; Lester, C. G.; Levchenko, M.; Levêque, J.; Levin, D.; Levinson, L. J.; Levy, M.; Lewis, D.; Li, B.; Li, C.-Q.; Li, H.; Li, L.; Li, Q.; Li, Q.; Li, S.; Li, X.; Li, Y.; Liang, Z.; Liberti, B.; Liblong, A.; Lie, K.; Limosani, A.; Lin, C. Y.; Lin, K.; Lin, S. C.; Lin, T. H.; Linck, R. A.; Lindquist, B. E.; Lionti, A. E.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lister, A.; Litke, A. M.; Liu, B.; Liu, H.; Liu, H.; Liu, J. K. K.; Liu, J. B.; Liu, K.; Liu, M.; Liu, P.; Liu, Y. L.; Liu, Y.; Livan, M.; Lleres, A.; Llorente Merino, J.; Lloyd, S. L.; Lo, C. Y.; Lo Sterzo, F.; Lobodzinska, E. M.; Loch, P.; Loebinger, F. K.; Loesle, A.; Loew, K. M.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Long, B. A.; Long, J. D.; Long, R. E.; Longo, L.; Looper, K. A.; Lopez, J. 
A.; Lopez Paz, I.; Lopez Solis, A.; Lorenz, J.; Lorenzo Martinez, N.; Losada, M.; Lösel, P. J.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lu, H.; Lu, N.; Lu, Y. J.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Luedtke, C.; Luehring, F.; Lukas, W.; Luminari, L.; Lund-Jensen, B.; Lutz, M. S.; Luzi, P. M.; Lynn, D.; Lysak, R.; Lytken, E.; Lyu, F.; Lyubushkin, V.; Ma, H.; Ma, L. L.; Ma, Y.; Maccarrone, G.; Macchiolo, A.; MacDonald, C. M.; Maček, B.; Machado Miguens, J.; Madaffari, D.; Madar, R.; Mader, W. F.; Madsen, A.; Madysa, N.; Maeda, J.; Maeland, S.; Maeno, T.; Maevskiy, A. S.; Magerl, V.; Maidantchik, C.; Maier, T.; Maio, A.; Majersky, O.; Majewski, S.; Makida, Y.; Makovec, N.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyukov, S.; Mamuzic, J.; Mancini, G.; Mandić, I.; Maneira, J.; Manhaes de Andrade Filho, L.; Manjarres Ramos, J.; Mankinen, K. H.; Mann, A.; Manousos, A.; Mansoulie, B.; Mansour, J. D.; Mantifel, R.; Mantoani, M.; Manzoni, S.; Marceca, G.; March, L.; Marchese, L.; Marchiori, G.; Marcisovsky, M.; Marin Tobon, C. A.; Marjanovic, M.; Marley, D. E.; Marroquim, F.; Marshall, Z.; Martensson, M. U. F.; Marti-Garcia, S.; Martin, C. B.; Martin, T. A.; Martin, V. J.; Martin Dit Latour, B.; Martinez, M.; Martinez Outschoorn, V. I.; Martin-Haugh, S.; Martoiu, V. S.; Martyniuk, A. C.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Mason, L. H.; Massa, L.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Mättig, P.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Maznas, I.; Mazza, S. M.; Mc Fadden, N. C.; Mc Goldrick, G.; Mc Kee, S. P.; McCarn, A.; McCarthy, T. G.; McClymont, L. I.; McDonald, E. F.; McFayden, J. A.; McHedlidze, G.; McKay, M. A.; McMahon, S. J.; McNamara, P. C.; McNicol, C. J.; McPherson, R. A.; Meadows, Z. A.; Meehan, S.; Megy, T. J.; Mehlhase, S.; Mehta, A.; Meideck, T.; Meier, K.; Meirose, B.; Melini, D.; Mellado Garcia, B. 
R.; Mellenthin, J. D.; Melo, M.; Meloni, F.; Melzer, A.; Menary, S. B.; Meng, L.; Meng, X. T.; Mengarelli, A.; Menke, S.; Meoni, E.; Mergelmeyer, S.; Merlassino, C.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Meyer Zu Theenhausen, H.; Miano, F.; Middleton, R. P.; Miglioranzi, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Milesi, M.; Milic, A.; Millar, D. A.; Miller, D. W.; Milov, A.; Milstead, D. A.; Minaenko, A. A.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Minegishi, Y.; Ming, Y.; Mir, L. M.; Mirto, A.; Mistry, K. P.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Miucci, A.; Miyagawa, P. S.; Mizukami, A.; Mjörnmark, J. U.; Mkrtchyan, T.; Mlynarikova, M.; Moa, T.; Mochizuki, K.; Mogg, P.; Mohapatra, S.; Molander, S.; Moles-Valls, R.; Mondragon, M. C.; Mönig, K.; Monk, J.; Monnier, E.; Montalbano, A.; Montejo Berlingen, J.; Monticelli, F.; Monzani, S.; Moore, R. W.; Morange, N.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Morgenstern, M.; Morgenstern, S.; Mori, D.; Mori, T.; Morii, M.; Morinaga, M.; Morisbak, V.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Morvaj, L.; Moschovakos, P.; Mosidze, M.; Moss, H. J.; Moss, J.; Motohashi, K.; Mount, R.; Mountricha, E.; Moyse, E. J. W.; Muanza, S.; Mueller, F.; Mueller, J.; Mueller, R. S. P.; Muenstermann, D.; Mullen, P.; Mullier, G. A.; Munoz Sanchez, F. J.; Murin, P.; Murray, W. J.; Murrone, A.; Muškinja, M.; Mwewa, C.; Myagkov, A. G.; Myers, J.; Myska, M.; Nachman, B. P.; Nackenhorst, O.; Nagai, K.; Nagai, R.; Nagano, K.; Nagasaka, Y.; Nagata, K.; Nagel, M.; Nagy, E.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Naranjo Garcia, R. F.; Narayan, R.; Narrias Villar, D. I.; Naryshkin, I.; Naumann, T.; Navarro, G.; Nayyar, R.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Negri, A.; Negrini, M.; Nektarijevic, S.; Nellist, C.; Nelson, M. E.; Nemecek, S.; Nemethy, P.; Nessi, M.; Neubauer, M. 
S.; Neumann, M.; Newman, P. R.; Ng, T. Y.; Ng, Y. S.; Nguyen Manh, T.; Nickerson, R. B.; Nicolaidou, R.; Nielsen, J.; Nikiforou, N.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nishu, N.; Nisius, R.; Nitsche, I.; Nitta, T.; Nobe, T.; Noguchi, Y.; Nomachi, M.; Nomidis, I.; Nomura, M. A.; Nooney, T.; Nordberg, M.; Norjoharuddeen, N.; Novgorodova, O.; Novotny, R.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nurse, E.; Nuti, F.; O'Connor, K.; O'Neil, D. C.; O'Rourke, A. A.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Obermann, T.; Ocariz, J.; Ochi, A.; Ochoa, I.; Ochoa-Ricoux, J. P.; Oda, S.; Odaka, S.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohman, H.; Oide, H.; Ojeda, M. L.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Oleiro Seabra, L. F.; Olivares Pino, S. A.; Oliveira Damazio, D.; Oliver, J. L.; Olsson, M. J. R.; Olszewski, A.; Olszowska, J.; Onofre, A.; Onogi, K.; Onyisi, P. U. E.; Oppen, H.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orgill, E. C.; Orlando, N.; Orr, R. S.; Osculati, B.; Ospanov, R.; Otero Y Garzon, G.; Otono, H.; Ouchrif, M.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Owen, M.; Owen, R. E.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pacheco Pages, A.; Pacheco Rodriguez, L.; Padilla Aranda, C.; Pagan Griso, S.; Paganini, M.; Paige, F.; Palacino, G.; Palazzo, S.; Palestini, S.; Palka, M.; Pallin, D.; Panagiotopoulou, E. St.; Panagoulias, I.; Pandini, C. E.; Panduro Vazquez, J. G.; Pani, P.; Pantea, D.; Paolozzi, L.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Paredes Hernandez, D.; Parida, B.; Parker, A. J.; Parker, M. A.; Parker, K. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pascuzzi, V. R.; Pasner, J. M.; Pasqualucci, E.; Passaggio, S.; Pastore, Fr.; Pataraia, S.; Pater, J. R.; Pauly, T.; Pearson, B.; Pedraza Lopez, S.; Pedro, R.; Peleganchuk, S. V.; Penc, O.; Peng, C.; Peng, H.; Penwell, J.; Peralva, B. S.; Perego, M. M.; Perepelitsa, D. 
V.; Peri, F.; Perini, L.; Pernegger, H.; Perrella, S.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petroff, P.; Petrolo, E.; Petrov, M.; Petrucci, F.; Pettersson, N. E.; Peyaud, A.; Pezoa, R.; Pham, T.; Phillips, F. H.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Pickering, M. A.; Piegaia, R.; Pilcher, J. E.; Pilkington, A. D.; Pinamonti, M.; Pinfold, J. L.; Pitt, M.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Pluth, D.; Podberezko, P.; Poettgen, R.; Poggi, R.; Poggioli, L.; Pogrebnyak, I.; Pohl, D.; Pokharel, I.; Polesello, G.; Poley, A.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Ponomarenko, D.; Pontecorvo, L.; Popeneciu, G. A.; Portillo Quintero, D. M.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Potti, H.; Poulsen, T.; Poveda, J.; Pozo Astigarraga, M. E.; Pralavorio, P.; Prell, S.; Price, D.; Primavera, M.; Prince, S.; Proklova, N.; Prokofiev, K.; Prokoshin, F.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Puri, A.; Puzo, P.; Qian, J.; Qin, Y.; Quadt, A.; Queitsch-Maitland, M.; Qureshi, A.; Radeka, V.; Radhakrishnan, S. K.; Rados, P.; Ragusa, F.; Rahal, G.; Raine, J. A.; Rajagopalan, S.; Rashid, T.; Raspopov, S.; Ratti, M. G.; Rauch, D. M.; Rauscher, F.; Rave, S.; Ravinovich, I.; Rawling, J. H.; Raymond, M.; Read, A. L.; Readioff, N. P.; Reale, M.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reed, R. G.; Reeves, K.; Rehnisch, L.; Reichert, J.; Reiss, A.; Rembser, C.; Ren, H.; Rescigno, M.; Resconi, S.; Resseguie, E. D.; Rettie, S.; Reynolds, E.; Rezanova, O. L.; Reznicek, P.; Richter, R.; Richter, S.; Richter-Was, E.; Ricken, O.; Ridel, M.; Rieck, P.; Riegel, C. J.; Rifki, O.; Rijssenbeek, M.; Rimoldi, A.; Rimoldi, M.; Rinaldi, L.; Ripellino, G.; Ristić, B.; Ritsch, E.; Riu, I.; Rivera Vergara, J. C.; Rizatdinova, F.; Rizvi, E.; Rizzi, C.; Roberts, R. T.; Robertson, S. 
H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Rocco, E.; Roda, C.; Rodina, Y.; Rodriguez Bosca, S.; Rodriguez Perez, A.; Rodriguez Rodriguez, D.; Rodríguez Vera, A. M.; Roe, S.; Rogan, C. S.; Røhne, O.; Röhrig, R.; Roloff, J.; Romaniouk, A.; Romano, M.; Romano Saez, S. M.; Romero Adam, E.; Rompotis, N.; Ronzani, M.; Roos, L.; Rosati, S.; Rosbach, K.; Rose, P.; Rosien, N.-A.; Rossi, E.; Rossi, L. P.; Rossini, L.; Rosten, J. H. N.; Rosten, R.; Rotaru, M.; Rothberg, J.; Rousseau, D.; Roy, D.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Russell, H. L.; Rutherfoord, J. P.; Ruthmann, N.; Rüttinger, E. M.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryu, S.; Ryzhov, A.; Rzehorz, G. F.; Saavedra, A. F.; Sabato, G.; Sacerdoti, S.; Sadrozinski, H. F.-W.; Sadykov, R.; Safai Tehrani, F.; Saha, P.; Sahinsoy, M.; Saimpert, M.; Saito, M.; Saito, T.; Sakamoto, H.; Salamanna, G.; Salazar Loyola, J. E.; Salek, D.; Sales de Bruin, P. H.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sammel, D.; Sampsonidis, D.; Sampsonidou, D.; Sánchez, J.; Sanchez Pineda, A.; Sandaker, H.; Sander, C. O.; Sandhoff, M.; Sandoval, C.; Sankey, D. P. C.; Sannino, M.; Sano, Y.; Sansoni, A.; Santoni, C.; Santos, H.; Santoyo Castillo, I.; Sapronov, A.; Saraiva, J. G.; Sasaki, O.; Sato, K.; Sauvan, E.; Savard, P.; Savic, N.; Sawada, R.; Sawyer, C.; Sawyer, L.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Schaarschmidt, J.; Schacht, P.; Schachtner, B. M.; Schaefer, D.; Schaefer, L.; Schaeffer, J.; Schaepe, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Schegelsky, V. A.; Scheirich, D.; Schenck, F.; Schernau, M.; Schiavi, C.; Schier, S.; Schildgen, L. K.; Schillaci, Z. M.; Schillo, C.; Schioppa, E. J.; Schioppa, M.; Schleicher, K. E.; Schlenker, S.; Schmidt-Sommerfeld, K. 
R.; Schmieden, K.; Schmitt, C.; Schmitt, S.; Schmitz, S.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schopf, E.; Schott, M.; Schouwenberg, J. F. P.; Schovancova, J.; Schramm, S.; Schuh, N.; Schulte, A.; Schultz-Coulon, H.-C.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwartzman, A.; Schwarz, T. A.; Schweiger, H.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Sciandra, A.; Sciolla, G.; Scornajenghi, M.; Scuri, F.; Scutti, F.; Scyboz, L. M.; Searcy, J.; Seema, P.; Seidel, S. C.; Seiden, A.; Seixas, J. M.; Sekhniaidze, G.; Sekhon, K.; Sekula, S. J.; Semprini-Cesari, N.; Senkin, S.; Serfon, C.; Serin, L.; Serkin, L.; Sessa, M.; Severini, H.; Šfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shahinian, J. D.; Shaikh, N. W.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Sharma, A. S.; Shatalov, P. B.; Shaw, K.; Shaw, S. M.; Shcherbakova, A.; Shehu, C. Y.; Shen, Y.; Sherafati, N.; Sherman, A. D.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shipsey, I. P. J.; Shirabe, S.; Shiyakova, M.; Shlomi, J.; Shmeleva, A.; Shoaleh Saadi, D.; Shochet, M. J.; Shojaii, S.; Shope, D. R.; Shrestha, S.; Shulga, E.; Sicho, P.; Sickles, A. M.; Sidebo, P. E.; Sideras Haddad, E.; Sidiropoulou, O.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silva, M.; Silverstein, S. B.; Simic, L.; Simion, S.; Simioni, E.; Simmons, B.; Simon, M.; Sinervo, P.; Sinev, N. B.; Sioli, M.; Siragusa, G.; Siral, I.; Sivoklokov, S. Yu.; Sjölin, J.; Skinner, M. B.; Skubic, P.; Slater, M.; Slavicek, T.; Slawinska, M.; Sliwa, K.; Slovak, R.; Smakhtin, V.; Smart, B. H.; Smiesko, J.; Smirnov, N.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, J. W.; Smith, M. N. K.; Smith, R. W.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snyder, I. M.; Snyder, S.; Sobie, R.; Socher, F.; Soffa, A. M.; Soffer, A.; Søgaard, A.; Soh, D. A.; Sokhrannyi, G.; Solans Sanchez, C. A.; Solar, M.; Soldatov, E. Yu.; Soldevila, U.; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. 
V.; Solovyev, V.; Sommer, P.; Son, H.; Song, W.; Sopczak, A.; Sopkova, F.; Sosa, D.; Sotiropoulou, C. L.; Sottocornola, S.; Soualah, R.; Soukharev, A. M.; South, D.; Sowden, B. C.; Spagnolo, S.; Spalla, M.; Spangenberg, M.; Spanò, F.; Sperlich, D.; Spettel, F.; Spieker, T. M.; Spighi, R.; Spigo, G.; Spiller, L. A.; Spousta, M.; St. Denis, R. D.; Stabile, A.; Stamen, R.; Stamm, S.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stanitzki, M. M.; Stapf, B. S.; Stapnes, S.; Starchenko, E. A.; Stark, G. H.; Stark, J.; Stark, S. H.; Staroba, P.; Starovoitov, P.; Stärz, S.; Staszewski, R.; Stegler, M.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stevenson, T. J.; Stewart, G. A.; Stockton, M. C.; Stoicea, G.; Stolte, P.; Stonjek, S.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Strubig, A.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Suchek, S.; Sugaya, Y.; Suk, M.; Sulin, V. V.; Sultan, D. M. S.; Sultansoy, S.; Sumida, T.; Sun, S.; Sun, X.; Suruliz, K.; Suster, C. J. E.; Sutton, M. R.; Suzuki, S.; Svatos, M.; Swiatlowski, M.; Swift, S. P.; Sydorenko, A.; Sykora, I.; Sykora, T.; Ta, D.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Tahirovic, E.; Taiblum, N.; Takai, H.; Takashima, R.; Takasugi, E. H.; Takeda, K.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tanaka, J.; Tanaka, M.; Tanaka, R.; Tanioka, R.; Tannenwald, B. B.; Tapia Araya, S.; Tapprogge, S.; Tarek Abouelfadl Mohamed, A. T.; Tarem, S.; Tarna, G.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Tavares Delgado, A.; Tayalati, Y.; Taylor, A. C.; Taylor, A. J.; Taylor, G. N.; Taylor, P. T. E.; Taylor, W.; Teixeira-Dias, P.; Temple, D.; Ten Kate, H.; Teng, P. K.; Teoh, J. J.; Tepel, F.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Thais, S. J.; Theveneaux-Pelzer, T.; Thiele, F.; Thomas, J. P.; Thompson, P. 
D.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Tian, Y.; Ticse Torres, R. E.; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tipton, P.; Tisserant, S.; Todome, K.; Todorova-Nova, S.; Todt, S.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tolley, E.; Tomoto, M.; Tompkins, L.; Toms, K.; Tong, B.; Tornambe, P.; Torrence, E.; Torres, H.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Treado, C. J.; Trefzger, T.; Tresoldi, F.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trocmé, B.; Trofymov, A.; Troncon, C.; Trovatelli, M.; Truong, L.; Trzebinski, M.; Trzupek, A.; Tsang, K. W.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tu, Y.; Tudorache, A.; Tudorache, V.; Tulbure, T. T.; Tuna, A. N.; Turchikhin, S.; Turgeman, D.; Turk Cakir, I.; Turra, R.; Tuts, P. M.; Ucchielli, G.; Ueda, I.; Ughetto, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Uno, K.; Urban, J.; Urquijo, P.; Urrejola, P.; Usai, G.; Usui, J.; Vacavant, L.; Vacek, V.; Vachon, B.; Vadla, K. O. H.; Vaidya, A.; Valderanis, C.; Valdes Santurio, E.; Valente, M.; Valentinetti, S.; Valero, A.; Valéry, L.; Vallier, A.; Valls Ferrer, J. A.; van den Wollenberg, W.; van der Graaf, H.; van Gemmeren, P.; van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vaniachine, A.; Vankov, P.; Vari, R.; Varnes, E. W.; Varni, C.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vasquez, J. G.; Vasquez, G. A.; Vazeille, F.; Vazquez Furelos, D.; Vazquez Schroeder, T.; Veatch, J.; Vecchio, V.; Veloce, L. M.; Veloso, F.; Veneziano, S.; Ventura, A.; Venturi, M.; Venturi, N.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, A. T.; Vermeulen, J. C.; Vetterli, M. C.; Viaux Maira, N.; Viazlo, O.; Vichou, I.; Vickey, T.; Vickey Boeriu, O. E.; Viehhauser, G. H. 
A.; Viel, S.; Vigani, L.; Villa, M.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Vishwakarma, A.; Vittori, C.; Vivarelli, I.; Vlachos, S.; Vogel, M.; Vokac, P.; Volpi, G.; von Buddenbrock, S. E.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Vossebeld, J. H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vuillermet, R.; Vukotic, I.; Wagner, P.; Wagner, W.; Wagner-Kuhr, J.; Wahlberg, H.; Wahrmund, S.; Wakamiya, K.; Walder, J.; Walker, R.; Walkowiak, W.; Wallangen, V.; Wang, A. M.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, Q.; Wang, R.-J.; Wang, R.; Wang, S. M.; Wang, T.; Wang, W.; Wang, W.; Wang, Z.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Washbrook, A.; Watkins, P. M.; Watson, A. T.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, A. F.; Webb, S.; Weber, M. S.; Weber, S. M.; Weber, S. A.; Webster, J. S.; Weidberg, A. R.; Weinert, B.; Weingarten, J.; Weirich, M.; Weiser, C.; Wells, P. S.; Wenaus, T.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M. D.; Werner, P.; Wessels, M.; Weston, T. D.; Whalen, K.; Whallon, N. L.; Wharton, A. M.; White, A. S.; White, A.; White, M. J.; White, R.; Whiteson, D.; Whitmore, B. W.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wildauer, A.; Wilk, F.; Wilkens, H. G.; Williams, H. H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, J. A.; Wingerter-Seez, I.; Winkels, E.; Winklmeier, F.; Winston, O. J.; Winter, B. T.; Wittgen, M.; Wobisch, M.; Wolf, A.; Wolf, T. M. H.; Wolff, R.; Wolter, M. W.; Wolters, H.; Wong, V. W. S.; Woods, N. L.; Worm, S. D.; Wosiek, B. K.; Wozniak, K. W.; Wu, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wyatt, T. R.; Wynne, B. M.; Xella, S.; Xi, Z.; Xia, L.; Xu, D.; Xu, L.; Xu, T.; Xu, W.; Yabsley, B.; Yacoob, S.; Yajima, K.; Yallup, D. 
P.; Yamaguchi, D.; Yamaguchi, Y.; Yamamoto, A.; Yamanaka, T.; Yamane, F.; Yamatani, M.; Yamazaki, T.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, S.; Yang, Y.; Yang, Z.; Yao, W.-M.; Yap, Y. C.; Yasu, Y.; Yatsenko, E.; Yau Wong, K. H.; Ye, J.; Ye, S.; Yeletskikh, I.; Yigitbasi, E.; Yildirim, E.; Yorita, K.; Yoshihara, K.; Young, C.; Young, C. J. S.; Yu, J.; Yu, J.; Yuen, S. P. Y.; Yusuff, I.; Zabinski, B.; Zacharis, G.; Zaidan, R.; Zaitsev, A. M.; Zakharchuk, N.; Zalieckas, J.; Zambito, S.; Zanzi, D.; Zeitnitz, C.; Zemaityte, G.; Zeng, J. C.; Zeng, Q.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zhang, D.; Zhang, D.; Zhang, F.; Zhang, G.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, L.; Zhang, M.; Zhang, P.; Zhang, R.; Zhang, R.; Zhang, X.; Zhang, Y.; Zhang, Z.; Zhao, X.; Zhao, Y.; Zhao, Z.; Zhemchugov, A.; Zhou, B.; Zhou, C.; Zhou, L.; Zhou, M.; Zhou, M.; Zhou, N.; Zhou, Y.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zhulanov, V.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, S.; Zinonos, Z.; Zinser, M.; Ziolkowski, M.; Živković, L.; Zobernig, G.; Zoccoli, A.; Zou, R.; Zur Nedden, M.; Zwalinski, L.; Atlas Collaboration
2018-04-01
A search for the associated production of the Higgs boson with a top quark pair (tt̄H) is reported. The search is performed in multilepton final states using a data set corresponding to an integrated luminosity of 36.1 fb⁻¹ of proton-proton collision data recorded by the ATLAS experiment at a center-of-mass energy √s = 13 TeV at the Large Hadron Collider. Higgs boson decays to WW*, ττ, and ZZ* are targeted. Seven final states, categorized by the number and flavor of charged-lepton candidates, are examined for the presence of the Standard Model Higgs boson with a mass of 125 GeV and a pair of top quarks. An excess of events over the expected background from Standard Model processes is found with an observed significance of 4.1 standard deviations, compared to an expectation of 2.8 standard deviations. The best fit for the tt̄H production cross section is σ(tt̄H) = 790 +230/−210 fb, in agreement with the Standard Model prediction of 507 +35/−50 fb. The combination of this result with other tt̄H searches from the ATLAS experiment using the Higgs boson decay modes to bb̄, γγ, and ZZ* → 4ℓ has an observed significance of 4.2 standard deviations, compared to an expectation of 3.8 standard deviations. This provides evidence for the tt̄H production mode.
Neville, Helen J.; Stevens, Courtney; Pakulak, Eric; Bell, Theodore A.; Fanning, Jessica; Klein, Scott; Isbell, Elif
2013-01-01
Using information from research on the neuroplasticity of selective attention and on the central role of successful parenting in child development, we developed and rigorously assessed a family-based training program designed to improve brain systems for selective attention in preschool children. One hundred forty-one lower socioeconomic status preschoolers enrolled in a Head Start program were randomly assigned to the training program, Head Start alone, or an active control group. Electrophysiological measures of children’s brain functions supporting selective attention, standardized measures of cognition, and parent-reported child behaviors all favored children in the treatment program relative to both control groups. Positive changes were also observed in the parents themselves. Effect sizes ranged from one-quarter to half of a standard deviation. These results lend impetus to the further development and broader implementation of evidence-based education programs that target at-risk families. PMID:23818591
The study of trace metal absorption using stable isotopes and mass spectrometry
NASA Astrophysics Data System (ADS)
Fennessey, P. V.; Lloyd-Kindstrand, L.; Hambidge, K. M.
1991-12-01
The absorption and excretion of zinc stable isotopes have been followed in more than 120 human subjects. The isotope enrichment determinations were made using a standard VG 7070E HF mass spectrometer. A fast atom bombardment (FAB) gun was used to form the ions from a dry residue on a pure silver probe tip. Isotope ratio measurements were found to have a precision of better than 2% (relative standard deviation) and required a sample size of 1-5 μg. The average true absorption of zinc was found to be 73 ± 12% (2σ) when the metal was taken in a fasting state. This absorption figure was corrected for tracer that had been absorbed and secreted into the gastrointestinal (GI) tract over the time course of the study. The average time for a majority of the stable isotope tracer to pass through the GI tract was 4.7 ± 1.9 (2σ) days.
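The precision figure quoted above is a relative standard deviation (RSD). As a quick illustration, with hypothetical replicate ratio measurements (illustrative values, not data from the study), it can be computed as:

```python
import numpy as np

# Hypothetical replicate isotope-ratio measurements from one FAB-MS run
# (illustrative values, not data from the study).
ratios = np.array([0.1432, 0.1441, 0.1428, 0.1437, 0.1435])

mean_ratio = ratios.mean()
# Precision as relative standard deviation (RSD), in percent:
# sample standard deviation (ddof=1) divided by the mean.
rsd_percent = 100.0 * ratios.std(ddof=1) / mean_ratio
```

A run with `rsd_percent` below 2 would meet the precision reported in the abstract.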
Modeling of skin cancer dermatoscopy images
NASA Astrophysics Data System (ADS)
Iralieva, Malica B.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.
2018-04-01
A cancer identified early is more likely to respond effectively to treatment, and its treatment is less expensive as well. Dermatoscopy is one of the general diagnostic techniques for early detection of skin cancer; it allows in vivo evaluation of colors and microstructures of skin lesions. Digital phantoms with known properties are required during the development of new instruments so that a sample's features can be compared with data from the instrument. An algorithm for modeling skin cancer images is proposed in this paper. The steps of the algorithm are setting the shape, generating the texture, adding the texture, and setting the normal-skin background. A Gaussian represents the shape; texture generation based on a fractal noise algorithm is responsible for the spatial chromophore distributions, while the colormap applied to the values corresponds to the spectral properties. Finally, a normal skin image simulated by a mixed Monte Carlo method using a special online tool is added as a background. Varying the Asymmetry, Borders, Colors, and Diameter settings is shown to be fully matched to the ABCD clinical recognition algorithm. The asymmetry is specified by setting different standard deviation values of the Gaussian in different parts of the image. The noise amplitude is increased to set the irregular-borders score. The standard deviation is changed to determine the size of the lesion. Colors are set by changing the colormap. An algorithm for simulating different structural elements is required to match with other recognition algorithms.
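The shape and texture steps described above can be sketched as follows. All parameter values are assumptions for illustration, and the paper's fractal-noise texture is replaced by plain Gaussian noise for brevity:

```python
import numpy as np

size = 128
y, x = np.mgrid[0:size, 0:size]
cx = cy = size / 2

# Asymmetry: different Gaussian standard deviations on each side of the
# vertical axis (assumed values).
sigma_left, sigma_right = 12.0, 20.0
sigma_x = np.where(x < cx, sigma_left, sigma_right)
shape = np.exp(-(((x - cx) / sigma_x) ** 2 + ((y - cy) / 15.0) ** 2) / 2)

# Irregular borders: a larger noise amplitude roughens the boundary.
rng = np.random.default_rng(0)
noise_amplitude = 0.05
texture = shape + noise_amplitude * rng.standard_normal((size, size))
lesion = np.clip(texture, 0.0, 1.0)
```

Applying a colormap to `lesion` and compositing it over a simulated normal-skin image would complete the pipeline outlined in the abstract.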
Wootton, Landon; Kudchadker, Rajat; Lee, Andrew; Beddar, Sam
2014-01-01
We designed and constructed an in vivo dosimetry system using plastic scintillation detectors (PSDs) to monitor dose to the rectal wall in patients undergoing intensity-modulated radiation therapy for prostate cancer. Five patients were enrolled in an Institutional Review Board–approved protocol for twice weekly in vivo dose monitoring with our system, resulting in a total of 142 in vivo dose measurements. PSDs were attached to the surface of endorectal balloons used for prostate immobilization to place the PSDs in contact with the rectal wall. Absorbed dose was measured in real time and the total measured dose was compared with the dose calculated by the treatment planning system on the daily CT image dataset. The mean difference between measured and calculated doses for the entire patient population was −0.4% (standard deviation 2.8%). The mean difference between daily measured and calculated doses for each patient ranged from −3.3% to 3.3% (standard deviation ranged from 5.6% to 7.1% for 4 patients and was 14.0% for the last, for whom optimal positioning of the detector was difficult owing to the patient’s large size). Patients tolerated the detectors well and the treatment workflow was not compromised. Overall, PSDs performed well as in vivo dosimeters, providing excellent accuracy, real-time measurement, and reusability. PMID:24434775
Cathcart, Nicole; Kitaev, Vladimir
2011-09-27
Silver nanoprisms of a predominantly hexagonal shape have been prepared using a ligand combination of a strongly binding thiol, captopril, and charge-stabilizing citrate together with hydrogen peroxide as an oxidative etching agent and a strong base that triggered nanoprism formation. The role of the reagents and their interplay in the nanoprism synthesis is discussed in detail. The beneficial role of chloride ions to attain a high degree of reproducibility and monodispersity of the nanoprisms is elucidated. Control over the nanoprism width, thickness, and, consequently, plasmon resonance in the system has been demonstrated. One of the crucial factors in the nanoprism synthesis was the slow, controlled aggregation of thiolate-stabilized silver nanoclusters as the intermediates. The resulting superior monodispersity (better than ca. 10% standard deviation in lateral size and ca. 15% standard deviation in thickness (<1 nm variation)) and charge stabilization of the produced silver nanoprisms enabled the exploration of the rich diversity of the self-assembled morphologies in the system. Regular columnar assemblies of the self-assembled nanoprisms spanning 2-3 μm in length have been observed. Notably, the helicity of the columnar phases was evident, which can be attributed to the chirality of the strongly binding thiol ligand. Finally, the enhancement of Raman scattering has been observed after oxidative removal of thiolate ligands from the AgNPR surface. © 2011 American Chemical Society
Robust regression for large-scale neuroimaging studies.
Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand
2015-05-01
Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
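Robust regression of the kind discussed here is commonly implemented as iteratively reweighted least squares with Huber weights. The sketch below is a generic minimal version on synthetic data (the tuning constant c = 1.345 is a conventional default), not the paper's pipeline:

```python
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50):
    """Robust linear regression via iteratively reweighted least squares
    with Huber weights. c=1.345 is the conventional tuning constant, not
    a value taken from the paper."""
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        r = y - X @ beta
        # Robust scale estimate from the median absolute deviation (MAD).
        scale = max(np.median(np.abs(r - np.median(r))) / 0.6745, 1e-12)
        u = np.abs(r / scale)
        w = np.where(u <= c, 1.0, c / u)  # downweight large residuals
    return beta

# Synthetic cohort: a linear relationship plus gross outliers,
# e.g. artifact-laden subjects.
rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, n)
y[:20] += 30.0  # 10% contaminated observations

X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_rob = huber_irls(X, y)
```

On contaminated data like this, the ordinary least-squares intercept is pulled toward the outliers while the Huber estimate stays near the true values, which is the behavior the paper exploits at cohort scale.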
NASA Astrophysics Data System (ADS)
Moreno de Castro, Maria; Schartau, Markus; Wirtz, Kai
2017-04-01
Mesocosm experiments on phytoplankton dynamics under high CO2 concentrations mimic the response of marine primary producers to future ocean acidification. However, potential acidification effects can be masked by the high standard deviation typically found in the replicates of the same CO2 treatment level. In experiments with multiple unresolved factors and a sub-optimal number of replicates, post-processing statistical inference tools might fail to detect an effect that is present. We propose that in such cases, data-based model analyses might be suitable tools to unearth potential responses to the treatment and identify the uncertainties that could produce the observed variability. As test cases, we used data from two independent mesocosm experiments. Both experiments showed high standard deviations and, according to statistical inference tools, biomass appeared insensitive to changing CO2 conditions. Conversely, our simulations showed earlier and more intense phytoplankton blooms in modeled replicates at high CO2 concentrations and suggested that uncertainties in average cell size, phytoplankton biomass losses, and initial nutrient concentration potentially outweigh acidification effects by triggering strong variability during the bloom phase. We also estimated the thresholds below which uncertainties do not escalate to high variability. This information might help in designing future mesocosm experiments and interpreting controversial results on the effect of acidification or other pressures on ecosystem functions.
NASA Astrophysics Data System (ADS)
Wei, Ke; Fan, Xiaoguang; Zhan, Mei; Meng, Miao
2018-03-01
Billet optimization can greatly improve the forming quality of the transitional region in the isothermal local loading forming (ILLF) of large-scale Ti-alloy rib-web components. However, the final quality of the transitional region may be deteriorated by uncontrollable factors, such as the manufacturing tolerance of the preforming billet, fluctuation of the stroke length, and the friction factor. Thus, a dual-response surface method (RSM)-based robust optimization of the billet was proposed to address the uncontrollable factors in the transitional region of the ILLF. Given that die underfilling and the folding defect are the two key factors that influence the forming quality of the transitional region, minimizing the mean and standard deviation of the die underfilling rate and avoiding the folding defect were defined as the objective function and constraint condition in the robust optimization. Then, the cross array design was constructed, and a dual-RSM model was established for the mean and standard deviation of the die underfilling rate by considering the size parameters of the billet and the uncontrollable factors. Subsequently, an optimum solution was derived to achieve the robust optimization of the billet. A case study on robust optimization was conducted. Good results were attained in improving die filling and avoiding the folding defect, suggesting that the robust optimization of the billet in the transitional region of the ILLF was efficient and reliable.
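The robust-optimization step can be illustrated schematically: given surrogate models for the mean and standard deviation of the die underfilling rate plus a folding constraint (all hypothetical one-parameter stand-ins below, not the paper's fitted dual-RSM model), the optimum trades off average performance against sensitivity:

```python
import numpy as np

# Hypothetical stand-ins for the dual-RSM surrogates: mean and standard
# deviation of the die underfilling rate as functions of a single billet
# size parameter t, plus a folding constraint (all assumed forms).
def mean_underfill(t):
    return 0.5 * (t - 3.0) ** 2 + 1.0

def std_underfill(t):
    return 0.2 * (t - 4.0) ** 2 + 0.1

def folds(t):
    return t < 2.0  # folding predicted for small billets (assumed)

candidates = np.linspace(0.0, 6.0, 601)
feasible = candidates[~folds(candidates)]
# Robust objective: trade mean performance off against sensitivity to
# the uncontrollable factors, here with an assumed weight of 3.
objective = mean_underfill(feasible) + 3.0 * std_underfill(feasible)
t_opt = feasible[np.argmin(objective)]
```

The chosen design is not the one minimizing the mean underfilling alone; it is shifted toward the region where the standard deviation is also low, which is the essence of the dual-response approach.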
Evaluation of artifacts generated by zirconium implants in cone-beam computed tomography images.
Vasconcelos, Taruska Ventorini; Bechara, Boulos B; McMahan, Clyde Alex; Freitas, Deborah Queiroz; Noujeim, Marcel
2017-02-01
To evaluate zirconium implant artifact production in cone-beam computed tomography images obtained with different protocols. One zirconium implant was inserted in an edentulous mandible. Twenty scans were acquired with a ProMax 3D unit (Planmeca Oy, Helsinki, Finland), with acquisition settings ranging from 70 to 90 peak kilovoltage (kVp) and voxel sizes of 0.32 and 0.16 mm. A metal artifact reduction (MAR) tool was activated in half of the scans. An axial slice through the middle region of the implant was selected for each dataset. Gray values (mean ± standard deviation) were measured in two regions of interest, one close to and the other distant from the implant (control area). The contrast-to-noise ratio was also calculated. The standard deviation decreased with greater kVp and when the MAR tool was used. The contrast-to-noise ratio was significantly higher when the MAR tool was turned off, except for low resolution with kVp values above 80. Selection of the MAR tool and greater kVp resulted in an overall reduction of artifacts in images acquired with low resolution. Although zirconium implants do produce image artifacts in cone-beam computed tomography scans, the setting that best controlled artifact generation by zirconium implants was 90 kVp at low resolution and with the MAR tool turned on. Copyright © 2016 Elsevier Inc. All rights reserved.
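A contrast-to-noise ratio of the kind reported above can be computed from the two regions of interest. One common definition (difference of ROI means over the control-ROI noise) is sketched below with made-up gray values; the study does not state which exact formula was used:

```python
import numpy as np

# Made-up gray values for two regions of interest (ROIs): one near the
# implant (artifact-affected) and one distant homogeneous control area.
rng = np.random.default_rng(42)
roi_near = rng.normal(600.0, 80.0, 400)
roi_control = rng.normal(1000.0, 25.0, 400)

# One common CNR definition: absolute difference of ROI means divided
# by the noise (standard deviation) in the control ROI.
cnr = abs(roi_near.mean() - roi_control.mean()) / roi_control.std(ddof=1)
```

The higher standard deviation in the near-implant ROI is exactly the artifact signal the MAR tool and kVp settings are shown to reduce.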
NASA Astrophysics Data System (ADS)
Wootton, Landon; Kudchadker, Rajat; Lee, Andrew; Beddar, Sam
2014-02-01
We designed and constructed an in vivo dosimetry system using plastic scintillation detectors (PSDs) to monitor dose to the rectal wall in patients undergoing intensity-modulated radiation therapy for prostate cancer. Five patients were enrolled in an Institutional Review Board-approved protocol for twice weekly in vivo dose monitoring with our system, resulting in a total of 142 in vivo dose measurements. PSDs were attached to the surface of endorectal balloons used for prostate immobilization to place the PSDs in contact with the rectal wall. Absorbed dose was measured in real time and the total measured dose was compared with the dose calculated by the treatment planning system on the daily computed tomographic image dataset. The mean difference between measured and calculated doses for the entire patient population was -0.4% (standard deviation 2.8%). The mean difference between daily measured and calculated doses for each patient ranged from -3.3% to 3.3% (standard deviation ranged from 5.6% to 7.1% for four patients and was 14.0% for the last, for whom optimal positioning of the detector was difficult owing to the patient's large size). Patients tolerated the detectors well and the treatment workflow was not compromised. Overall, PSDs performed well as in vivo dosimeters, providing excellent accuracy, real-time measurement and reusability.
Wu, Qingqing; Xiang, Shengnan; Wang, Wenjun; Zhao, Jinyan; Xia, Jinhua; Zhen, Yueran; Liu, Bang
2018-05-01
Various detection methods have been developed to date for the identification of animal species. New techniques based on the PCR approach have raised the hope of developing better identification methods that can overcome the limitations of the existing methods. PCR-based methods have used mitochondrial DNA (mtDNA) as well as nuclear DNA sequences. In this study, by targeting nuclear DNA, multiplex PCR and real-time PCR methods were developed to assist with qualitative and quantitative analysis. The multiplex PCR was found to simultaneously and effectively distinguish ingredients from four species (fox, dog, mink, and rabbit) by the different sizes of the electrophoretic bands: 480, 317, 220, and 209 bp. The real-time fluorescent PCR amplification profiles and standard curves showed good quantitative measurement response and linearity, as indicated by good repeatability and a coefficient of determination R² > 0.99. The quantitative results for quaternary DNA mixtures including mink, fox, dog, and rabbit DNA are in line with expectations: the R.D. (relative deviation) varied between 1.98 and 12.23% and the R.S.D. (relative standard deviation) varied between 3.06 and 11.51%, both of which are well within the acceptance criterion of ≤ 25%. Combining the two methods is suitable for the rapid identification and accurate quantification of fox-, dog-, mink-, and rabbit-derived ingredients in animal products.
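The R.D. and R.S.D. acceptance checks can be reproduced in a few lines. The expected content and replicate estimates below are illustrative, not the study's data:

```python
import numpy as np

# Hypothetical quantification of mink DNA in a quaternary mixture:
# expected content 25%, with five replicate qPCR-derived estimates
# (illustrative values, not the study's data).
expected = 25.0
measured = np.array([26.1, 24.3, 25.8, 23.9, 26.5])

mean_measured = measured.mean()
# R.D.: relative deviation of the mean from the expected value, percent.
rd_percent = 100.0 * abs(mean_measured - expected) / expected
# R.S.D.: spread of the replicates relative to their mean, percent.
rsd_percent = 100.0 * measured.std(ddof=1) / mean_measured
```

Both figures would then be compared against the ≤ 25% acceptance criterion cited in the abstract.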
WASP (Write a Scientific Paper) using Excel -5: Quartiles and standard deviation.
Grech, Victor
2018-03-01
The almost inevitable descriptive statistics exercise that is undergone once data collection is complete, prior to inferential statistics, requires the acquisition of basic descriptors which may include standard deviation and quartiles. This paper provides pointers as to how to do this in Microsoft Excel™ and explains the relationship between the two. Copyright © 2018 Elsevier B.V. All rights reserved.
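The same descriptors can be obtained outside Excel; the sketch below uses NumPy on a toy dataset. NumPy's default linear percentile interpolation matches Excel's QUARTILE.INC, and ddof=1 matches STDEV.S:

```python
import numpy as np

# Toy dataset; STDEV.S and QUARTILE.INC are the Excel functions the
# paper discusses.
data = np.array([2, 4, 4, 4, 5, 5, 7, 9], dtype=float)

# Sample standard deviation (Excel STDEV.S): divide by n - 1 via ddof=1.
sd = data.std(ddof=1)

# Quartiles with linear interpolation, matching Excel QUARTILE.INC.
q1, q2, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1  # interquartile range, a robust spread measure
```

The interquartile range and the standard deviation describe spread in complementary ways, which is the relationship the paper explains.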
Validation Test Report for GDEM4
2010-08-19
... a depth climatology of temperature and salinity and their standard deviations called the Generalized Digital Environmental Model (GDEM). The present document describes the development and evaluation of GDEM4, the newest version of GDEM. As part of the evaluation of GDEM4, comparisons are made in this report to GDEM3 and to four other ocean climatologies.
40 CFR 91.508 - Cumulative Sum (CumSum) procedure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... family may be determined to be in noncompliance for purposes of § 91.510. H = the Action Limit; it is 5.0 × σ and is a function of the standard deviation, σ. σ = the sample standard deviation and is... Equation must be final deteriorated test results as defined in § 91.509(c). Ci = max[0 or (Ci-1 + Xi − (FEL...
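A CumSum control statistic of this form can be sketched as follows. The Action Limit H = 5.0 × σ follows the excerpt; the offset term F = 0.25 × σ is an assumption for illustration, since the excerpt is truncated before the full equation:

```python
# Minimal sketch of a CumSum control statistic of the kind described
# above (assumed parameters; not the regulation's exact procedure).
def cumsum_statistic(results, fel, sigma):
    h = 5.0 * sigma   # Action Limit H = 5.0 x sigma (from the excerpt)
    f = 0.25 * sigma  # offset term F (assumed value)
    c = 0.0
    flags = []
    for x in results:
        # Ci = max[0 or (Ci-1 + Xi - (FEL + F))]
        c = max(0.0, c + x - (fel + f))
        flags.append(c > h)
    return c, flags

# Test results consistently above the FEL drive the statistic past H.
c_final, flags = cumsum_statistic([12.0] * 10, fel=10.0, sigma=1.0)
```

Results at or below the FEL keep the statistic pinned at zero, while persistent exceedances accumulate until the Action Limit is crossed.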
NASA Astrophysics Data System (ADS)
Larsson, R.; Milz, M.; Rayer, P.; Saunders, R.; Bell, W.; Booton, A.; Buehler, S. A.; Eriksson, P.; John, V.
2015-10-01
We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high-altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. For the same channel, there is a 1.2 K average difference between the fast model and the sensor measurement, with a 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. For the same channel, there is a 1.3 K average difference between the fast model and the sensor measurement, with a 2.4 K standard deviation. We consider the relatively small model differences as a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (at below 0.2 K) and smaller standard deviations (at below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to using a full three-dimensional magnetic field profile, the standard deviation to the fast model is increased to almost 2 K due to viewing geometry dependencies causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to the limited altitude range of the numerical weather prediction profiles.
We recommended that numerical weather prediction software using the fast model takes the available fast Zeeman scheme into account for data assimilation of the affected sensor channels to better constrain the upper atmospheric temperatures.
NASA Astrophysics Data System (ADS)
Larsson, Richard; Milz, Mathias; Rayer, Peter; Saunders, Roger; Bell, William; Booton, Anna; Buehler, Stefan A.; Eriksson, Patrick; John, Viju O.
2016-03-01
We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high-altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. Concerning the same channel, there is 1.2 K on average between the fast model and the sensor measurement, with 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. Regarding the same channel, there is 1.3 K on average between the fast model and the sensor measurement, with 2.4 K standard deviation. We consider the relatively small model differences as a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (at below 0.2 K) and smaller standard deviations (at below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to using a full three-dimensional magnetic field profile, the standard deviation to the fast model is increased to almost 2 K due to viewing geometry dependencies, causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to limited altitude range of the numerical weather prediction profiles. 
We recommend that numerical weather prediction software using the fast model take the available fast Zeeman scheme into account for data assimilation of the affected sensor channels, to better constrain upper-atmospheric temperatures.
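Each per-channel comparison above reduces a pair of model outputs to two numbers, the average difference and the standard deviation of the differences over the profile set. A minimal sketch (variable names are illustrative):

```python
import statistics

def bias_and_spread(model_a, model_b):
    """Mean difference and standard deviation of paired differences,
    the two summary statistics quoted for each channel comparison."""
    diffs = [a - b for a, b in zip(model_a, model_b)]
    return statistics.mean(diffs), statistics.stdev(diffs)
```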
Spectral combination of spherical gravitational curvature boundary-value problems
NASA Astrophysics Data System (ADS)
Pitoňák, Martin; Eshagh, Mehdi; Šprlák, Michal; Tenzer, Robert; Novák, Pavel
2018-04-01
Four solutions of the spherical gravitational curvature boundary-value problems can be exploited for the determination of the Earth's gravitational potential. In this article we discuss the combination of simulated satellite gravitational curvatures, i.e., components of the third-order gravitational tensor, by merging these solutions using the spectral combination method. For this purpose, integral estimators of biased and unbiased types are derived. In numerical studies, we investigate the performance of the developed mathematical models for gravitational field modelling in the area of Central Europe based on simulated satellite measurements. Firstly, we verify the correctness of the integral estimators for the spectral downward continuation by a closed-loop test. Estimated errors of the combined solution are about eight orders of magnitude smaller than those from the individual solutions. Secondly, we perform a numerical experiment by considering Gaussian noise with a standard deviation of 6.5 × 10-17 m-1s-2 in the input data at the satellite altitude of 250 km above the mean Earth sphere. This value of standard deviation is equivalent to a signal-to-noise ratio of 10. Superior results with respect to the global geopotential model TIM-r5 are obtained by the spectral downward continuation of the vertical-vertical-vertical component with a standard deviation of 2.104 m2s-2, but the root mean square error is the largest and reaches 9.734 m2s-2. Using the spectral combination of all gravitational curvatures, the root mean square error is more than 400 times smaller, but the standard deviation reaches 17.234 m2s-2. The combination of more components decreases the root mean square error of the corresponding solutions, while the standard deviations of the combined solutions do not improve as compared to the solution from the vertical-vertical-vertical component. 
The presented method represents a weighted mean in the spectral domain that minimizes the root mean square error of the combined solution and improves the standard deviation of solutions based only on the least accurate components.
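The actual estimators are integral operators in the spectral domain; as a toy analogue of why a weighted combination can outperform each contributing solution, consider an inverse-variance weighted mean of independent estimates (a sketch, not the paper's estimator):

```python
def weighted_mean(estimates, sigmas):
    """Combine independent estimates with inverse-variance weights;
    the combined sigma is never worse than the best individual one."""
    weights = [1.0 / s**2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    sigma = (1.0 / total) ** 0.5
    return mean, sigma
```

Two equally noisy estimates combine to a sigma smaller by a factor of sqrt(2), which is the spirit of merging the four boundary-value solutions.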
NASA Astrophysics Data System (ADS)
Stier, P.; Schutgens, N. A. J.; Bellouin, N.; Bian, H.; Boucher, O.; Chin, M.; Ghan, S.; Huneeus, N.; Kinne, S.; Lin, G.; Ma, X.; Myhre, G.; Penner, J. E.; Randles, C. A.; Samset, B.; Schulz, M.; Takemura, T.; Yu, F.; Yu, H.; Zhou, C.
2013-03-01
Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in model components relevant for forcing calculations and the associated "host-model uncertainties" are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in twelve participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is -4.47 Wm-2 and the inter-model standard deviation is 0.55 Wm-2, corresponding to a relative standard deviation of 12%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.04 Wm-2, and the standard deviation increases to 1.01 Wm-2, corresponding to a significant relative standard deviation of 97%. However, the top-of-atmosphere forcing variability owing to absorption (subtracting the scattering case from the case with scattering and absorption) is low, with absolute (relative) standard deviations of 0.45 Wm-2 (8%) clear-sky and 0.62 Wm-2 (11%) all-sky. Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host model uncertainties could explain about 36% of the overall sulfate forcing diversity of 0.11 Wm-2 in the AeroCom Direct Radiative Effect experiment. 
Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks, or in areas with poorly constrained surface albedos, such as sea ice. Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that requires further attention.
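The relative standard deviations quoted in this intercomparison (e.g., 12% and 97%) follow the usual definition, sketched here:

```python
import statistics

def relative_std(values):
    """Relative standard deviation in percent: 100 * stdev / |mean|,
    the inter-model diversity measure used for the forcing comparisons."""
    return 100.0 * statistics.stdev(values) / abs(statistics.mean(values))
```

Note how a small mean (as in the partially absorbing case) inflates the relative standard deviation even when the absolute spread barely grows.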
Quantifying the heterogeneity of the tectonic stress field using borehole data
Schoenball, Martin; Davatzes, Nicholas C.
2017-01-01
The heterogeneity of the tectonic stress field is a fundamental property which influences earthquake dynamics and subsurface engineering. Self-similar scaling of stress heterogeneities is frequently assumed to explain characteristics of earthquakes such as the magnitude-frequency relation. However, observational evidence for such scaling of the stress field heterogeneity is scarce. We analyze the local stress orientations using image logs of two closely spaced boreholes in the Coso Geothermal Field with sub-vertical and deviated trajectories, respectively, each spanning about 2 km in depth. Both the mean and the standard deviation of stress orientation indicators (borehole breakouts, drilling-induced fractures and petal-centerline fractures) determined from each borehole agree to the limit of the resolution of our method, although measurements at specific depths may not. We find that the standard deviation in these boreholes strongly depends on the interval length analyzed, generally increasing up to a wellbore log length of about 600 m and remaining constant for longer intervals. We find the same behavior in global data from the World Stress Map. This suggests that the standard deviation of stress indicators characterizes the heterogeneity of the tectonic stress field rather than the quality of the stress measurement. A large standard deviation of a stress measurement might be an expression of strong crustal heterogeneity rather than of an unreliable stress determination. Robust characterization of stress heterogeneity requires logs that sample stress indicators along a representative sample volume of at least 1 km.
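The interval-length dependence reported here can be probed by recomputing the standard deviation over progressively longer log intervals; a simplified sketch (real azimuth data would additionally need circular statistics for angles, which this toy version omits):

```python
import statistics

def std_by_interval(indicator_values, interval_lengths):
    """Standard deviation of stress-orientation indicators over the
    first L samples, for each interval length L (L >= 2 assumed).
    A sketch only: angular data would need circular statistics."""
    return {L: statistics.stdev(indicator_values[:L])
            for L in interval_lengths}
```

Plotting the result against L would reproduce the reported pattern: growth up to a characteristic length, then a plateau.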
Venous leg ulcer healing with electric stimulation therapy: a pilot randomised controlled trial.
Miller, C; McGuiness, W; Wilson, S; Cooper, K; Swanson, T; Rooney, D; Piller, N; Woodward, M
2017-03-02
Compression therapy is a gold standard treatment to promote venous leg ulcer (VLU) healing. Concordance with compression therapy is, however, often sub-optimal. The aim of this study was to evaluate the effectiveness of electric stimulation therapy (EST) to facilitate healing of VLUs among people who do not use moderate-to-high levels of compression (>25 mmHg). A pilot multicentre, single-blinded randomised controlled trial was conducted. Participants were randomised (2:1) to the intervention group or a control group where EST or a sham device was used 4 times daily for 20 minutes per session. Participants were monitored fortnightly for eight weeks. The primary outcome measure was percentage of area (wound size) change. In the 23 patients recruited, an average reduction in wound size of 23.15% (standard deviation [SD]: 61.23) was observed for the control group compared with 32.67% (SD: 42.54) for the intervention. A moderate effect size favouring the intervention group was detected from univariate [F(1,18)=1.588, p=0.224, partial eta squared=0.081] and multivariate repeated measures [F(1,18)=2.053, p=0.169, partial eta squared=0.102] analyses. The pilot study was not powered to detect statistical significance; however, the difference in healing outcomes is encouraging. EST may be an effective adjunct treatment among patients who have experienced difficulty adhering to moderate-to-high levels of compression therapy.
2012-01-01
Background Data collection for economic evaluation alongside clinical trials is burdensome and cost-intensive. Limiting both the frequency of data collection and recall periods can solve the problem. As a consequence, gaps in survey periods arise and must be filled appropriately. The aims of our study are to assess the validity of incomplete cost data collection and define suitable resource categories. Methods In the randomised KORINNA study, cost data from 234 elderly patients were collected quarterly over a 1-year period. Different strategies for incomplete data collection were compared with complete data collection. The sample size calculation was modified in response to elasticity of variance. Results Resource categories suitable for incomplete data collection were physiotherapy, ambulatory clinic in hospital, medication, consultations, outpatient nursing service and paid household help. Cost estimation from complete and incomplete data collection showed no difference when omitting information from one quarter. When omitting information from two quarters, costs were underestimated by 3.9% to 4.6%. With respect to the observed increased standard deviation, a larger sample size would be required, increased by 3%. Nevertheless, more time was saved than extra time would be required for additional patients. Conclusion Cost data can be collected efficiently by reducing the frequency of data collection. This can be achieved by incomplete data collection for shortened periods or complete data collection with extended recall windows. In our analysis, cost estimates per year for ambulatory healthcare and non-healthcare services based on three data collections were as valid and accurate as four complete data collections. In contrast, data on hospitalisation, rehabilitation stays and care insurance benefits should be collected for the entire target period, using extended recall windows. 
When applying the method of incomplete data collection, sample size calculation has to be modified because of the increased standard deviation. This approach is suitable to enable economic evaluation with lower costs to both study participants and investigators. Trial registration The trial registration number is ISRCTN02893746 PMID:22978572
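The reported 3% sample-size increase follows from the fact that the required n scales with the variance of the outcome; a sketch of that recalculation (the function name and inputs are illustrative, not the study's power formula):

```python
import math

def inflated_sample_size(n_planned, sd_planned, sd_observed):
    """Required sample size scales with the variance (n proportional to
    sigma squared), so a larger observed standard deviation inflates
    the originally planned n."""
    return math.ceil(n_planned * (sd_observed / sd_planned) ** 2)
```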
Pardo, Deborah; Jenouvrier, Stéphanie; Weimerskirch, Henri; Barbraud, Christophe
2017-06-19
Climate changes include concurrent changes in environmental mean, variance and extremes, and it is challenging to understand their respective impact on wild populations, especially when contrasted age-dependent responses to climate occur. We assessed how changes in mean and standard deviation of sea surface temperature (SST), frequency and magnitude of warm SST extreme climatic events (ECE) influenced the stochastic population growth rate log(λs) and age structure of a black-browed albatross population. For changes in SST around historical levels observed since 1982, changes in standard deviation had a larger (threefold) and negative impact on log(λs) compared to changes in mean. By contrast, the mean had a positive impact on log(λs). The historical SST mean was lower than the optimal SST value for which log(λs) was maximized. Thus, a larger environmental mean increased the occurrence of SST close to this optimum that buffered the negative effect of ECE. This 'climate safety margin' (i.e. difference between optimal and historical climatic conditions) and the specific shape of the population growth rate response to climate for a species determine how ECE affect the population. For a wider range in SST, both the mean and standard deviation had a negative impact on log(λs), with changes in the mean having a greater effect than the standard deviation. Furthermore, around SST historical levels, increases in either mean or standard deviation of the SST distribution led to a younger population, with potentially important conservation implications for black-browed albatrosses. This article is part of the themed issue 'Behavioural, ecological and evolutionary responses to extreme climatic events'. © 2017 The Author(s).
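The qualitative mechanism, environmental variance dragging log(λs) below its value at the mean climate when the growth response to SST is peaked, can be reproduced with a toy simulation. All parameter values and the Gaussian response shape below are illustrative assumptions, not the albatross model:

```python
import math
import random

def stochastic_log_growth(mean_sst, sd_sst, opt_sst=1.0, width=1.0,
                          years=20000, seed=0):
    """Long-run stochastic growth rate log(lambda_s) under a Gaussian
    growth response to SST. Because the response is peaked, environmental
    variance pulls the average of log(lambda) below its value at the
    mean SST. Parameters are illustrative, not the study's."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(years):
        sst = rng.gauss(mean_sst, sd_sst)
        lam = math.exp(-((sst - opt_sst) / width) ** 2)  # peaked response
        total += math.log(lam)
    return total / years
```

With the mean held at the optimum, increasing the SST standard deviation alone lowers log(λs), mirroring the paper's threefold-larger variance effect near historical conditions.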
Is standard deviation of daily PM2.5 concentration associated with respiratory mortality?
Lin, Hualiang; Ma, Wenjun; Qiu, Hong; Vaughn, Michael G; Nelson, Erik J; Qian, Zhengmin; Tian, Linwei
2016-09-01
Studies on health effects of air pollution often use daily mean concentration to estimate exposure while ignoring daily variations. This study examined the health effects of daily variation of PM2.5. We calculated daily mean and standard deviations of PM2.5 in Hong Kong between 1998 and 2011. We used a generalized additive model to estimate the association between respiratory mortality and daily mean and variation of PM2.5, as well as their interaction. We controlled for potential confounders, including temporal trends, day of the week, meteorological factors, and gaseous air pollutants. Both daily mean and standard deviation of PM2.5 were significantly associated with mortalities from overall respiratory diseases and pneumonia. Each 10 μg/m(3) increment in daily mean concentration at lag 2 day was associated with a 0.61% (95% CI: 0.19%, 1.03%) increase in overall respiratory mortality and a 0.67% (95% CI: 0.14%, 1.21%) increase in pneumonia mortality. And a 10 μg/m(3) increase in standard deviation at lag 1 day corresponded to a 1.40% (95% CI: 0.35%, 2.46%) increase in overall respiratory mortality, and a 1.80% (95% CI: 0.46%, 3.16%) increase in pneumonia mortality. We also observed a positive but non-significant synergistic interaction between daily mean and variation on respiratory mortality and pneumonia mortality. However, we did not find any significant association with mortality from chronic obstructive pulmonary diseases. Our study suggests that, besides mean concentration, the standard deviation of PM2.5 might be one potential predictor of respiratory mortality in Hong Kong, and should be considered when assessing the respiratory effects of PM2.5. Copyright © 2016 Elsevier Ltd. All rights reserved.
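The excess-risk percentages above come from a log-linear model of daily mortality; converting a fitted coefficient to a percent increase per 10 μg/m(3) increment is sketched below (the coefficient values used are hypothetical, not the study's estimates):

```python
import math

def percent_increase(beta, delta=10.0):
    """Excess risk in percent for a `delta`-unit pollutant increment in
    a log-linear model: 100 * (exp(beta * delta) - 1)."""
    return 100.0 * math.expm1(beta * delta)
```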
Assessment issues in the testing of children at school entry.
Rock, Donald A; Stenner, A Jackson
2005-01-01
The authors introduce readers to the research documenting racial and ethnic gaps in school readiness. They describe the key tests, including the Peabody Picture Vocabulary Test (PPVT), the Early Childhood Longitudinal Study (ECLS), and several intelligence tests, and describe how they have been administered to several important national samples of children. Next, the authors review the different estimates of the gaps and discuss how to interpret these differences. In interpreting test results, researchers use the statistical term "standard deviation" to compare scores across the tests. On average, the tests find a gap of about 1 standard deviation. The ECLS-K estimate is the lowest, about half a standard deviation. The PPVT estimate is the highest, sometimes more than 1 standard deviation. When researchers adjust those gaps statistically to take into account different outside factors that might affect children's test scores, such as family income or home environment, the gap narrows but does not disappear. Why such different estimates of the gap? The authors consider explanations such as differences in the samples, racial or ethnic bias in the tests, and whether the tests reflect different aspects of school "readiness," and conclude that none is likely to explain the varying estimates. Another possible explanation is the Spearman Hypothesis-that all tests are imperfect measures of a general ability construct, g; the more highly a given test correlates with g, the larger the gap will be. But the Spearman Hypothesis, too, leaves questions to be investigated. A gap of 1 standard deviation may not seem large, but the authors show clearly how it results in striking disparities in the performance of black and white students and why it should be of serious concern to policymakers.
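Expressing a gap "in standard deviation units", as the readiness-test comparisons above do, means dividing the group mean difference by a pooled standard deviation; a minimal sketch:

```python
import math
import statistics

def standardized_gap(group_a, group_b):
    """Gap between two groups in pooled-standard-deviation units
    (Cohen's d style), as used to compare score gaps across tests."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd
```

Dividing by the pooled spread is what makes a "1 standard deviation" gap comparable across tests with different score scales.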
Single-Station Sigma for the Iranian Strong Motion Stations
NASA Astrophysics Data System (ADS)
Zafarani, H.; Soghrat, M. R.
2017-11-01
In development of ground motion prediction equations (GMPEs), the residuals are assumed to have a log-normal distribution with a zero mean and a standard deviation, designated as sigma. Sigma has a significant effect on evaluation of seismic hazard for designing important infrastructures such as nuclear power plants and dams. Both aleatory and epistemic uncertainties are involved in the sigma parameter. However, ground-motion observations over long time periods are not available at specific sites, and the GMPEs have been derived using observed data from multiple sites for a small number of well-recorded earthquakes. Therefore, sigma is dominantly related to the statistics of the spatial variability of ground motion instead of temporal variability at a single point (ergodic assumption). The main purpose of this study is to reduce the variability of the residuals so as to handle it as epistemic uncertainty. In this regard, we partially apply the non-ergodic assumption by removing repeatable site effects from the total variability of six GMPEs derived from local, Europe-Middle East and worldwide data. For this purpose, we used 1837 acceleration time histories from 374 shallow earthquakes with moment magnitudes ranging from Mw 4.0 to 7.3 recorded at 370 stations with at least two recordings per station. According to the estimated single-station sigma for the Iranian strong motion stations, the ratio of event-corrected single-station standard deviation (Φss) to within-event standard deviation (Φ) is about 0.75. In other words, removing the ergodic assumption on site response resulted in a 25% reduction of the within-event standard deviation, which reduced the total standard deviation by about 15%.
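The arithmetic of the reported reductions can be sketched with the usual decomposition of total sigma into between-event (tau) and within-event (phi) parts; the tau and phi values below are illustrative, not the study's:

```python
import math

def total_sigma(tau, phi):
    """Total GMPE standard deviation from between-event (tau) and
    within-event (phi) components."""
    return math.sqrt(tau**2 + phi**2)

# Illustrative numbers only: shrinking phi to phi_ss = 0.75 * phi
# (removing repeatable site effects) reduces the total sigma by less
# than 25%, because tau is untouched.
tau, phi = 0.35, 0.65
reduction = 1.0 - total_sigma(tau, 0.75 * phi) / total_sigma(tau, phi)
```

The exact total-sigma reduction depends on the tau/phi ratio, which is why a 25% cut in phi maps to only about 15% in the study's total sigma.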
Improving IQ measurement in intellectual disabilities using true deviation from population norms.
Sansone, Stephanie M; Schneider, Andrea; Bickel, Erika; Berry-Kravis, Elizabeth; Prescott, Christina; Hessl, David
2014-01-01
Intellectual disability (ID) is characterized by global cognitive deficits, yet the very IQ tests used to assess ID have limited range and precision in this population, especially for more impaired individuals. We describe the development and validation of a method of raw z-score transformation (based on general population norms) that ameliorates floor effects and improves the precision of IQ measurement in ID using the Stanford Binet 5 (SB5) in fragile X syndrome (FXS; n = 106), the leading inherited cause of ID, and in individuals with idiopathic autism spectrum disorder (ASD; n = 205). We compared the distributional characteristics and Q-Q plots from the standardized scores with the deviation z-scores. Additionally, we examined the relationship between both scoring methods and multiple criterion measures. We found evidence that substantial and meaningful variation in cognitive ability on standardized IQ tests among individuals with ID is lost when converting raw scores to standardized scaled, index and IQ scores. Use of the deviation z-score method rectifies this problem, and accounts for significant additional variance in criterion validation measures, above and beyond the usual IQ scores. Additionally, individual and group-level cognitive strengths and weaknesses are recovered using deviation scores. Traditional methods for generating IQ scores in lower functioning individuals with ID are inaccurate and inadequate, leading to erroneously flat profiles. However, assessment of cognitive abilities is substantially improved by measuring true deviation in performance from standardization sample norms. This work has important implications for standardized test development, clinical assessment, and research for which IQ is an important measure of interest in individuals with neurodevelopmental disorders and other forms of cognitive impairment.
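The deviation z-score itself is a simple transformation; what the method changes is applying it to raw scores against general-population norms instead of floor-limited standardized scales. A sketch (the example norm values follow the conventional IQ metric and are for illustration only):

```python
def deviation_z(raw_score, norm_mean, norm_sd):
    """z-score of a raw test score against general-population norms,
    preserving variation that floors out on standardized scales."""
    return (raw_score - norm_mean) / norm_sd
```

Two individuals who both receive the same floored standardized score can still get distinct deviation z-scores, which is the variation the paper recovers.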
Validation of 10 years of SAO OMI Ozone Profiles with Ozonesonde and MLS Observations
NASA Astrophysics Data System (ADS)
Huang, G.; Liu, X.; Chance, K.; Bhartia, P. K.
2015-12-01
To evaluate the accuracy and long-term stability of the SAO OMI ozone profile product, we validate ~10 years of ozone profile product (Oct. 2004-Dec. 2014) against collocated ozonesonde and MLS data. Ozone profiles, as well as stratospheric, tropospheric, and lower-tropospheric ozone columns, are compared with ozonesonde data for different latitude bands and time periods (e.g., 2004-2008 and 2009-2014, without and with the row anomaly). The mean biases and their standard deviations are also assessed as a function of time to evaluate the long-term stability and bias trends. In the mid-latitude and tropical regions, OMI generally shows good agreement with ozonesonde observations. The mean ozone profile biases are generally within 6% with up to 30% standard deviations. The biases of stratospheric ozone columns (SOC) and tropospheric ozone columns (TOC) are -0.3%-2.2% and -0.2%-3%, while standard deviations are 3.9%-5.8% and 14.4%-16.0%, respectively. However, the retrievals during 2009-2014 show larger standard deviations and larger temporal variations; the standard deviations increase by ~5% in the troposphere and ~2% in the stratosphere. Retrieval biases at individual levels in the stratosphere and upper troposphere show statistically significant trends and different trends for the 2004-2008 and 2009-2014 periods. The trends in integrated ozone partial columns are less significant due to cancellation from various layers, except for a significant trend in tropical SOC. These results suggest the need to perform time-dependent radiometric calibration to maintain the long-term stability of this product. Similarly, we are comparing the OMI stratospheric ozone profiles and SOC with collocated MLS data, and the results will be reported.
Revert Ventura, A J; Sanz Requena, R; Martí-Bonmatí, L; Pallardó, Y; Jornet, J; Gaspar, C
2014-01-01
To study whether the histograms of quantitative parameters of perfusion in MRI obtained from tumor volume and peritumor volume make it possible to grade astrocytomas in vivo. We included 61 patients with histological diagnoses of grade II, III, or IV astrocytomas who underwent T2*-weighted perfusion MRI after intravenous contrast agent injection. We manually selected the tumor volume and peritumor volume and quantified the following perfusion parameters on a voxel-by-voxel basis: blood volume (BV), blood flow (BF), mean transit time (TTM), transfer constant (K(trans)), washout coefficient, interstitial volume, and vascular volume. For each volume, we obtained the corresponding histogram with its mean, standard deviation, and kurtosis (using the standard deviation and kurtosis as measures of heterogeneity) and we compared the differences in each parameter between different grades of tumor. We also calculated the mean and standard deviation of the highest 10% of values. Finally, we performed a multiparametric discriminant analysis to improve the classification. For tumor volume, we found statistically significant differences among the three grades of tumor for the means and standard deviations of BV, BF, and K(trans), both for the entire distribution and for the highest 10% of values. For the peritumor volume, we found no significant differences for any parameters. The discriminant analysis improved the classification slightly. The quantification of the volume parameters of the entire region of the tumor with BV, BF, and K(trans) is useful for grading astrocytomas. The heterogeneity represented by the standard deviation of BF is the most reliable diagnostic parameter for distinguishing between low grade and high grade lesions. Copyright © 2011 SERAM. Published by Elsevier Espana. All rights reserved.
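The histogram descriptors used above (mean and standard deviation of each perfusion parameter, plus the same statistics for the highest 10% of values) can be sketched as follows; the "at least two points" guard is an implementation assumption, not from the paper:

```python
import statistics

def histogram_summary(values):
    """Mean and standard deviation of a perfusion-parameter histogram,
    plus the same statistics for the highest 10% of values (keeping at
    least two points so stdev is defined)."""
    s = sorted(values)
    top = s[-max(2, len(s) // 10):]
    return (statistics.mean(s), statistics.stdev(s),
            statistics.mean(top), statistics.stdev(top))
```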
NASA Astrophysics Data System (ADS)
Schupp, C. A.; McNinch, J. E.; List, J. H.; Farris, A. S.
2002-12-01
The formation and behavior of hotspots, or sections of the beach that exhibit markedly higher shoreline change rates than adjacent regions, are poorly understood. Several hotspots have been identified on the Outer Banks, a developed barrier island in North Carolina. To better understand hotspot dynamics and the potential relationship to the geologic framework in which they occur, the surf zone between Duck and Bodie Island was surveyed in June 2002 as part of a research effort supported by the U.S. Geological Survey and U.S. Army Corps of Engineers. Swath bathymetry, sidescan sonar, and chirp seismic were used to characterize a region 40 km long and 1 km wide. Hotspot locations were pinpointed using standard deviation values for shoreline position as determined by monthly SWASH buggy surveys of the mean high water contour between October 1999 and September 2002. Observational data and sidescan images were mapped to delineate regions of surficial sediment distributions, and regions of interest were ground-truthed via grab samples or visual inspection. General kilometer-scale correlation between acoustic backscatter and high shoreline standard deviation is evident. Acoustic returns are uniform in a region of Duck where standard deviation is low, but backscatter is patchy around the Kitty Hawk hotspot, where standard deviation is higher. Based on ground-truthing of an area further north, these patches are believed to be an older ravinement surface of fine sediment. More detailed analyses of the correlation between acoustic data, standard deviation, and hotspot locations will be presented. Future work will include integration of seismic, bathymetric, and sidescan data to better understand the links between sub-bottom geology, temporal changes in surficial sediments, surf-zone sediment budgets, and short-term changes in shoreline position and morphology.
Preliminary analysis of hot spot factors in an advanced reactor for space electric power systems
NASA Technical Reports Server (NTRS)
Lustig, P. H.; Holms, A. G.; Davison, H. W.
1973-01-01
The maximum fuel pin temperature for nominal operation in an advanced power reactor is 1370 K. Because of possible nitrogen embrittlement of the clad, the fuel temperature was limited to 1622 K. Assuming simultaneous occurrence of the most adverse conditions, a deterministic analysis gave a maximum fuel temperature of 1610 K. A statistical analysis, using a synthesized estimate of the standard deviation for the highest fuel pin temperature, showed a probability of 0.015 that this pin exceeds the temperature limit according to the distribution-free Chebyshev inequality, and a virtually nil probability assuming a normal distribution. The latter assumption gives a 1463 K maximum temperature at 3 standard deviations, the usually assumed cutoff. Further, the distribution and standard deviation of the fuel-clad gap are the most significant contributors to the uncertainty in the fuel temperature.
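The contrast between the two probability statements is exactly the gap between the distribution-free Chebyshev bound and the normal tail probability at the same number of standard deviations; a sketch:

```python
import math

def chebyshev_bound(k):
    """Distribution-free bound: P(|X - mu| >= k * sigma) <= 1 / k**2."""
    return 1.0 / k**2

def normal_tail(k):
    """Two-sided normal tail probability P(|Z| >= k) = erfc(k / sqrt(2))."""
    return math.erfc(k / math.sqrt(2.0))
```

At k = 3 the Chebyshev bound still allows about an 11% exceedance probability, while the normal tail is under 0.3%, which is why the normal assumption yields a "virtually nil" risk where Chebyshev does not.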
Influence of eye micromotions on spatially resolved refractometry
NASA Astrophysics Data System (ADS)
Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Osipova, Irina Y.
2001-01-01
The influence of eye micromotions on the accuracy of estimating Zernike coefficients from eye transverse aberration measurements was investigated. By computer modeling, the following eye aberrations were examined: defocusing, primary astigmatism, spherical aberration of the 3rd and the 5th orders, as well as their combinations. It was determined that the standard deviation of estimated Zernike coefficients is proportional to the standard deviation of angular eye movements. Eye micromotions cause estimation errors in the Zernike coefficients of aberrations that are present and produce the appearance of Zernike coefficients for aberrations absent in the eye. When solely defocusing is present, the biggest errors caused by eye micromotions are obtained for aberrations like coma and astigmatism. In comparison with other aberrations, spherical aberration of the 3rd and the 5th orders evokes the greatest increase of the standard deviation of other Zernike coefficients.
Barth, Nancy A.; Veilleux, Andrea G.
2012-01-01
The U.S. Geological Survey (USGS) is currently updating at-site flood-frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows and low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log-Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters of the LP3 distribution. A regional skew value of zero from a previously published report was used with a newly estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation model and a regional mean model based on annual peak-discharge data for 33 USGS stations throughout California's desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method to fit the LP3 distribution to the logarithms of the annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used to detect multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability in standard deviation. Consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with an MSE of 0.03 log units. Drainage area, however, was found to be statistically significant in explaining the site-to-site variability in the mean. The linear WLS regional mean model based on drainage area had a pseudo-R² of 51 percent and an MSE of 0.32 log units.
The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.
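An LP3 quantile of the kind described combines at-site log-space moments with the regional skew. A minimal sketch with hypothetical peak flows, using `scipy.stats.pearson3` as a plain moments fit (the report's EMA method additionally handles censored data and the low outliers flagged by the MGB test):

```python
import numpy as np
from scipy.stats import pearson3

# Hypothetical annual peak discharges (cfs); not data from the report.
peaks = np.array([12., 85., 3., 40., 150., 9., 60., 22., 5., 300.])
logq = np.log10(peaks)

mean, std = logq.mean(), logq.std(ddof=1)  # at-site log-space moments
skew = 0.0                                  # regional skew, as in the report

# 1-percent annual exceedance probability (100-year) flow:
q100 = 10 ** pearson3.ppf(0.99, skew, loc=mean, scale=std)
print(round(q100, 1))
```

With a skew of zero the LP3 distribution reduces to a log-normal, so the quantile is simply mean + 2.326 × std in log space.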
Khachatryan, Vardan
2015-08-27
A first search is reported for a standard model Higgs boson (H) that is produced through vector boson fusion and decays to a bottom-quark pair. Two data samples, corresponding to integrated luminosities of 19.8 fb⁻¹ and 18.3 fb⁻¹ of proton-proton collisions at √s = 8 TeV, were selected for this channel at the CERN LHC. The observed significance in these data samples for an H → bb̄ signal at a mass of 125 GeV is 2.2 standard deviations, while the expected significance is 0.8 standard deviations. The fitted signal strength is μ = σ/σ_SM = 2.8 +1.6/−1.4. The combination of this result with other CMS searches for the Higgs boson decaying to a b-quark pair yields a signal strength of 1.0 ± 0.4, corresponding to a signal significance of 2.6 standard deviations for a Higgs boson mass of 125 GeV.
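A significance quoted in standard deviations converts to a one-sided p-value through the normal tail probability; a minimal sketch:

```python
from math import sqrt, erf

def one_sided_p(z):
    """One-sided p-value for a significance of z standard deviations."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# Observed significances from the abstract:
print(round(one_sided_p(2.2), 4))  # 0.0139
print(round(one_sided_p(2.6), 4))  # 0.0047
```

So the 2.2-sigma vector-boson-fusion result and the 2.6-sigma combination correspond to p-values of roughly 1.4% and 0.5%, short of the 5-sigma discovery convention.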
Zarbo, Richard J; Copeland, Jacqueline R; Varney, Ruan C
2017-10-01
To develop a business subsystem fulfilling the International Organization for Standardization (ISO) 15189 nonconformance management regulatory standard, facilitating employee engagement in problem identification and resolution to effect quality improvement and risk mitigation. From 2012 to 2016, the integrated laboratories of the Henry Ford Health System used a quality technical team to develop and improve a management subsystem designed to identify, track, trend, and summarize nonconformances based on frequency, risk, and root cause for elimination at the level of the work. Programmatic improvements and training resulted in markedly increased documentation, culminating in 71,641 deviations in 2016 classified by a taxonomy of 281 defect types into preanalytic (74.8%), analytic (23.6%), and postanalytic (1.6%) testing phases. The top 10 deviations accounted for 55,843 (78%) of the total. Deviation management is a key subsystem of managers' standard work whereby knowledge of nonconformities assists in directing corrective actions and continuous improvements that promote consistent execution and higher levels of performance. © American Society for Clinical Pathology, 2017.
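The reported concentration of deviations is a classic Pareto pattern, easy to verify from the quoted counts:

```python
# Deviation counts for 2016, from the abstract.
total, top10 = 71641, 55843

share = top10 / total  # fraction of all deviations attributable to the top 10 defect types
print(f"{share:.0%}")  # 78%
```

A handful of defect types dominate the total, which is why the taxonomy-based trending described above can direct corrective action effectively.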
[Determination of acetochlor and oxyfluorfen by capillary gas chromatography].
Xiang, Wen-Sheng; Wang, Xiang-Jing; Wang, Jing; Wang, Qing
2002-09-01
A method is described for the determination of acetochlor and oxyfluorfen by capillary gas chromatography with FID and an SE-30 capillary column (60 m × 0.53 mm i.d., 1.5 μm), using dibutyl phthalate as the internal standard. The standard deviations for the acetochlor and oxyfluorfen concentrations (mass fraction) were 0.44% and 0.47%, respectively. The relative standard deviations for acetochlor and oxyfluorfen were 0.79% and 0.88%, and the average recoveries were 99.3% and 101.1%, respectively. The method is simple, rapid, and accurate.
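Precision figures of this kind come from replicate determinations. A minimal sketch with made-up replicate results (not the paper's data) showing how the standard deviation and relative standard deviation are computed:

```python
import statistics

# Hypothetical replicate results (mass fraction, %) for one analyte,
# quantified against an internal standard.
replicates = [40.1, 40.6, 39.8, 40.9, 40.3, 40.5]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)   # sample standard deviation
rsd = 100 * sd / mean               # relative standard deviation, %
print(round(sd, 2), round(rsd, 2))  # 0.39 0.96
```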
U.S. Navy Marine Climatic Atlas of the World. Volume IX. World-Wide Means and Standard Deviations
1981-10-01
[Garbled report-form and chart residue; recoverable statements follow.] The atlas gives the best estimates of the population standard deviations; the means are computed from the sum of the observations divided by their number. Since the mean ice limit approximates the minus two degree temperature isopleth, this analyzed lower limit was used. The atlas also covers wave heights.
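The distinction the atlas draws, a sample-based best estimate of the population standard deviation versus the population formula itself, can be illustrated with Python's `statistics` module (made-up station values):

```python
import statistics

# Hypothetical observations for one grid cell (e.g. air temperature, deg C).
data = [12.0, 15.5, 9.8, 14.2, 11.1]

mean = sum(data) / len(data)        # the mean: sum of observations over their number
samp_sd = statistics.stdev(data)    # divide by N-1: best estimate of the population SD
pop_sd = statistics.pstdev(data)    # divide by N: SD of the data treated as the population
print(round(mean, 2), round(samp_sd, 2), round(pop_sd, 2))  # 12.52 2.31 2.07
```

The N−1 divisor corrects the downward bias of the population formula when only a sample is available, which is why an atlas built from limited ship observations reports the sample-based estimate.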