Sample records for standard deviation calculated

  1. Estimate of standard deviation for a log-transformed variable using arithmetic means and standard deviations.

    PubMed

    Quan, Hui; Zhang, Ji

    2003-09-15

    Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference in the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline in the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
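
    The core identity behind such estimates: if X is lognormally distributed, the standard deviation of ln X follows directly from the arithmetic mean and SD of X. This is a minimal sketch of that textbook relation, not the paper's full estimator or its confidence intervals:

    ```python
    import math

    def sd_of_log(mean_x: float, sd_x: float) -> float:
        """Estimate the SD of ln(X) from the arithmetic mean and SD of X,
        assuming X is lognormal: Var[ln X] = ln(1 + (s/m)^2)."""
        cv = sd_x / mean_x
        return math.sqrt(math.log(1.0 + cv * cv))

    # Example: arithmetic mean 10, arithmetic SD 5 -> SD on the log scale
    print(sd_of_log(10.0, 5.0))  # ~0.472
    ```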

  2. Standard Deviation for Small Samples

    ERIC Educational Resources Information Center

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
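
    The paper's exact representations are not reproduced in this truncated snippet, but one standard identity in the same spirit writes the sample variance in terms of pairwise differences, which is easy to evaluate by hand for n = 3 or 4 integer observations. A sketch (an illustration, not necessarily the authors' formula):

    ```python
    from itertools import combinations

    def variance_pairwise(xs):
        """Sample variance via pairwise differences:
        s^2 = sum_{i<j} (x_i - x_j)^2 / (n * (n - 1))."""
        n = len(xs)
        return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

    # For n = 3 integers this is mental arithmetic: (1, 2, 4) ->
    # ((1-2)^2 + (1-4)^2 + (2-4)^2) / 6 = 14/6
    print(variance_pairwise([1, 2, 4]))  # 2.333...
    ```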

  3. 40 CFR 61.207 - Radium-226 sampling and measurement procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

... B, Method 114. (3) Calculate the mean, x̄1, and the standard deviation, s1, of the n1 radium-226... owner or operator of a phosphogypsum stack shall report the mean, standard deviation, 95th percentile..., Method 114. (4) Recalculate the mean and standard deviation of the entire set of n2 radium-226...

  4. Comparing biomarker measurements to a normal range: when to use standard error of the mean (SEM) or standard deviation (SD) confidence intervals tests.

    PubMed

    Pleil, Joachim D

    2016-01-01

    This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.
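
    The practical distinction the commentary discusses can be stated compactly: the SD describes the spread of individual measurements (appropriate when comparing a single result to a normal range), while the SEM = SD/√n describes the uncertainty of the sample mean (appropriate when comparing group means). A brief sketch:

    ```python
    import math

    measurements = [4.1, 4.8, 5.2, 3.9, 4.5, 5.0, 4.3, 4.7]
    n = len(measurements)
    mean = sum(measurements) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in measurements) / (n - 1))
    sem = sd / math.sqrt(n)  # uncertainty of the mean, not of individuals

    # SD answers: "is one new measurement inside the normal range?"
    # SEM answers: "is the group mean shifted from the reference mean?"
    print(f"mean={mean:.2f}  SD={sd:.2f}  SEM={sem:.2f}")
    ```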

  5. Models of Lift and Drag Coefficients of Stalled and Unstalled Airfoils in Wind Turbines and Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    2008-01-01

    Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally-stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets including 585 test points. Mean deviation was found to be -0.4 percent and standard deviation was 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, mean deviation from 54 data points was -1.3 percent and standard deviation was 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.

  6. A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY

    EPA Science Inventory

Large-scale laboratory- and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT = [experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
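
    The HORRAT denominator is conventionally taken from the Horwitz curve, PRSD(R) = 2·C^(−0.1505) with C the concentration expressed as a mass fraction; ratios roughly between 0.5 and 2 are usually taken as acceptable. A sketch of that convention (the inventory's exact procedure may differ):

    ```python
    def horrat(found_rsd_percent: float, conc_mass_fraction: float) -> float:
        """HORRAT = found among-lab RSD / Horwitz-predicted RSD.
        Predicted RSD (%) = 2 * C**(-0.1505), C as a mass fraction
        (e.g., 1 mg/kg = 1e-6)."""
        predicted = 2.0 * conc_mass_fraction ** -0.1505
        return found_rsd_percent / predicted

    print(horrat(16.0, 1e-6))  # analyte at 1 mg/kg, found RSD 16% -> ~1.0
    ```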

  7. Verification of calculated skin doses in postmastectomy helical tomotherapy.

    PubMed

    Ito, Shima; Parker, Brent C; Levine, Renee; Sanders, Mary Ella; Fontenot, Jonas; Gibbons, John; Hogstrom, Kenneth

    2011-10-01

To verify the accuracy of calculated skin doses in helical tomotherapy for postmastectomy radiation therapy (PMRT). In vivo thermoluminescent dosimeters (TLDs) were used to measure the skin dose at multiple points in each of 14 patients throughout the course of treatment on a TomoTherapy Hi·Art II system, for a total of 420 TLD measurements. Five patients were evaluated near the location of the mastectomy scar, whereas 9 patients were evaluated throughout the treatment volume. The measured dose at each location was compared with calculations from the treatment planning system. The mean difference and standard error of the mean difference between measurement and calculation for the scar measurements was -1.8% ± 0.2% (standard deviation [SD], 4.3%; range, -11.1% to 10.6%). The mean difference and standard error of the mean difference between measurement and calculation for measurements throughout the treatment volume was -3.0% ± 0.4% (SD, 4.7%; range, -18.4% to 12.6%). The mean difference and standard error of the mean difference between measurement and calculation for all measurements was -2.1% ± 0.2% (SD, 4.5%; range, -18.4% to 12.6%). The mean difference between measured and calculated TLD doses was statistically significant at two standard deviations of the mean, but was not clinically significant (i.e., was <5%). However, 23% of the measured TLD doses differed from the calculated TLD doses by more than 5%. The mean of the measured TLD doses agreed with TomoTherapy calculated TLD doses within our clinical criterion of 5%. Copyright © 2011 Elsevier Inc. All rights reserved.
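
    As a quick consistency check on the quoted figures: the standard error of the mean is SEM = SD/√n, and for the pooled set of 420 TLD measurements this reproduces the reported ±0.2%. (Subgroup sample sizes are not given in the abstract, so only the pooled value is checked.)

    ```python
    import math

    sd_all, n_all = 4.5, 420           # SD (%) and n quoted for all measurements
    sem_all = sd_all / math.sqrt(n_all)
    print(round(sem_all, 2))            # ~0.22, matching the quoted +/-0.2%
    ```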

  8. Verification of Calculated Skin Doses in Postmastectomy Helical Tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ito, Shima; Parker, Brent C., E-mail: bcparker@marybird.com; Mary Bird Perkins Cancer Center, Baton Rouge, LA

    2011-10-01

Purpose: To verify the accuracy of calculated skin doses in helical tomotherapy for postmastectomy radiation therapy (PMRT). Methods and Materials: In vivo thermoluminescent dosimeters (TLDs) were used to measure the skin dose at multiple points in each of 14 patients throughout the course of treatment on a TomoTherapy Hi·Art II system, for a total of 420 TLD measurements. Five patients were evaluated near the location of the mastectomy scar, whereas 9 patients were evaluated throughout the treatment volume. The measured dose at each location was compared with calculations from the treatment planning system. Results: The mean difference and standard error of the mean difference between measurement and calculation for the scar measurements was -1.8% ± 0.2% (standard deviation [SD], 4.3%; range, -11.1% to 10.6%). The mean difference and standard error of the mean difference between measurement and calculation for measurements throughout the treatment volume was -3.0% ± 0.4% (SD, 4.7%; range, -18.4% to 12.6%). The mean difference and standard error of the mean difference between measurement and calculation for all measurements was -2.1% ± 0.2% (SD, 4.5%; range, -18.4% to 12.6%). The mean difference between measured and calculated TLD doses was statistically significant at two standard deviations of the mean, but was not clinically significant (i.e., was <5%). However, 23% of the measured TLD doses differed from the calculated TLD doses by more than 5%. Conclusions: The mean of the measured TLD doses agreed with TomoTherapy calculated TLD doses within our clinical criterion of 5%.

  9. Extraction of Coastlines with Fuzzy Approach Using SENTINEL-1 SAR Image

    NASA Astrophysics Data System (ADS)

    Demir, N.; Kaynarca, M.; Oy, S.

    2016-06-01

Coastlines are important features for water resources, sea products, energy resources, etc. Coastlines change dynamically, so automated methods are necessary for analysing and detecting changes along them. In this study, a Sentinel-1 C-band SAR image has been used to extract the coastline with a fuzzy logic approach. The SAR image has VH polarisation and 10 x 10 m spatial resolution, and covers a 57 km² area in the south-east of Puerto Rico. Additionally, radiometric calibration is applied to reduce atmospheric and orbit error, and a speckle filter is used to reduce the noise. The image is then terrain-corrected using the SRTM digital surface model. Classification of a SAR image is a challenging task since SAR and optical sensors have very different properties; even between different bands of SAR sensors, the images look very different. The classification of a SAR image is therefore difficult with traditional unsupervised methods. In this study, a fuzzy approach has been applied to distinguish the coastal pixels from the land surface pixels. The standard deviation and the mean and median values are calculated for use as parameters in the fuzzy approach. The Mean-standard-deviation (MS) Large membership function is used because large amounts of land and ocean pixels dominate the SAR image with large mean and standard deviation values. The pixel values are multiplied by 1000 to simplify the calculations. The mean is calculated as 23 and the standard deviation as 12 for the whole image. The multiplier parameters are selected as a: 0.58, b: 0.05 to maximize the land surface membership. The result is evaluated using airborne LIDAR data (only for the areas where the LIDAR dataset is available) and, secondly, against a manually digitized coastline. The laser points below 0.5 m are classified as ocean points. The 3D alpha-shapes algorithm is used to detect the coastline points from the LIDAR data. Minimum distances are calculated between the LIDAR coastline points and the extracted coastline. The statistics of the distances are as follows: the mean is 5.82 m, the standard deviation is 5.83 m and the median value is 4.08 m. Secondly, the extracted coastline is also evaluated with manually created lines on the SAR image. Both lines are converted to dense points at 1 m intervals. Then the closest distances are calculated between the points from the extracted coastline and the manually created coastline. The mean is 5.23 m, the standard deviation is 4.52 m and the median value is 4.13 m for the calculated distances. The evaluation values are within the accuracy of the used SAR data for both quality assessment approaches.
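
    The MS Large membership function referenced here is commonly defined (e.g., in GIS fuzzy-membership tools) from the image mean m and standard deviation s with multipliers a and b; the authors' exact implementation may differ, so the following is an illustrative sketch using their reported parameters:

    ```python
    def ms_large(x: float, m: float, s: float, a: float, b: float) -> float:
        """MS Large fuzzy membership (a common GIS-style definition, assumed here):
        mu(x) = 1 - (b*s) / (x - a*m + b*s)  for x > a*m, else 0."""
        if x <= a * m:
            return 0.0
        return 1.0 - (b * s) / (x - a * m + b * s)

    # Parameters reported in the abstract: mean 23, SD 12, a = 0.58, b = 0.05
    for x in (10, 15, 23, 40):
        print(x, round(ms_large(x, m=23, s=12, a=0.58, b=0.05), 3))
    ```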

  10. Variability of pesticide detections and concentrations in field replicate water samples collected for the National Water-Quality Assessment Program, 1992-97

    USGS Publications Warehouse

    Martin, Jeffrey D.

    2002-01-01

Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability for pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
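
    Pooling is done on the variances, each replicate set weighted by its degrees of freedom; for duplicate pairs this reduces to s_pooled = sqrt(Σdᵢ²/2k). A sketch of pooled SD and pooled RSD (the report's binning by concentration class is omitted):

    ```python
    import math

    def pooled_sd(replicate_sets):
        """s_pooled = sqrt( sum((n_i - 1) * s_i^2) / sum(n_i - 1) )."""
        num = den = 0.0
        for xs in replicate_sets:
            n = len(xs)
            mean = sum(xs) / n
            num += sum((x - mean) ** 2 for x in xs)   # (n_i - 1) * s_i^2
            den += n - 1
        return math.sqrt(num / den)

    def pooled_rsd(replicate_sets):
        """Same pooling applied to relative deviations (scaled by each set's mean)."""
        num = den = 0.0
        for xs in replicate_sets:
            n = len(xs)
            mean = sum(xs) / n
            num += sum(((x - mean) / mean) ** 2 for x in xs)
            den += n - 1
        return math.sqrt(num / den)

    pairs = [[0.10, 0.12], [0.95, 1.05], [0.050, 0.046]]  # duplicate concentrations, ug/L
    print(pooled_sd(pairs), pooled_rsd(pairs))
    ```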

  11. [Study of building quantitative analysis model for chlorophyll in winter wheat with reflective spectrum using MSC-ANN algorithm].

    PubMed

    Liang, Xue; Ji, Hai-yan; Wang, Peng-xin; Rao, Zhen-hong; Shen, Bing-hui

    2010-01-01

The multiplicative scatter correction (MSC) preprocessing method was used to effectively reject noises in the original spectra produced by environmental physical factors; then the principal components of the near-infrared spectra were calculated by nonlinear iterative partial least squares (NIPALS) before building the back-propagation artificial neural network (BP-ANN) model, with the number of principal components determined by cross validation. The calculated principal components were used as the inputs of the artificial neural network model, and the model was used to relate the chlorophyll content of winter wheat to the reflectance spectrum and thereby predict chlorophyll content. The correlation coefficient (r) of the calibration set was 0.9604, while the standard deviation (SD) and relative standard deviation (RSD) were 0.187 and 5.18%, respectively. The correlation coefficient (r) of the prediction set was 0.9600, and the standard deviation (SD) and relative standard deviation (RSD) were 0.145 and 4.21%, respectively. This means that the MSC-ANN algorithm can effectively reject noises in the original spectra produced by environmental physical factors and can establish an accurate model for predicting the chlorophyll content of living leaves, replacing the classical method and meeting the needs of fast analysis of agricultural products.

  12. A log-normal distribution model for the molecular weight of aquatic fulvic acids

    USGS Publications Warehouse

    Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.

    2000-01-01

The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a log-normal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured Mn and Mw and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.
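
    Given the lognormal assumption, the model's parameters follow in closed form from the number- and weight-average molecular weights: if ln M ~ N(m, s²), then Mn = e^(m+s²/2), Mw = e^(m+3s²/2), so Mw/Mn = exp(s²). A sketch of that standard relation, consistent with the quoted ranges of 2.7-3 (mean) and 0.28-0.37 (SD) in log10 units; the Mn and Mw inputs below are illustrative:

    ```python
    import math

    def lognormal_params(mn: float, mw: float):
        """Mean and SD of log10(M) from number- and weight-average MW,
        assuming ln M ~ Normal(m, s^2)."""
        s = math.sqrt(math.log(mw / mn))     # natural-log SD
        m = math.log(mn) - 0.5 * s * s       # natural-log mean
        ln10 = math.log(10.0)
        return m / ln10, s / ln10            # converted to log10 units

    # Illustrative aquatic fulvic acid values, e.g. Mn ~ 700, Mw ~ 1200
    print(lognormal_params(700.0, 1200.0))   # ~ (2.73, 0.32)
    ```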

  13. Analytical probabilistic proton dose calculation and range uncertainties

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Hennig, P.; Oelfke, U.

    2014-03-01

We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫dz p(z) d(z) and ∫dz p(z) d(z)² required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μk, widths δk, and weights ωk of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high-accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two-dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
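
    The analytic tractability rests on a standard identity: a Gaussian dose component integrated against a Gaussian range uncertainty is again Gaussian, ∫ N(z; z₀, σ²) N(z; μk, δk²) dz = N(z₀; μk, σ² + δk²). A one-dimensional sketch of the expected-dose part only (the paper's machinery also covers d(z)² and correlations); all numbers are illustrative:

    ```python
    import math

    def gauss(x: float, mu: float, var: float) -> float:
        return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

    def expected_dose(z0: float, sigma: float, weights, mus, deltas):
        """E[d] = sum_k w_k * N(z0; mu_k, sigma^2 + delta_k^2): each Gaussian
        dose component convolved with the Gaussian range uncertainty."""
        return sum(w * gauss(z0, mu, sigma**2 + d**2)
                   for w, mu, d in zip(weights, mus, deltas))

    # Toy two-component depth-dose parameterization
    print(expected_dose(z0=10.0, sigma=0.3,
                        weights=[0.7, 0.3], mus=[10.0, 10.5], deltas=[0.4, 0.6]))
    ```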

  14. A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.

    PubMed

    Rhiel, G Steven

    2007-02-01

This research study proves that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. In this proof it is shown that the d(n) and a(n) values are applicable to the specific skewed distributions when the mean and standard deviation take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.

  15. OPR-PPR, a Computer Program for Assessing Data Importance to Model Predictions Using Linear Statistics

    USGS Publications Warehouse

    Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.

    2007-01-01

    The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.

  16. Remote auditing of radiotherapy facilities using optically stimulated luminescence dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lye, Jessica, E-mail: jessica.lye@arpansa.gov.au; Dunn, Leon; Kenny, John

Purpose: On 1 July 2012, the Australian Clinical Dosimetry Service (ACDS) released its Optically Stimulated Luminescent Dosimeter (OSLD) Level I audit, replacing the previous TLD based audit. The aim of this work is to present the results from this new service and the complete uncertainty analysis on which the audit tolerances are based. Methods: The audit release was preceded by a rigorous evaluation of the InLight® nanoDot OSLD system from Landauer (Landauer, Inc., Glenwood, IL). Energy dependence, signal fading from multiple irradiations, batch variation, reader variation, and dose response factors were identified and quantified for each individual OSLD. The detectors are mailed to the facility in small PMMA blocks, based on the design of the existing Radiological Physics Centre audit. Modeling and measurement were used to determine a factor that could convert the dose measured in the PMMA block to dose in water for the facility's reference conditions. This factor is dependent on the beam spectrum. The TPR20,10 was used as the beam quality index to determine the specific block factor for a beam being audited. The audit tolerance was defined using a rigorous uncertainty calculation. The audit outcome is then determined using a scientifically based two-tiered action level approach. Audit outcomes within two standard deviations were defined as Pass (Optimal Level), within three standard deviations as Pass (Action Level), and outside of three standard deviations the outcome is Fail (Out of Tolerance). Results: To date the ACDS has audited 108 photon beams with TLD and 162 photon beams with OSLD. The TLD audit results had an average deviation from ACDS of 0.0% and a standard deviation of 1.8%. The OSLD audit results had an average deviation of −0.2% and a standard deviation of 1.4%. The relative combined standard uncertainty was calculated to be 1.3% (1σ). Pass (Optimal Level) was reduced to ≤2.6% (2σ), and Fail (Out of Tolerance) was reduced to >3.9% (3σ) for the new OSLD audit. Previously with the TLD audit the Pass (Optimal Level) and Fail (Out of Tolerance) were set at ≤4.0% (2σ) and >6.0% (3σ). Conclusions: The calculated standard uncertainty of 1.3% at one standard deviation is consistent with the measured standard deviation of 1.4% from the audits, confirming the suitability of the uncertainty-budget-derived audit tolerances. The OSLD audit shows greater accuracy than the previous TLD audit, justifying the reduction in audit tolerances. In the TLD audit, all outcomes were Pass (Optimal Level), suggesting that the tolerances were too conservative. In the OSLD audit 94% of the audits have resulted in Pass (Optimal Level) and 6% of the audits have resulted in Pass (Action Level). All Pass (Action Level) results have been resolved with a repeat OSLD audit, or an on-site ion chamber measurement.

  17. Age-independent anti-Müllerian hormone (AMH) standard deviation scores to estimate ovarian function.

    PubMed

    Helden, Josef van; Weiskirchen, Ralf

    2017-06-01

To determine single-year age-specific anti-Müllerian hormone (AMH) standard deviation scores (SDS) for women, associated with normal ovarian function and with different ovarian disorders resulting in sub- or infertility. Determination of single-year median and mean AMH values with standard deviations (SD), and calculation of age-independent cut-off SDS for the discrimination between normal ovarian function and ovarian disorders. Single-year-specific median, mean, and SD values have been evaluated for the Beckman Access AMH immunoassay. While the decrease of both median and mean AMH values is strongly correlated with increasing age, the calculated SDS values have been shown to be age independent, differentiating between normal ovarian function (measured as occurred ovulation with sufficient luteal activity) and either hyperandrogenemic cycle disorders or anovulation associated with high AMH values, or reduced ovarian activity or insufficiency associated with low AMH, respectively. These results will be helpful for the treatment of patients and the evaluation of the different reproductive options. Copyright © 2017 Elsevier B.V. All rights reserved.
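
    The standard deviation score here is the z-score against the age-specific reference, SDS = (AMH − mean_age)/SD_age, which removes the age trend and yields an age-independent index. A sketch with hypothetical reference values (the published single-year tables for the Beckman Access assay should be used in practice):

    ```python
    # Hypothetical single-year reference values (mean, SD) in ng/mL; the
    # published tables for the Beckman Access assay are the real source.
    REFERENCE = {25: (4.2, 2.5), 30: (3.4, 2.1), 35: (2.4, 1.7), 40: (1.4, 1.2)}

    def amh_sds(amh: float, age: int) -> float:
        """Age-independent standard deviation score: (value - age mean) / age SD."""
        mean, sd = REFERENCE[age]
        return (amh - mean) / sd

    print(amh_sds(0.8, 35))   # ~ -0.94: below the age-specific mean
    ```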

  18. Sensitivity of species to chemicals: dose-response characteristics for various test types (LC(50), LR(50) and LD(50)) and modes of action.

    PubMed

    Hendriks, A Jan; Awkerman, Jill A; de Zwart, Dick; Huijbregts, Mark A J

    2013-11-01

While variable sensitivity of model species to common toxicants has been addressed in previous studies, a systematic analysis of inter-species variability for different test types, modes of action and species is as yet lacking. Hence, the aim of the present study was to identify similarities and differences in contaminant levels affecting cold-blooded and warm-blooded species administered via different routes. To that end, data on lethal water concentrations LC50, tissue residues LR50 and oral doses LD50 were collected from databases, each representing the largest of its kind. LC50 data were multiplied by a bioconcentration factor (BCF) to convert them to internal concentrations that allow for comparison among species. For each endpoint data set, we calculated the mean and standard deviation of species' lethal levels per compound. Next, the means and standard deviations were averaged by mode of action. Both the means and standard deviations calculated depended on the number of species tested, which is at odds with quality standard setting procedures. Means calculated from (BCF-converted) LC50, LR50 and LD50 were largely similar, suggesting that different administration routes roughly yield similar internal levels. Levels for compounds interfering biochemically with elementary life processes were about one order of magnitude below those of narcotics disturbing membranes, and neurotoxic pesticides and dioxins induced death in even lower amounts. Standard deviations for LD50 data were similar across modes of action, while variability of LC50 values was lower for narcotics than for substances with a specific mode of action. The study indicates several directions to go for efficient use of available data in risk assessment and reduction of species testing. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Host model uncertainties in aerosol radiative forcing estimates: results from the AeroCom prescribed intercomparison study

    NASA Astrophysics Data System (ADS)

    Stier, P.; Schutgens, N. A. J.; Bian, H.; Boucher, O.; Chin, M.; Ghan, S.; Huneeus, N.; Kinne, S.; Lin, G.; Myhre, G.; Penner, J. E.; Randles, C.; Samset, B.; Schulz, M.; Yu, H.; Zhou, C.

    2012-09-01

Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in model components relevant for forcing calculations and the associated "host-model uncertainties" are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in nine participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is -4.51 W m⁻² and the inter-model standard deviation is 0.70 W m⁻², corresponding to a relative standard deviation of 15%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.26 W m⁻², and the standard deviation increases to 1.21 W m⁻², corresponding to a significant relative standard deviation of 96%. However, the top-of-atmosphere forcing variability owing to absorption is low, with relative standard deviations of 9% clear-sky and 12% all-sky. Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host model uncertainties could explain about half of the overall sulfate forcing diversity of 0.13 W m⁻² in the AeroCom Direct Radiative Effect experiment. Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks or areas with poorly constrained surface albedos, such as sea ice. Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that requires further attention.

  20. SU-E-I-59: Investigation of the Usefulness of a Standard Deviation and Mammary Gland Density as Indexes for Mammogram Classification.

    PubMed

    Takarabe, S; Yabuuchi, H; Morishita, J

    2012-06-01

To investigate the usefulness of the standard deviation of pixel values in a whole mammary glands region and the percentage of a high-density mammary glands region to the whole mammary glands region as features for classification of mammograms into four categories based on the ACR BI-RADS breast composition. We used 36 digital mediolateral oblique view mammograms (18 patients) approved by our IRB. These images were classified into the four categories of breast composition by an experienced breast radiologist, and the results of the classification were regarded as a gold standard. First, the whole mammary region in a breast was divided into two regions, a high-density mammary glands region and a low/iso-density mammary glands region, by using a threshold value that was obtained from the pixel values corresponding to the pectoral muscle region. Then the percentage of the high-density mammary glands region to the whole mammary glands region was calculated. In addition, as a new method, the standard deviation of pixel values in the whole mammary glands region was calculated as an index based on the intermingling of mammary glands and fats. Finally, all mammograms were classified by using the combination of the percentage of the high-density mammary glands region and the standard deviation of each image. The agreement rate of the classification between our proposed method and the gold standard was 86% (31/36). This result signified that our method has the potential to classify mammograms. The combination of the standard deviation of pixel values in a whole mammary glands region and the percentage of a high-density mammary glands region to the whole mammary glands region was available as features to classify mammograms based on the ACR BI-RADS breast composition. © 2012 American Association of Physicists in Medicine.
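
    The two features reduce to simple statistics over the segmented glandular region; a sketch assuming the gland mask and the pectoral-muscle-derived threshold are already available:

    ```python
    import numpy as np

    def mammogram_features(image: np.ndarray, gland_mask: np.ndarray,
                           high_density_threshold: float):
        """Features for BI-RADS breast-composition classification (sketch):
        1) SD of pixel values over the whole mammary-glands region, and
        2) percent of that region above the pectoral-muscle-derived threshold."""
        gland = image[gland_mask]
        sd = float(gland.std(ddof=1))
        pct_dense = 100.0 * float((gland > high_density_threshold).mean())
        return sd, pct_dense

    # Toy data: 100x100 image with a random mask and threshold for illustration
    rng = np.random.default_rng(0)
    img = rng.normal(100, 20, (100, 100))
    mask = np.zeros((100, 100), bool)
    mask[20:80, 20:80] = True
    print(mammogram_features(img, mask, high_density_threshold=110.0))
    ```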

1. SU-F-T-564: 3-Year Experience of Treatment Plan Quality Assurance for Vero SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Z; Li, Z; Mamalui, M

    2016-06-15

Purpose: To verify treatment plan monitor units from the iPlan treatment planning system for Vero Stereotactic Body Radiotherapy (SBRT) treatments using both software-based and (homogeneous and heterogeneous) phantom-based approaches. Methods: Dynamic conformal arcs (DCA) were used for SBRT treatment of oligometastasis patients on a Vero linear accelerator. For each plan, the Monte Carlo-calculated treatment plan MU (prescribed dose to water with 1% variance) is verified first by RadCalc software with a 3% difference threshold. For differences beyond 3%, treatment plans were copied onto a (homogeneous) Scanditronix phantom for non-lung patients or onto a (heterogeneous) CIRS phantom for lung patients, and the corresponding plan dose was measured using a cc01 ion chamber. The difference between the planned and measured dose was recorded. For the past 3 years, we have treated 180 patients with 315 targets. Of these, 99 target treatment plans exceeded the 3% RadCalc threshold, and phantom-based measurements were performed for 26 plans using the Scanditronix phantom and 73 plans using the CIRS phantom. The mean and standard deviation of the dose differences were obtained and presented. Results: For all patient RadCalc calculations, the mean dose difference is 0.76% with a standard deviation of 5.97%. For non-lung patient plans measured on the Scanditronix phantom, the mean dose difference is 0.54% with a standard deviation of 2.53%; for lung patient plans measured on the CIRS phantom, the mean dose difference is −0.04% with a standard deviation of 1.09%. The maximum dose difference is 3.47% for the Scanditronix phantom measurements and 3.08% for the CIRS phantom measurements. Conclusion: Limitations in the secondary MU check software lead to perceived large dose discrepancies for some of the lung patient SBRT treatment plans. Homogeneous and heterogeneous phantoms were used in plan quality assurance for non-lung patients and lung patients, respectively. Phantom-based QA showed relatively good agreement between the iPlan-calculated dose and the measured dose.

  2. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    NASA Technical Reports Server (NTRS)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the Gaussian (normal) distribution function.
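
    For binary signaling with Gaussian noise, the error probability is the Gaussian tail beyond the decision threshold, BER = Q(μ/σ) = ½·erfc(μ/(σ√2)), with μ and σ taken as the mean and standard deviation extracted from the S-parameter measurements. A sketch of that standard relation (not necessarily the authors' exact procedure):

    ```python
    import math

    def ber_from_gaussian(mean: float, sd: float) -> float:
        """BER = Q(mean/sd) = 0.5 * erfc(mean / (sd * sqrt(2))):
        probability that zero-mean Gaussian noise exceeds the signal margin."""
        return 0.5 * math.erfc(mean / (sd * math.sqrt(2.0)))

    # Example: signal margin six times the noise SD -> BER ~ 1e-9
    print(ber_from_gaussian(6.0, 1.0))  # ~9.9e-10
    ```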

  3. Characterization of solar cells for space applications. Volume 5: Electrical characteristics of OCLI 225-micron MLAR wraparound cells as a function of intensity, temperature, and irradiation

    NASA Technical Reports Server (NTRS)

    Anspaugh, B. E.; Miyahira, T. F.; Weiss, R. S.

    1979-01-01

Computed statistical averages and standard deviations with respect to the measured cells for each intensity-temperature measurement condition are presented. Averages and standard deviations of the cell characteristics are displayed in a two-dimensional array format: one dimension representing incoming light intensity, and the other, the cell temperature. Programs for calculating the temperature coefficients of the pertinent cell electrical parameters are presented, and postirradiation data are summarized.

  4. Analysis of Thermal Design of Heating Units with Meteorological Climate Peculiarities

    NASA Astrophysics Data System (ADS)

    Seminenko, A. S.; Elistratova, Y. V.; Pererva, M. I.; Moiseev, M. V.

    2018-03-01

This article is devoted to the analysis of the thermal design of heating units, one of the compulsory calculations for heating systems, which ensures their stable and efficient operation. The article analyses the option of a single-pipe heating system with shifted end-capping areas and an overhead supply main; the difference is shown between the results of the heat balance equation of the heating unit and the calculation of the actual heat flux (heat transfer coefficient), taking into account deviation from the standardized (technical passport) operating conditions. The calculation of the thermal conditions of residential premises is given, and the deviation of the internal air temperature is shown, taking into account the discrepancy between the calculation results for thermal energy.

  5. Host model uncertainties in aerosol radiative forcing estimates: results from the AeroCom Prescribed intercomparison study

    NASA Astrophysics Data System (ADS)

    Stier, P.; Schutgens, N. A. J.; Bellouin, N.; Bian, H.; Boucher, O.; Chin, M.; Ghan, S.; Huneeus, N.; Kinne, S.; Lin, G.; Ma, X.; Myhre, G.; Penner, J. E.; Randles, C. A.; Samset, B.; Schulz, M.; Takemura, T.; Yu, F.; Yu, H.; Zhou, C.

    2013-03-01

Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in model components relevant for forcing calculations and the associated "host-model uncertainties" are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in twelve participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is -4.47 W m⁻² and the inter-model standard deviation is 0.55 W m⁻², corresponding to a relative standard deviation of 12%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.04 W m⁻², and the standard deviation increases to 1.01 W m⁻², corresponding to a significant relative standard deviation of 97%. However, the top-of-atmosphere forcing variability owing to absorption (subtracting the scattering case from the case with scattering and absorption) is low, with absolute (relative) standard deviations of 0.45 W m⁻² (8%) clear-sky and 0.62 W m⁻² (11%) all-sky. Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host model uncertainties could explain about 36% of the overall sulfate forcing diversity of 0.11 W m⁻² in the AeroCom Direct Radiative Effect experiment. Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks or areas with poorly constrained surface albedos, such as sea ice. Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that requires further attention.

  6. Comparison of Ionospheric Scintillation Statistics from the North Atlantic and Alaskan Sectors of the Auroral Oval Using the Wideband Satellite

    DTIC Science & Technology

    1981-09-15

Standard deviation of the detrended phase component is calculated as ⟨φ²⟩^(1/2) in radians, as measured at the receiver output, and not corrected for... next section, were calculated they were corrected for the finite receiver reference frequency of 402 MHz in the following manner. Assuming a... for quiet and disturbed times. The position of the geometrical enhancement for individual cases is between 60-61° Λ rather than between 63-64° Λ as

  7. A Taxonomy of Delivery and Documentation Deviations During Delivery of High-Fidelity Simulations.

    PubMed

    McIvor, William R; Banerjee, Arna; Boulet, John R; Bekhuis, Tanja; Tseytlin, Eugene; Torsher, Laurence; DeMaria, Samuel; Rask, John P; Shotwell, Matthew S; Burden, Amanda; Cooper, Jeffrey B; Gaba, David M; Levine, Adam; Park, Christine; Sinz, Elizabeth; Steadman, Randolph H; Weinger, Matthew B

    2017-02-01

    We developed a taxonomy of simulation delivery and documentation deviations noted during a multicenter, high-fidelity simulation trial that was conducted to assess practicing physicians' performance. Eight simulation centers sought to implement standardized scenarios over 2 years. Rules, guidelines, and detailed scenario scripts were established to facilitate reproducible scenario delivery; however, pilot trials revealed deviations from those rubrics. A taxonomy with hierarchically arranged terms that define a lack of standardization of simulation scenario delivery was then created to aid educators and researchers in assessing and describing their ability to reproducibly conduct simulations. Thirty-six types of delivery or documentation deviations were identified from the scenario scripts and study rules. Using a Delphi technique and open card sorting, simulation experts formulated a taxonomy of high-fidelity simulation execution and documentation deviations. The taxonomy was iteratively refined and then tested by 2 investigators not involved with its development. The taxonomy has 2 main classes, simulation center deviation and participant deviation, which are further subdivided into as many as 6 subclasses. Inter-rater classification agreement using the taxonomy was 74% or greater for each of the 7 levels of its hierarchy. Cohen kappa calculations confirmed substantial agreement beyond that expected by chance. All deviations were classified within the taxonomy. This is a useful taxonomy that standardizes terms for simulation delivery and documentation deviations, facilitates quality assurance in scenario delivery, and enables quantification of the impact of deviations upon simulation-based performance assessment.
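
    The chance-corrected agreement reported here is Cohen's kappa, κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected from the raters' marginal frequencies. A minimal two-rater sketch (the labels are illustrative):

    ```python
    from collections import Counter

    def cohen_kappa(rater_a, rater_b):
        """kappa = (p_o - p_e) / (1 - p_e) for two raters over the same items."""
        n = len(rater_a)
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        ca, cb = Counter(rater_a), Counter(rater_b)
        p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    a = ["center", "participant", "center", "center", "participant", "center"]
    b = ["center", "participant", "participant", "center", "participant", "center"]
    print(round(cohen_kappa(a, b), 3))  # 0.667: substantial agreement beyond chance
    ```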

  8. Differential standard deviation of log-scale intensity based optical coherence tomography angiography.

    PubMed

    Shi, Weisong; Gao, Wanrong; Chen, Chaoliang; Yang, Victor X D

    2017-12-01

In this paper, a differential standard deviation of log-scale intensity (DSDLI) based optical coherence tomography angiography (OCTA) method is presented for calculating microvascular images of human skin. The DSDLI algorithm calculates the variance in difference images of two consecutive log-scale intensity based structural images from the same position along the depth direction to contrast blood flow. The en face microvascular images were then generated by calculating the standard deviation of the differential log-scale intensities within a specific depth range, resulting in an improvement in spatial resolution and SNR in microvascular images compared to speckle variance OCT and the power intensity differential method. The performance of DSDLI was tested in both phantom and in vivo experiments. In the in vivo experiments, a self-adaptive sub-pixel image registration algorithm was performed to remove bulk motion noise, where a 2D Fourier transform was utilized to generate new images with spatial interval equal to half of the distance between two pixels in both the fast-scanning and depth directions. The SNRs of signals of flowing particles were improved by 7.3 dB and 6.8 dB on average in the phantom and in vivo experiments, respectively, while the average spatial resolution of images of in vivo blood vessels was increased by 21%. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
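
    In outline, the DSDLI contrast differences two consecutive log-scale structural frames from the same position and then takes the standard deviation over a chosen depth range; a much-simplified sketch, assuming frames already registered for bulk motion (not the exact published algorithm):

    ```python
    import numpy as np

    def dsdli_en_face(frame1: np.ndarray, frame2: np.ndarray,
                      z0: int, z1: int) -> np.ndarray:
        """Simplified DSDLI-style contrast: difference two consecutive log-scale
        structural B-scans (depth x lateral), then take the SD of the
        differential log intensities over the chosen depth range. Flow
        decorrelates the frames, so vessel pixels show a larger SD."""
        diff = frame2 - frame1           # (depth, x) differential log intensity
        return diff[z0:z1].std(axis=0)   # (x,) en face microvascular line

    rng = np.random.default_rng(1)
    b1, b2 = rng.normal(size=(2, 256, 512))
    print(dsdli_en_face(b1, b2, z0=50, z1=150).shape)  # (512,)
    ```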

  9. Range and Energy Straggling in Ion Beam Transport

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tai, Hsiang

    2000-01-01

    A first-order approximation to the range and energy straggling of ion beams is given as a normal distribution for which the standard deviation is estimated from the fluctuations in energy loss events. The standard deviation is calculated by assuming scattering from free electrons with a long range cutoff parameter that depends on the mean excitation energy of the medium. The present formalism is derived by extrapolating Payne's formalism to low energy by systematic energy scaling and to greater depths of penetration by a second-order perturbation. Limited comparisons are made with experimental data.

  10. Eye micromotions influence on an error of Zernike coefficients reconstruction in the one-ray refractometry of an eye

    NASA Astrophysics Data System (ADS)

    Osipova, Irina Y.; Chyzh, Igor H.

    2001-06-01

The influence of eye jumps on the accuracy of estimation of Zernike coefficients from eye transverse aberration measurements was investigated. Ametropia and astigmatism were examined by computer modeling. The standard deviation of the wave aberration function was calculated. It was determined that the standard deviation of the wave aberration function achieves its minimum value when the number of scanning points is equal to the number of eye jumps in the scanning period. Recommendations for the duration of measurement were worked out.

  11. Ultrasonic imaging system for in-process fabric defect detection

    DOEpatents

    Sheen, Shuh-Haw; Chien, Hual-Te; Lawrence, William P.; Raptis, Apostolos C.

    1997-01-01

An ultrasonic method and system are provided for monitoring a fabric to identify a defect. A plurality of ultrasonic transmitters generate ultrasonic waves relative to the fabric. An ultrasonic receiver means responsive to the generated ultrasonic waves from the transmitters receives ultrasonic waves coupled through the fabric and generates a signal. An integrated peak value of the generated signal is applied to a digital signal processor and is digitized. The digitized signal is processed to identify a defect in the fabric. The digitized signal processing includes a median-value filtering step to filter out high-frequency noise. Then the mean value and standard deviation of the median-value-filtered signal are calculated. The calculated mean value and standard deviation are compared with predetermined threshold values to identify a defect in the fabric.
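
    The decision chain in the final steps amounts to a median filter followed by a mean/SD comparison against preset limits; a hedged sketch of that pipeline (threshold values are illustrative, not from the patent):

    ```python
    import statistics

    def detect_defect(signal, window=5, mean_limits=(0.8, 1.2), sd_limit=0.1):
        """Sketch of the decision chain: median-filter the integrated peak
        values to suppress high-frequency noise, then flag a defect when the
        filtered mean or SD leaves the predetermined thresholds."""
        half = window // 2
        filtered = [statistics.median(signal[max(0, i - half): i + half + 1])
                    for i in range(len(signal))]
        mean = statistics.fmean(filtered)
        sd = statistics.stdev(filtered)
        return mean < mean_limits[0] or mean > mean_limits[1] or sd > sd_limit

    # A sustained drop in coupled ultrasound survives the median filter
    print(detect_defect([1.0, 1.01, 0.55, 0.54, 0.56, 1.02, 1.0]))  # True
    ```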

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qu, H; Yu, N; Qi, P

Purpose: In a commercial secondary dose calculation system, an average effective depth is used to calculate the monitor units (MU) for an arc beam from a volumetric modulated arc therapy (VMAT) plan. Typically, an arithmetic mean of the effective depths (AMED) of a VMAT arc beam is used, which may result in large MU discrepancies from the primary treatment planning system. This study demonstrates that use of a dose-weighted mean effective depth (DWED) can improve the accuracy of MU calculation for secondary MU verification. Methods: In-house scripts were written in the primary treatment planning system (TPS) to first convert a VMAT arc beam to a series of static step-and-shoot beams (every 4 degrees). The computed dose and effective depth of each static beam were then used to obtain the dose-weighted mean effective depth (DWED) for the VMAT beam. The DWED was used for the secondary MU calculation for VMAT plans. Six lung SBRT VMAT plans, eight head and neck VMAT plans and ten prostate VMAT plans that had >5% MU deviations (failed MU verification) using the AMED method were recalculated with the DWED. For comparison, the same number of VMAT plans that had <5% MU deviations (passed MU verification) using the AMED method were also reevaluated with the dose-weighted mean effective depth method. Results: For the MU-verification-passed plans, the mean and standard deviation of MU differences between the TPS and the secondary calculation program were 2.2% ± 1.5% for the AMED and 2.1% ± 1.7% for the DWED method. For the failed plans, the mean and standard deviation of MU differences between the TPS and the secondary calculation program were 9.9% ± 4.7% and 4.7% ± 2.6%, respectively. Conclusion: The dose-weighted mean effective depth improved MU calculation accuracy and can be used for pre-treatment MU verification of VMAT plans.
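
    The correction is a dose-weighted average over the discretized arc: DWED = Σᵢ Dᵢdᵢ / Σᵢ Dᵢ, so beams that contribute more dose pull the effective depth toward their own path length, unlike a plain arithmetic mean over control points. A sketch under the abstract's description (numbers are illustrative):

    ```python
    def dose_weighted_effective_depth(doses, effective_depths):
        """DWED = sum(D_i * d_i) / sum(D_i) over the static beams obtained by
        discretizing a VMAT arc (e.g., every 4 degrees)."""
        total = sum(doses)
        return sum(d * w for d, w in zip(effective_depths, doses)) / total

    depths = [6.0, 9.0, 14.0, 18.0]    # cm, per discretized beam
    doses  = [40.0, 30.0, 20.0, 10.0]  # relative dose contributed by each beam
    print(dose_weighted_effective_depth(doses, depths))  # 9.7 vs 11.75 arithmetic
    ```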

  13. Determining Normal-Distribution Tolerance Bounds Graphically

    NASA Technical Reports Server (NTRS)

    Mezzacappa, M. A.

    1983-01-01

Graphical method requires calculations and table lookup. Distribution established from only three points: upper and lower confidence bounds of the mean and lower confidence bound of the standard deviation. Method requires only a few calculations with simple equations. Graphical procedure establishes best-fit line for measured data and bounds for selected confidence level and any distribution percentile.

  14. Soil bulk density and soil moisture calculated with a FORTRAN 77 program.

    Treesearch

    G.L. Starr; J.M. Geist

    1988-01-01

    This paper presents an improved version of BDEN, an interactive computer program written in FORTRAN 77 that will calculate soil bulk density and moisture percentage by weight and volume. Calculations allow for deducting coarse fragment weight and volume. The program will also summarize the resulting data by giving the mean, standard deviation, and 95-percent confidence...

  15. In vivo dosimetry for external photon treatments of head and neck cancers by diodes and TLDS.

    PubMed

    Tung, C J; Wang, H C; Lo, S H; Wu, J M; Wang, C J

    2004-01-01

In vivo dosimetry was implemented for treatments of head and neck cancers in large fields. Diode and thermoluminescence dosemeter (TLD) measurements were carried out on linear accelerators with 6 MV photon beams. ESTRO in vivo dosimetry protocols were followed in the determination of midline doses from measurements of entrance and exit doses. Of the fields monitored by diodes, the maximum absolute deviation of measured midline doses from planned target doses was 8%, with a mean value and standard deviation of -1.0 and 2.7%. If planned target doses were calculated using radiological water-equivalent thicknesses rather than patient geometric thicknesses, the maximum absolute deviation dropped to 4%, with a mean and standard deviation of 0.7 and 1.8%. For in vivo dosimetry monitored by TLDs, the shift in mean dose remained small but the statistical precision became poor.

  16. Filling the voids in the SRTM elevation model — A TIN-based delta surface approach

    NASA Astrophysics Data System (ADS)

    Luedeling, Eike; Siebert, Stefan; Buerkert, Andreas

    The Digital Elevation Model (DEM) derived from NASA's Shuttle Radar Topography Mission is the most accurate near-global elevation model that is publicly available. However, it contains many data voids, mostly in mountainous terrain. This problem is particularly severe in the rugged Oman Mountains. This study presents a method to fill these voids using a fill surface derived from Russian military maps. For this we developed a new method, which is based on Triangular Irregular Networks (TINs). For each void, we extracted points around the edge of the void from the SRTM DEM and the fill surface. TINs were calculated from these points and converted to a base surface for each dataset. The fill base surface was subtracted from the fill surface, and the result added to the SRTM base surface. The fill surface could then seamlessly be merged with the SRTM DEM. For validation, we compared the resulting DEM to the original SRTM surface, to the fill DEM and to a surface calculated by the International Center for Tropical Agriculture (CIAT) from the SRTM data. We calculated the differences between measured GPS positions and the respective surfaces for 187,500 points throughout the mountain range (ΔGPS). Comparison of the means and standard deviations of these values showed that for the void areas, the fill surface was most accurate, with a standard deviation of the ΔGPS from the mean ΔGPS of 69 m, and only little accuracy was lost by merging it to the SRTM surface (standard deviation of 76 m). The CIAT model was much less accurate in these areas (standard deviation of 128 m). The results show that our method is capable of transferring the relative vertical accuracy of a fill surface to the void areas in the SRTM model, without introducing uncertainties about the absolute elevation of the fill surface. It is well suited for datasets with varying altitude biases, which is a common problem of older topographic information.
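
    A compact way to express the delta-surface idea: a linear interpolator over the ring of valid pixels around a void acts as the TIN-derived base surface, and the fill is shifted by the difference of the two base surfaces before merging. A sketch assuming co-registered grids (function and variable names are mine, not from the paper):

    ```python
    import numpy as np
    from scipy.interpolate import LinearNDInterpolator
    from scipy.ndimage import binary_dilation

    def fill_voids_delta_surface(srtm, fill, void_mask):
        """filled = fill + (srtm_base - fill_base) inside the void, where both
        base surfaces are linear (TIN-style) interpolations of the values on
        the ring of valid pixels around the void edge. This transfers the fill
        source's relative relief without trusting its absolute altitude."""
        ring = binary_dilation(void_mask) & ~void_mask   # pixels around the edge
        pts = np.argwhere(ring).astype(float)
        srtm_base = LinearNDInterpolator(pts, srtm[ring])
        fill_base = LinearNDInterpolator(pts, fill[ring])
        out = srtm.copy()
        voids = np.argwhere(void_mask).astype(float)
        delta = srtm_base(voids) - fill_base(voids)      # altitude bias of the fill
        out[void_mask] = fill[void_mask] + delta
        return out

    # Toy check: a fill source with a constant +120 m bias is corrected exactly
    rng = np.random.default_rng(3)
    srtm = rng.normal(500, 50, (50, 50))
    fill = srtm + 120.0
    voids = np.zeros((50, 50), bool)
    voids[20:30, 20:30] = True
    srtm_with_void = srtm.copy()
    srtm_with_void[voids] = np.nan
    filled = fill_voids_delta_surface(srtm_with_void, fill, voids)
    print(np.allclose(filled[voids], srtm[voids], atol=1.0))  # True
    ```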

  17. Comparative study of navigated versus freehand osteochondral graft transplantation of the knee.

    PubMed

    Koulalis, Dimitrios; Di Benedetto, Paolo; Citak, Mustafa; O'Loughlin, Padhraig; Pearle, Andrew D; Kendoff, Daniel O

    2009-04-01

Osteochondral lesions are a common sports-related injury for which osteochondral grafting, including mosaicplasty, is an established treatment. Computer navigation has been gaining popularity in orthopaedic surgery to improve accuracy and precision. Navigation improves angle and depth matching during harvest and placement of osteochondral grafts compared with the conventional freehand open technique. Controlled laboratory study. Three cadaveric knees were used. Reference markers were attached to the femur, tibia, and donor/recipient site guides. Fifteen osteochondral grafts were harvested and inserted into recipient sites with computer navigation, and 15 similar grafts were inserted freehand. The angles of graft removal and placement as well as surface congruity (graft depth) were calculated for each surgical group. The mean harvesting angle at the donor site using navigation was 4° (standard deviation, 2.3°; range, 1°-9°) versus 12° (standard deviation, 5.5°; range, 5°-24°) using the freehand technique (P < .0001). The recipient plug removal angle using the navigated technique was 3.3° (standard deviation, 2.1°; range, 0°-9°) versus 10.7° (standard deviation, 4.9°; range, 2°-17°) freehand (P < .0001). The mean navigated recipient plug placement angle was 3.6° (standard deviation, 2.0°; range, 1°-9°) versus 10.6° (standard deviation, 4.4°; range, 3°-17°) with the freehand technique (P = .0001). The mean height of plug protrusion under navigation was 0.3 mm (standard deviation, 0.2 mm; range, 0-0.6 mm) versus 0.5 mm (standard deviation, 0.3 mm; range, 0.2-1.1 mm) using the freehand technique (P = .0034). Significantly greater accuracy and precision were observed in harvesting and placement of the osteochondral grafts in the navigated procedures. Clinical studies are needed to establish a benefit in vivo. Improvement in osteochondral harvest and placement is desirable to optimize clinical outcomes. Navigation shows great potential to improve both harvest and placement precision and accuracy, thus optimizing ultimate surface congruity.

  18. Discovery of Finely Structured Dynamic Solar Corona Observed in the Hi-C Telescope

    NASA Technical Reports Server (NTRS)

    Winebarger, A.; Cirtain, J.; Golub, L.; DeLuca, E.; Savage, S.; Alexander, C.; Schuler, T.

    2014-01-01

In the summer of 2012, the High-resolution Coronal Imager (Hi-C) flew aboard a NASA sounding rocket and collected the highest spatial resolution images ever obtained of the solar corona. One of the goals of the Hi-C flight was to characterize the substructure of the solar corona. We therefore examine how the intensity scales from AIA resolution to Hi-C resolution. For each low-resolution pixel, we calculate the standard deviation in the contributing high-resolution pixel intensities and compare that to the expected standard deviation calculated from the noise. If these numbers are approximately equal, the corona can be assumed to be smoothly varying, i.e., have no evidence of substructure in the Hi-C image to within Hi-C's ability to measure it given its throughput and readout noise. A standard deviation much larger than the noise value indicates the presence of substructure. We calculate these values for each low-resolution pixel for each frame of the Hi-C data. On average, 70 percent of the pixels in each Hi-C image show no evidence of substructure. Substructure is prevalent in the moss regions and in regions of sheared magnetic field. We also find that the level of substructure varies significantly over the roughly 160 s of the Hi-C data analyzed here. This result indicates that the finely structured corona is concentrated in regions of heating and is highly time dependent.
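
    The substructure test compares, for each low-resolution pixel, the measured spread of the contributing high-resolution intensities with the spread expected from noise alone. A schematic version, with an assumed shot-plus-read-noise model standing in for the Hi-C team's detailed noise budget:

    ```python
    import numpy as np

    def has_substructure(hi_res_block: np.ndarray, gain: float,
                         read_noise: float, k: float = 2.0) -> bool:
        """Flag substructure when the SD of the high-resolution pixels inside
        one low-resolution pixel clearly exceeds the SD expected from noise.
        Assumed noise model: shot noise (variance = signal/gain) plus readout
        noise; the real analysis uses the instrument's noise budget."""
        measured_sd = hi_res_block.std(ddof=1)
        expected_sd = np.sqrt(hi_res_block.mean() / gain + read_noise**2)
        return measured_sd > k * expected_sd

    # A smoothly varying (pure shot noise) block should not be flagged
    block = np.random.default_rng(2).poisson(400.0, size=(6, 6)).astype(float)
    print(has_substructure(block, gain=1.0, read_noise=5.0))  # False
    ```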

  19. DISCOVERY OF FINELY STRUCTURED DYNAMIC SOLAR CORONA OBSERVED IN THE Hi-C TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winebarger, Amy R.; Cirtain, Jonathan; Savage, Sabrina

    In the summer of 2012, the High-resolution Coronal Imager (Hi-C) flew on board a NASA sounding rocket and collected the highest spatial resolution images ever obtained of the solar corona. One of the goals of the Hi-C flight was to characterize the substructure of the solar corona. We therefore examine how the intensity scales from AIA resolution to Hi-C resolution. For each low-resolution pixel, we calculate the standard deviation in the contributing high-resolution pixel intensities and compare that to the expected standard deviation calculated from the noise. If these numbers are approximately equal, the corona can be assumed to be smoothly varying, i.e., have no evidence of substructure in the Hi-C image to within Hi-C's ability to measure it given its throughput and readout noise. A standard deviation much larger than the noise value indicates the presence of substructure. We calculate these values for each low-resolution pixel for each frame of the Hi-C data. On average, 70% of the pixels in each Hi-C image show no evidence of substructure. The locations where substructure is prevalent are the moss regions and regions of sheared magnetic field. We also find that the level of substructure varies significantly over the roughly 160 s of the Hi-C data analyzed here. This result indicates that the finely structured corona is concentrated in regions of heating and is highly time dependent.

  20. An empirical analysis of the distribution of the duration of overshoots in a stationary gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Parrish, R. S.; Carter, M. C.

    1974-01-01

    This analysis utilizes computer simulation and statistical estimation. Realizations of stationary gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and crossing level. Using values for the mean and standard deviation predicted by the method of moments, the distribution parameters were estimated. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
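
    The quantity being estimated is easy to reproduce empirically: simulate a stationary Gaussian process, find the excursions above the crossing level, and tabulate how long they last. A minimal sketch, using an AR(1) sequence as the discrete analogue of an exponentially autocorrelated process; all parameter values are illustrative.

```python
import numpy as np

def overshoot_durations(x, level):
    """Durations (in samples) of excursions of x above the crossing level."""
    above = np.concatenate(([0], (x > level).astype(int), [0]))
    d = np.diff(above)
    starts = np.flatnonzero(d == 1)
    ends = np.flatnonzero(d == -1)
    return ends - starts

# Stationary Gaussian process simulated as an AR(1) sequence, the discrete
# analogue of an exponentially autocorrelated process (illustrative values).
rng = np.random.default_rng(1)
rho, n, level = 0.9, 100_000, 1.0   # autocorrelation, sample size, crossing level
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = rho * x[t - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()

dur = overshoot_durations(x, level)
print("mean duration:", dur.mean(), " P(duration >= 10):", (dur >= 10).mean())
```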

  1. Densely calculated facial soft tissue thickness for craniofacial reconstruction in Chinese adults.

    PubMed

    Shui, Wuyang; Zhou, Mingquan; Deng, Qingqiong; Wu, Zhongke; Ji, Yuan; Li, Kang; He, Taiping; Jiang, Haiyan

    2016-09-01

    Craniofacial reconstruction (CFR) is used to recreate a likeness of the original facial appearance for an unidentified skull; this technique has been applied in both forensics and archeology. Many CFR techniques rely on the average facial soft tissue thickness (FSTT) at anatomical landmarks, which is related to ethnicity, age, sex, body mass index (BMI), etc. Previous studies typically employed FSTT at sparsely distributed anatomical landmarks, where differing landmark definitions can make results difficult to compare. In the present study, a total of 90,198 one-to-one corresponding skull vertices are established on 171 head CT-scans and the FSTT of each corresponding vertex is calculated (hereafter referred to as densely calculated FSTT) for statistical analysis and CFR. Basic descriptive statistics (i.e., mean and standard deviation) for densely calculated FSTT are reported separately according to sex and age. Results show that at 76.12% of all vertices the FSTT is greater in males than in females, with the exception of vertices around the zygoma, zygomatic arch and mid-lateral orbit. Significant sex-related differences are found at 55.12% of all vertices, and statistically significant age-related differences between the three age groups are found at a majority of vertices (73.31% for males and 63.43% for females). Five non-overlapping categories are given and the descriptive statistics (i.e., mean, standard deviation, local standard deviation and percentage) are reported. Multiple appearances are produced using the densely calculated FSTT of various age and sex groups, and a quantitative assessment is provided to examine how relevant the choice of FSTT is to increasing the accuracy of CFR. In conclusion, this study provides a new perspective on the distribution of FSTT and the construction of a new densely calculated FSTT database for craniofacial reconstruction. Copyright © 2016. Published by Elsevier Ireland Ltd.

  2. Comparing biomarker measurements to a normal range: when to use standard error of the mean (SEM) or standard deviation (SD) confidence intervals tests

    EPA Science Inventory

    This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around...

  3. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods

    NASA Astrophysics Data System (ADS)

    Marchant, T. E.; Joshi, K. D.; Moore, C. J.

    2018-03-01

    Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).

  4. BDEN: A timesaving computer program for calculating soil bulk density and water content.

    Treesearch

    Lynn G. Starr; Michael J. Geist

    1983-01-01

    This paper presents an interactive computer program written in BASIC language that will calculate soil bulk density and moisture percentage by weight and volume. Coarse fragment weights are required. The program will also summarize the resulting data giving mean, standard deviation, and 95-percent confidence interval on one or more groupings of data.
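
    The calculations such a program automates are straightforward. The sketch below shows the underlying arithmetic under common conventions (oven-dry mass basis, a coarse-fragment correction assuming a 2.65 g/cm3 particle density, and a normal-approximation 95% confidence interval); all field names and sample values are hypothetical, and the original BASIC program's exact formulas may differ.

```python
import math

def bulk_density(wet_g, dry_g, core_cm3, coarse_g=0.0, coarse_density=2.65):
    """Fine-soil bulk density (g/cm3) plus gravimetric and volumetric
    water content. coarse_g corrects for coarse fragments, assuming a
    particle density of 2.65 g/cm3 for the fragments (an assumption)."""
    coarse_cm3 = coarse_g / coarse_density
    fine_dry = dry_g - coarse_g
    bd = fine_dry / (core_cm3 - coarse_cm3)
    theta_w = (wet_g - dry_g) / fine_dry   # moisture percentage by weight
    theta_v = theta_w * bd                 # moisture percentage by volume
    return bd, theta_w, theta_v

def summarize(values):
    """Mean, sample standard deviation, and a 95% confidence interval
    (normal approximation; the original program's method may differ)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    half = 1.96 * sd / math.sqrt(n)
    return mean, sd, (mean - half, mean + half)

samples = [bulk_density(310.0, 255.0, 180.0, coarse_g=20.0)[0],
           bulk_density(298.0, 249.0, 180.0, coarse_g=15.0)[0],
           bulk_density(305.0, 252.0, 180.0, coarse_g=18.0)[0]]
print(summarize(samples))
```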

  5. Acoustic Fluctuations: Guidelines for R and D Based on the Acoustic Fluctuation Workshop 22-23 February 1978

    DTIC Science & Technology

    1978-11-28

    Noise was sponsored by CNO (OP-95) and supported by the Chief of Naval Research (CNR) and held at Woods Hole Oceanographic Institution (WHOI) in October... (Figure: surface and bottom array geometry.) (C) Calculate the standard deviation of phase-difference fluctuations as a function of integration time...

  6. Refractive index, molar refraction and comparative refractive index study of propylene carbonate binary liquid mixtures.

    PubMed

    Wankhede, Dnyaneshwar Shamrao

    2012-06-01

    Refractive indices (n) have been experimentally determined for the binary liquid-liquid mixtures of Propylene carbonate (PC) (1) with benzene, ethylbenzene, o-xylene and p-xylene (2) at 298.15, 303.15 and 308.15 K over the entire mole fraction range. The experimental values of n are utilised to calculate deviation in refractive index (Δn), molar refraction (R) and deviation in molar refraction (ΔR). A comparative study of Arago-Biot (A-B), Newton (NW), Eyring and John (E-J) equations for determining refractive index of a liquid has been carried out to test their validity for all the binary mixtures over the entire composition range at 298.15 K. Comparison of various mixing relations is represented in terms of average deviation (AVD). The Δn and ΔR values have been fitted to Redlich-Kister equation at 298.15 K and standard deviations have been calculated. The results are discussed in terms of intermolecular interactions present amongst the components.
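
    The Redlich-Kister step in such studies fits the deviation property to Δn = x1·x2·Σk Ak·(x1 − x2)^k and reports the standard deviation of the residuals. Below is a least-squares sketch of that fit; the data points are synthetic stand-ins, not the measured PC mixtures.

```python
import numpy as np

def redlich_kister_fit(x1, delta, order=2):
    """Fit delta = x1*x2 * sum_k A_k (x1 - x2)**k  (x2 = 1 - x1) and
    return the coefficients A_k and the standard deviation of the fit."""
    x2 = 1.0 - x1
    basis = np.column_stack([(x1 - x2) ** k for k in range(order + 1)])
    design = basis * (x1 * x2)[:, None]
    coeffs, *_ = np.linalg.lstsq(design, delta, rcond=None)
    resid = delta - design @ coeffs
    # Standard deviation with n - (order + 1) degrees of freedom
    sigma = np.sqrt(np.sum(resid ** 2) / (len(delta) - (order + 1)))
    return coeffs, sigma

# Synthetic deviation-in-refractive-index data (illustrative only)
x1 = np.linspace(0.05, 0.95, 10)
delta_n = x1 * (1.0 - x1) * (0.012 - 0.004 * (2.0 * x1 - 1.0))
coeffs, sigma = redlich_kister_fit(x1, delta_n)
print("A_k:", coeffs, " sigma:", sigma)
```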

  7. Aerosol Measurements in the Mid-Atlantic: Trends and Uncertainty

    NASA Astrophysics Data System (ADS)

    Hains, J. C.; Chen, L. A.; Taubman, B. F.; Dickerson, R. R.

    2006-05-01

    Elevated levels of PM2.5 are associated with cardiovascular and respiratory problems and even increased mortality rates. In 2002 we ran two commonly used PM2.5 speciation samplers (an IMPROVE sampler and an EPA sampler) in parallel at Fort Meade, Maryland (a suburban site located in the Baltimore- Washington urban corridor). The filters were analyzed at different labs. This experiment allowed us to calculate the 'real world' uncertainties associated with these instruments. The EPA method retrieved a January average PM2.5 mass of 9.3 μg/m3 with a standard deviation of 2.8 μg/m3, while the IMPROVE method retrieved an average mass of 7.3 μg/m3 with a standard deviation of 2.1 μg/m3. The EPA method retrieved a July average PM2.5 mass of 26.4 μg/m3 with a standard deviation of 14.6 μg/m3, while the IMPROVE method retrieved an average mass of 23.3 μg/m3 with a standard deviation of 13.0 μg/m3. We calculated a 5% uncertainty associated with the EPA and IMPROVE methods that accounts for uncertainties in flow control strategies and laboratory analysis. The RMS difference between the two methods in January was 2.1 μg/m3, which is about 25% of the monthly average mass and greater than the uncertainty we calculated. In July the RMS difference between the two methods was 5.2 μg/m3, about 20% of the monthly average mass, and greater than the uncertainty we calculated. The EPA methods retrieve consistently higher concentrations of PM2.5 than the IMPROVE methods on a daily basis in January and July. This suggests a systematic bias possibly resulting from contamination of either of the sampling methods. We reconstructed the mass and found that both samplers have good correlation between reconstructed and gravimetric mass, though the IMPROVE method has slightly better correlation than the EPA method. In January, organic carbon is the largest contributor to PM2.5 mass, and in July both sulfate and organic matter contribute substantially to PM2.5. Source apportionment models suggest that regional and local power plants are the major sources of sulfate, while mobile and vegetative burning factors are the major sources of organic carbon.

  8. Stopping characteristics of boron and indium ions in silicon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veselov, D. S., E-mail: DSVeselov@mephi.ru; Voronov, Yu. A.

    2016-12-15

    The mean range and its standard deviation are calculated for boron ions implanted into silicon with energies below 10 keV. Similar characteristics are calculated for indium ions with energies below 200 keV. The obtained results are presented in tabular and graphical forms. These results may help in the assessment of conditions of production of integrated circuits with nanometer-sized elements.

  9. Assessment of Vegetation Destruction Due to Wenchuan Earthquake and Its Recovery Process Using MODIS Data

    NASA Astrophysics Data System (ADS)

    Zou, Z.; Xiao, X.

    2015-12-01

    With a high temporal resolution and a large coverage area, MODIS data are particularly useful in assessing vegetation destruction and recovery over wide regions. In this study, MOD13Q1 data of the growing season (Mar. to Nov.) are used to calculate the Maximum NDVI (NDVImax) of each year. This study calculates each pixel's mean and standard deviation of the NDVImax values in the 8 years before the earthquake. If a pixel's NDVImax of 2008 is more than two standard deviations below the mean NDVImax, the pixel is flagged as vegetation-destructed. For each vegetation-destructed pixel, similar pixels of the same vegetation type are selected within a latitude difference of 0.5 degrees, an altitude difference of 100 meters and a slope difference of 3 degrees. Then the NDVImax difference between each vegetation-destructed pixel and its similar pixels is calculated. The 5 similar pixels with the smallest NDVImax difference in the 8 years before the earthquake are selected as reference pixels. The mean NDVImaxs of these reference pixels after the earthquake are calculated and serve as the criterion to assess the vegetation recovery process.
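
    The detection rule is a per-pixel two-sigma anomaly screen: flag 2008 wherever its NDVImax falls more than two pre-quake standard deviations below the pre-quake mean. A compact sketch with a hypothetical (years x pixels) array standing in for real MOD13Q1 composites:

```python
import numpy as np

# ndvi_max: hypothetical array of shape (years, pixels); rows 0-7 are the
# eight pre-earthquake growing seasons (2000-2007), row 8 is 2008.
rng = np.random.default_rng(7)
ndvi_max = rng.normal(0.75, 0.03, size=(9, 1000))
ndvi_max[8, :100] -= 0.15              # simulate destroyed vegetation

baseline = ndvi_max[:8]
mean = baseline.mean(axis=0)
sd = baseline.std(axis=0, ddof=1)
destroyed = ndvi_max[8] < mean - 2 * sd   # the paper's two-sigma rule
print("flagged pixels:", destroyed.sum())
```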

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Y; Lacroix, F; Lavallee, M

    Purpose: To evaluate the commercially released Collapsed Cone convolution-based (CCC) dose calculation module of the Elekta OncentraBrachy (OcB) treatment planning system (TPS). Methods: An all-water phantom was used to perform TG43 benchmarks with a single source and with seventeen sources, separately. Furthermore, four real-patient heterogeneous geometries (chestwall, lung, breast and prostate) were used. They were selected based on their clinical representativity of a class of clinical anatomies that pose clear challenges. The plans were used as is (no modification). For each case, TG43 and CCC calculations were performed in the OcB TPS, with TG186-recommended materials properly assigned to ROIs. For comparison, a Monte Carlo simulation was run for each case with the same material scheme and grid mesh as the TPS calculations. Both modes of CCC (standard and high quality) were tested. Results: For the benchmark case, the CCC dose, when divided by that of TG43, yields hot and cold spots in a radial pattern. The pattern of the high mode is denser than that of the standard mode and is representative of angular discretization. The total deviation ((hot-cold)/TG43) is 18% for standard mode and 11% for high mode. Seventeen dwell positions help to reduce the "ray effect", lowering the total deviation to 6% (standard) and 5% (high), respectively. For the four patient cases, CCC produces, as expected, more realistic dose distributions than TG43. A close agreement was observed between CCC and MC for all isodose lines from 20% and up; the 10% isodose line of CCC appears shifted compared to that of MC. The DVH plots show dose deviations of CCC from MC in small-volume, high-dose regions (>100% isodose). For the patient cases, the difference between standard and high modes is almost indiscernible. Conclusion: The OncentraBrachy CCC algorithm marks a significant dosimetry improvement relative to TG43 in real-patient cases. Further research is recommended regarding the clinical implications of the above observations. Support provided by a CIHR grant and the CCC system provided by Elekta-Nucletron.

  11. ROBUST: an interactive FORTRAN-77 package for exploratory data analysis using parametric, ROBUST and nonparametric location and scale estimates, data transformations, normality tests, and outlier assessment

    NASA Astrophysics Data System (ADS)

    Rock, N. M. S.

    ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.) (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality: Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data-transformations also can be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and be concerned with missing data. Cumulative S-plots (for assessing normality graphically) also can be generated. The mutual consistency or inconsistency of all these measures helps to detect errors in data as well as to assess data-distributions themselves.
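
    Several of the tabulated measures are easy to reproduce. The sketch below computes a median, the median absolute deviation, Geary's ratio (about 0.7979 for a normal distribution), and a Tukey biweight scale estimate; it illustrates the kind of statistics the package reports rather than porting the FORTRAN-77 estimators themselves, and the tuning constant c = 9 is a conventional choice assumed here.

```python
import numpy as np

def robust_summary(x):
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))            # median absolute deviation
    sd = x.std(ddof=1)
    geary = np.mean(np.abs(x - x.mean())) / sd  # Geary's ratio
    # Tukey biweight (midvariance-based) scale estimate, tuning c = 9
    u = (x - med) / (9.0 * mad)
    w = np.abs(u) < 1.0
    num = np.sum(w * (x - med) ** 2 * (1 - u ** 2) ** 4)
    den = np.sum(w * (1 - u ** 2) * (1 - 5 * u ** 2))
    s_bi = np.sqrt(len(x) * num) / np.abs(den)
    return {"median": med, "MAD": mad, "sd": sd,
            "geary": geary, "biweight": s_bi}

rng = np.random.default_rng(3)
data = np.append(rng.normal(10, 2, 100), 60.0)   # one gross outlier
print(robust_summary(data))   # sd is inflated; median/MAD/biweight are not
```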

  12. Volumetric and viscometric study of molecular interactions in the mixtures of some secondary alcohols with equimolar mixture of ethanol and N, N-dimethylacetamide at 308.15 K

    NASA Astrophysics Data System (ADS)

    Sreekanth, K.; Sravana Kumar, D.; Kondaiah, M.; Krishna Rao, D.

    2011-02-01

    Densities and viscosities of mixtures of isopropanol, isobutanol and isoamyl alcohol with an equimolar mixture of ethanol and N, N-dimethylacetamide have been measured at 308.15 K over the entire composition range. Deviations in viscosity, excess molar volume and excess Gibbs free energy of activation of viscous flow have been calculated from the experimental values of densities and viscosities. Excess properties have been fitted to the Redlich-Kister type polynomial equation and the corresponding standard deviations have been calculated. The experimental viscosity data have been used to test the applicability of the empirical relations of Grunberg-Nissan, Hind-McLaughlin, Katti-Chaudhary and Heric-Brewer for the systems studied. Molecular interactions in the liquid mixtures have been investigated in the light of the variation of the deviation and excess values of the evaluated properties.

  13. Nuclear isospin effect on α-decay half-lives

    NASA Astrophysics Data System (ADS)

    Akrawy, Dashty T.; Hassanabadi, H.; Hosseini, S. S.; Santhosh, K. P.

    2018-07-01

    The α-decay half-lives of 356 even-even, even-odd, odd-even and odd-odd nuclei in the range 52 ≤ Zp ≤ 118 have been studied with the analytical formula of Royer and with a modified version of that formula. We obtained new coefficients for the Royer formula by fitting the 356 isotopes. We also considered the Denisov and Khudenko formula and obtained new coefficients for its modified form. The standard deviation and the average deviation were calculated, and the analytical results were compared with the experimental data. The results agree better with experiment when the isospin effect of the parent nuclei is considered.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, D; Meier, J; Mawlawi, O

    Purpose: Use a NEMA-IEC PET phantom to assess the robustness of FDG-PET-based radiomics features to changes in reconstruction parameters across different scanners. Methods: We scanned a NEMA-IEC PET phantom on 3 different scanners (GE Discovery VCT, GE Discovery 710, and Siemens mCT) using a FDG source-to-background ratio of 10:1. Images were retrospectively reconstructed using different iterations (2–3), subsets (21–24), Gaussian filter widths (2, 4, 6 mm), and matrix sizes (128, 192, 256). The 710 and mCT used time-of-flight and point-spread functions in reconstruction. The axial image through the center of the 6 active spheres was used for analysis. A region-of-interest containing all spheres was able to simulate a heterogeneous lesion due to partial volume effects. Maximum voxel deviations from all retrospectively reconstructed images (18 per scanner) were compared to our standard clinical protocol. PET images from 195 non-small cell lung cancer patients were used to compare feature variation. The ratio of a feature's standard deviation from the patient cohort versus the phantom images was calculated to assess feature robustness. Results: Across all images, the percentage of voxels differing by <1 SUV and <2 SUV ranged from 61–92% and 88–99%, respectively. Voxel-voxel similarity decreased when using higher resolution image matrices (192/256 versus 128) and was comparable across scanners. Taking the ratio of patient and phantom feature standard deviations was able to identify features that were not robust to changes in reconstruction parameters (e.g. co-occurrence correlation). Metrics found to be reasonably robust (standard deviation ratios > 3) included routinely used SUV metrics (e.g. SUVmean and SUVmax) as well as some radiomics features (e.g. co-occurrence contrast, co-occurrence energy, standard deviation, and uniformity). Similar standard deviation ratios were observed across scanners. Conclusions: Our method enabled a comparison of feature variability across scanners and was able to identify features that were not robust to changes in reconstruction parameters.

  15. Do Practical Standard Coupled Cluster Calculations Agree Better than Kohn–Sham Calculations with Currently Available Functionals When Compared to the Best Available Experimental Data for Dissociation Energies of Bonds to 3d Transition Metals?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Xuefei; Zhang, Wenjing; Tang, Mingsheng

    2015-05-12

    Coupled-cluster (CC) methods have been extensively used as the high-level approach in quantum electronic structure theory to predict various properties of molecules when experimental results are unavailable. It is often assumed that CC methods, if they include at least up to connected-triple-excitation quasiperturbative corrections to a full treatment of single and double excitations (in particular, CCSD(T)), and a very large basis set, are more accurate than Kohn–Sham (KS) density functional theory (DFT). In the present work, we tested and compared the performance of standard CC and KS methods on bond energy calculations of 20 3d transition metal-containing diatomic molecules against the most reliable experimental data available, as collected in a database called 3dMLBE20. It is found that, although the CCSD(T) and higher-level CC methods have mean unsigned deviations from experiment that are smaller than most exchange-correlation functionals for metal–ligand bond energies of transition metals, the improvement is less than one standard deviation of the mean unsigned deviation. Furthermore, on average, almost half of the 42 exchange-correlation functionals that we tested are closer to experiment than CCSD(T) with the same extended basis set for the same molecule. The results show that, when both relativistic and core–valence correlation effects are considered, even the very high-level (expensive) CC method with single, double, triple, and perturbative quadruple cluster operators, namely, CCSDT(2)Q, averaged over 20 bond energies, gives a mean unsigned deviation (MUD(20)) of 4.7 kcal/mol when one correlates only valence, 3p, and 3s electrons of the transition metals and only valence electrons of the ligands, or 4.6 kcal/mol when one correlates all core electrons except for the 1s shells of the transition metals, S, and Cl; this is similar to some good xc functionals (e.g., B97-1 (MUD(20) = 4.5 kcal/mol) and PW6B95 (MUD(20) = 4.9 kcal/mol)) when the same basis set is used. We found that, for both coupled cluster calculations and KS calculations, the T1 diagnostics correlate the errors better than either the M diagnostics or the B1 DFT-based diagnostics. The potential use of practical standard CC methods as a benchmark theory is further confounded by the finding that CC and DFT methods usually have different signs of the error. We conclude that the available experimental data do not provide a justification for using conventional single-reference CC theory calculations to validate or test xc functionals for systems involving 3d transition metals.

  16. Analysis of vertical distributions and effective flight layers of insects: three-dimensional simulation of flying insects and catch at trap heights

    USDA-ARS?s Scientific Manuscript database

    The mean height and standard deviation (SD) of flight are calculated for over 100 insect species from their catches at trap heights reported in the literature. The iterative equations for calculating the mean height and SD are presented. The mean flight height for 95% of the studies varied from 0.17 to 5...

  17. Evolving geometrical heterogeneities of fault trace data

    NASA Astrophysics Data System (ADS)

    Wechsler, Neta; Ben-Zion, Yehuda; Christofferson, Shari

    2010-08-01

    We perform a systematic comparative analysis of geometrical fault zone heterogeneities using derived measures from digitized fault maps that are not very sensitive to mapping resolution. We employ the digital GIS map of California faults (version 2.0) and analyse the surface traces of active strike-slip fault zones with evidence of Quaternary and historic movements. Each fault zone is broken into segments that are defined as a continuous length of fault bounded by changes of angle larger than 1°. Measurements of the orientations and lengths of fault zone segments are used to calculate the mean direction and misalignment of each fault zone from the local plate motion direction, and to define several quantities that represent the fault zone disorder. These include circular standard deviation and circular standard error of segments, orientation of long and short segments with respect to the mean direction, and normal separation distances of fault segments. We examine the correlations between various calculated parameters of fault zone disorder and the following three potential controlling variables: cumulative slip, slip rate and fault zone misalignment from the plate motion direction. The analysis indicates that the circular standard deviation and circular standard error of segments decrease overall with increasing cumulative slip and increasing slip rate of the fault zones. The results imply that the circular standard deviation and error, quantifying the range or dispersion in the data, provide effective measures of the fault zone disorder, and that the cumulative slip and slip rate (or more generally slip rate normalized by healing rate) represent the fault zone maturity. The fault zone misalignment from plate motion direction does not seem to play a major role in controlling the fault trace heterogeneities. The frequency-size statistics of fault segment lengths can be fitted well by an exponential function over the entire range of observations.
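
    The circular standard deviation used here comes from the mean resultant length R of the segment orientations: SD = sqrt(-2 ln R), in radians. Because fault strikes are axial (a trace at θ is the same as one at θ + 180°), angles are customarily doubled before averaging and results halved afterwards; the sketch below follows that standard circular-statistics recipe, which may differ in detail from the authors' own processing, and the strike values are invented.

```python
import numpy as np

def circular_stats(strikes_deg):
    """Mean direction, circular standard deviation, and a crude circular
    standard error for axial orientation data (angles doubled to handle
    the 180-degree ambiguity of fault traces)."""
    theta = 2.0 * np.deg2rad(np.asarray(strikes_deg, dtype=float))
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    R = np.hypot(C, S)                                  # mean resultant length
    mean_dir = (np.rad2deg(np.arctan2(S, C)) / 2.0) % 180.0
    circ_sd = np.rad2deg(np.sqrt(-2.0 * np.log(R))) / 2.0
    circ_se = circ_sd / np.sqrt(len(strikes_deg))       # crude standard error
    return mean_dir, circ_sd, circ_se

print(circular_stats([40, 42, 38, 45, 41, 39, 44, 43]))
```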

  18. [Effect strength variation in the single group pre-post study design: a critical review].

    PubMed

    Maier-Riehle, B; Zwingmann, C

    2000-08-01

    In Germany, studies in rehabilitation research--in particular evaluation studies and examinations of quality of outcome--have so far mostly used the uncontrolled one-group pre-post design. Outcome is usually assessed by comparing the pre- and post-treatment means of the outcome variables. The pre-post differences are tested for significance and, if significant, the results are increasingly presented in the form of effect sizes. For this reason, this contribution presents the different effect size indices used for the one-group pre-post design--in spite of the fundamental doubts that exist about that design due to its limited internal validity. The numerator of all effect size indices for the one-group pre-post design is the difference between the pre- and post-treatment means, whereas there are different possibilities and recommendations with regard to the denominator, i.e., the standard deviation that serves as the basis for standardizing the difference of the means. Used above all are standardization by the standard deviation of the pre-treatment scores, standardization by the pooled standard deviation of the pre- and post-treatment scores, and standardization by the standard deviation of the pre-post differences. Two examples demonstrate that the different modes of calculating effect size indices in the one-group pre-post design may lead to very different outcome patterns. Additionally, it is pointed out that effect sizes from the uncontrolled one-group pre-post design generally tend to be higher than effect sizes from studies conducted with control groups. Finally, the pros and cons of the different effect size indices are discussed and recommendations are given.
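
    The three standardizations named above are easy to compare side by side. A short sketch with made-up symptom scores (lower is better); the divergence between the indices, especially the change-score variant when pre and post are highly correlated, is exactly the outcome-pattern problem the review describes.

```python
import numpy as np

def prepost_effect_sizes(pre, post):
    """Pre-post mean difference standardized three ways: by the pre-test
    SD, by the pooled pre/post SD, and by the SD of the differences."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    diff_mean = post.mean() - pre.mean()
    sd_pre = pre.std(ddof=1)
    sd_pooled = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
    sd_change = (post - pre).std(ddof=1)
    return {"d_pre": diff_mean / sd_pre,
            "d_pooled": diff_mean / sd_pooled,
            "d_change": diff_mean / sd_change}

pre = np.array([52, 48, 55, 60, 47, 53, 58, 50.0])   # invented scores
post = np.array([45, 44, 47, 52, 43, 46, 50, 44.0])
print(prepost_effect_sizes(pre, post))   # same data, three different "effects"
```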

  19. Reductions in the variations of respiration signals for respiratory-gated radiotherapy when using the video-coaching respiration guiding system

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Jeong; Yea, Ji Woon; Oh, Se An

    2015-07-01

    Respiratory-gated radiation therapy (RGRT) has been used to minimize the dose to normal tissue in lung-cancer radiotherapy. The present research aims to improve the regularity of respiration in RGRT by using a video-coached respiration guiding system. In the study, 16 patients with lung cancer were evaluated. The respiration signals of the patients were measured by using a realtime position management (RPM) respiratory gating system (Varian, USA), and the patients were trained using the video-coaching respiration guiding system. The patients performed free breathing and guided breathing, and the respiratory cycles were acquired for ~5 min. Then, Microsoft Excel 2010 software was used to calculate the mean and the standard deviation for each phase. The standard deviation was computed in order to analyze the improvement in the respiratory regularity with respect to the period and the displacement. The standard deviation of the guided breathing decreased to 48.8% in the inhale peak and 24.2% in the exhale peak compared with the values for the free breathing of patient 6. The standard deviation of the respiratory cycle was found to be decreased when using the respiratory guiding system. The respiratory regularity was significantly improved when using the video-coaching respiration guiding system. Therefore, the system is useful for improving the accuracy and the efficiency of RGRT.

  20. Is standard deviation of daily PM2.5 concentration associated with respiratory mortality?

    PubMed

    Lin, Hualiang; Ma, Wenjun; Qiu, Hong; Vaughn, Michael G; Nelson, Erik J; Qian, Zhengmin; Tian, Linwei

    2016-09-01

    Studies on health effects of air pollution often use daily mean concentration to estimate exposure while ignoring daily variations. This study examined the health effects of daily variation of PM2.5. We calculated daily mean and standard deviations of PM2.5 in Hong Kong between 1998 and 2011. We used a generalized additive model to estimate the association between respiratory mortality and daily mean and variation of PM2.5, as well as their interaction. We controlled for potential confounders, including temporal trends, day of the week, meteorological factors, and gaseous air pollutants. Both daily mean and standard deviation of PM2.5 were significantly associated with mortalities from overall respiratory diseases and pneumonia. Each 10 μg/m(3) increment in daily mean concentration at lag 2 day was associated with a 0.61% (95% CI: 0.19%, 1.03%) increase in overall respiratory mortality and a 0.67% (95% CI: 0.14%, 1.21%) increase in pneumonia mortality. And a 10 μg/m(3) increase in standard deviation at lag 1 day corresponded to a 1.40% (95% CI: 0.35%, 2.46%) increase in overall respiratory mortality, and a 1.80% (95% CI: 0.46%, 3.16%) increase in pneumonia mortality. We also observed a positive but non-significant synergistic interaction between daily mean and variation on respiratory mortality and pneumonia mortality. However, we did not find any significant association with mortality from chronic obstructive pulmonary diseases. Our study suggests that, besides mean concentration, the standard deviation of PM2.5 might be one potential predictor of respiratory mortality in Hong Kong, and should be considered when assessing the respiratory effects of PM2.5. Copyright © 2016 Elsevier Ltd. All rights reserved.
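
    The exposure metrics in such time-series studies are simply the daily mean and the daily standard deviation of hourly concentrations, usually entered at one or more day lags. A sketch of that bookkeeping, with a synthetic hourly series standing in for the Hong Kong monitoring data:

```python
import numpy as np
import pandas as pd

# Hypothetical hourly PM2.5 series; a real analysis would read station data.
rng = np.random.default_rng(5)
idx = pd.date_range("2011-01-01", periods=24 * 365, freq="h")
pm25 = pd.Series(rng.gamma(shape=4.0, scale=10.0, size=len(idx)), index=idx)

daily = pm25.resample("D").agg(["mean", "std"])   # daily mean and SD of PM2.5
# Lagged exposures as used in the regressions (lag 1 SD, lag 2 mean)
daily["std_lag1"] = daily["std"].shift(1)
daily["mean_lag2"] = daily["mean"].shift(2)
print(daily.head())
```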

  1. [The heterogeneity of blood flow on magnetic resonance imaging: a biomarker for grading cerebral astrocytomas].

    PubMed

    Revert Ventura, A J; Sanz Requena, R; Martí-Bonmatí, L; Pallardó, Y; Jornet, J; Gaspar, C

    2014-01-01

    To study whether the histograms of quantitative parameters of perfusion in MRI obtained from tumor volume and peritumor volume make it possible to grade astrocytomas in vivo. We included 61 patients with histological diagnoses of grade II, III, or IV astrocytomas who underwent T2*-weighted perfusion MRI after intravenous contrast agent injection. We manually selected the tumor volume and peritumor volume and quantified the following perfusion parameters on a voxel-by-voxel basis: blood volume (BV), blood flow (BF), mean transit time (TTM), transfer constant (K(trans)), washout coefficient, interstitial volume, and vascular volume. For each volume, we obtained the corresponding histogram with its mean, standard deviation, and kurtosis (using the standard deviation and kurtosis as measures of heterogeneity) and we compared the differences in each parameter between different grades of tumor. We also calculated the mean and standard deviation of the highest 10% of values. Finally, we performed a multiparametric discriminant analysis to improve the classification. For tumor volume, we found statistically significant differences among the three grades of tumor for the means and standard deviations of BV, BF, and K(trans), both for the entire distribution and for the highest 10% of values. For the peritumor volume, we found no significant differences for any parameters. The discriminant analysis improved the classification slightly. The quantification of the volume parameters of the entire region of the tumor with BV, BF, and K(trans) is useful for grading astrocytomas. The heterogeneity represented by the standard deviation of BF is the most reliable diagnostic parameter for distinguishing between low grade and high grade lesions. Copyright © 2011 SERAM. Published by Elsevier Espana. All rights reserved.

  2. Assessing the stock market volatility for different sectors in Malaysia by using standard deviation and EWMA methods

    NASA Astrophysics Data System (ADS)

    Saad, Shakila; Ahmad, Noryati; Jaffar, Maheran Mohd

    2017-11-01

    Nowadays, the study of volatility, especially in stock markets, has gained considerable attention in the financial and economic sectors. Applications of the volatility concept in financial economics include the valuation of option pricing, the estimation of financial derivatives, the hedging of investment risk, and so on. There are various ways to measure volatility; this study uses two methods, the simple standard deviation and the Exponentially Weighted Moving Average (EWMA). The focus of this study is to measure the volatility of three different business sectors in Malaysia, called primary, secondary and tertiary, using both methods. The daily and annual volatilities of the different business sectors, based on stock prices for the period 1 January 2014 to December 2014, have been calculated in this study. Results show that different patterns of closing stock prices and returns give different volatility values when calculated with the simple method and the EWMA method.
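
    The two estimators compared in the study can be summarized in a few lines: the simple standard deviation of log returns, and the EWMA recursion var_t = λ·var_{t-1} + (1 − λ)·r²_{t-1}. The sketch below uses synthetic prices, λ = 0.94 (the common RiskMetrics daily default, assumed here rather than taken from the paper) and 252 trading days for annualization.

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """EWMA variance recursion: var_t = lam*var_{t-1} + (1-lam)*r_{t-1}^2.
    lam = 0.94 is the common RiskMetrics daily default (an assumption)."""
    var = np.empty(len(returns))
    var[0] = returns[:20].var()        # seed with an initial sample variance
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return np.sqrt(var)

rng = np.random.default_rng(11)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 250)))  # synthetic closes
returns = np.diff(np.log(prices))

simple_daily = returns.std(ddof=1)          # simple standard deviation
annualized = simple_daily * np.sqrt(252)    # assuming 252 trading days
ewma_daily = ewma_volatility(returns)[-1]
print(f"simple: {simple_daily:.4f}/day ({annualized:.3f}/yr), "
      f"EWMA: {ewma_daily:.4f}/day")
```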

  3. CAN'T MISS--conquer any number task by making important statistics simple. Part 1. Types of variables, mean, median, variance, and standard deviation.

    PubMed

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 1, presents basic information about data including a classification system that describes the four major types of variables: continuous quantitative variable, discrete quantitative variable, ordinal categorical variable (including the binomial variable), and nominal categorical variable. A histogram is a graph that displays the frequency distribution for a continuous variable. The article also demonstrates how to calculate the mean, median, standard deviation, and variance for a continuous variable.
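
    The Part 1 quantities for a continuous variable take only a few lines with Python's standard library. The length-of-stay values below are made up for illustration; note the distinction between the sample (n − 1) and population (n) divisors for the variance.

```python
import statistics

los_days = [3, 5, 4, 7, 2, 6, 5, 9, 4, 5]   # hypothetical lengths of stay

mean = statistics.fmean(los_days)
median = statistics.median(los_days)
sample_var = statistics.variance(los_days)   # divides by n - 1 (sample)
sample_sd = statistics.stdev(los_days)
pop_var = statistics.pvariance(los_days)     # divides by n (population)

print(mean, median, sample_var, sample_sd, pop_var)
```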

  4. A comparison of force fields and calculation methods for vibration intervals of isotopic H3(+) molecules

    NASA Astrophysics Data System (ADS)

    Carney, G. D.; Adler-Golden, S. M.; Lesseski, D. C.

    1986-04-01

    This paper reports (1) improved values for low-lying vibration intervals of H3(+), H2D(+), D2H(+), and D3(+) calculated using the variational method and Simons-Parr-Finlan (1973) representations of the Carney-Porter (1976) and Dykstra-Swope (1979) ab initio H3(+) potential energy surfaces, (2) quartic normal coordinate force fields for isotopic H3(+) molecules, (3) comparisons of variational and second-order perturbation theory, and (4) convergence properties of the Lai-Hagstrom internal coordinate vibrational Hamiltonian. Standard deviations between experimental and ab initio fundamental vibration intervals of H3(+), H2D(+), D2H(+), and D3(+) for these potential surfaces are 6.9/cm (Carney-Porter) and 1.2/cm (Dykstra-Swope). The standard deviations between perturbation theory and exact variational fundamentals are 5 and 10/cm for the respective surfaces. The internal coordinate Hamiltonian is found to be less efficient than the previously employed 't' coordinate Hamiltonian for these molecules, except in the case of H2D(+).

  5. Assessing factors that influence deviations between measured and calculated reference evapotranspiration

    NASA Astrophysics Data System (ADS)

    Rodny, Marek; Nolz, Reinhard

    2017-04-01

    Evapotranspiration (ET) is a fundamental component of the hydrological cycle, but challenging to be quantified. Lysimeter facilities, for example, can be installed and operated to determine ET, but they are costly and represent only point measurements. Therefore, lysimeter data are traditionally used to develop, calibrate, and validate models that allow calculating reference evapotranspiration (ET0) based on meteorological data, which can be measured more easily. The standardized form of the well-known FAO Penman-Monteith equation (ASCE-EWRI) is recommended as a standard procedure for estimating ET0 and subsequently plant water requirements. Applied and validated under different climatic conditions, the Penman-Monteith equation is generally known to deliver proper results. On the other hand, several studies documented deviations between measured and calculated ET0 depending on environmental conditions. Potential reasons are, for example, differing or varying surface characteristics of the lysimeter and the location where the weather instruments are placed. Advection of sensible heat (transport of dry and hot air from surrounding areas) might be another reason for deviating ET-values. However, elaborating causal processes is complex and requires comprehensive data of high quality and specific analysis techniques. In order to assess influencing factors, we correlated differences between measured and calculated ET0 with pre-selected meteorological parameters and related system parameters. Basic data were hourly ET0-values from a weighing lysimeter (ET0_lys) with a surface area of 2.85 m2 (reference crop: frequently irrigated grass), weather data (air and soil temperature, relative humidity, air pressure, wind velocity, and solar radiation), and soil water content in different depths. ET0_ref was calculated in hourly time steps according to the standardized procedure after ASCE-EWRI (2005). Deviations between both datasets were calculated as ET0_lys-ET0_ref and separated into positive and negative values. For further interpretation, we calculated daily sums of these values. The respective daily difference (positive or negative) served as independent variable (x) in linear correlation with a selected parameter as dependent variable (y). Quality of correlation was evaluated by means of coefficients of determination (R2). When ET0_lys > ET0_ref, the differences were only weakly correlated with the selected parameters. Hence, the evaluation of the causal processes leading to underestimation of measured hourly ET0 seems to require a more rigorous approach. On the other hand, when ET0_lys < ET0_ref, the differences correlated considerably with the meteorological parameters and related system parameters. Interpreting the particular correlations in detail indicated different (or varying) surface characteristics between the irrigated lysimeter and the nearby (non-irrigated) meteorological station.

  6. [Correlation of pure tone thresholds and hearing loss for numbers. Comparison of three calculation variations for plausibility checking in expertise].

    PubMed

    Braun, T; Dochtermann, S; Krause, E; Schmidt, M; Schorn, K; Hempel, J M

    2011-09-01

    The present study analyzes the best combination of frequencies for the calculation of mean hearing loss in pure tone threshold audiometry for correlation with hearing loss for numbers in speech audiometry, since the literature describes different calculation variations for plausibility checking in expertise. Three calculation variations, A (250, 500 and 1000 Hz), B (500 and 1000 Hz) and C (500, 1000 and 2000 Hz), were compared. Audiograms of 80 patients with normal hearing, 106 patients with hearing loss and 135 expertise patients were analyzed retrospectively. Differences between mean pure tone audiometry thresholds and hearing loss for numbers were calculated and statistically compared separately for the right and the left ear in the three patient collectives. We found calculation variation A to be the best combination of frequencies, since it yielded the smallest standard deviations while being statistically different from calculation variations B and C. The 1- and 2.58-fold standard deviations (representing 68.3% and 99.0% of all values) were ±4.6 and ±11.8 dB for calculation variation A in patients with hearing loss, respectively. For plausibility checking in expertise, the mean threshold of the frequencies 250, 500 and 1000 Hz should be compared to the hearing loss for numbers. As this study shows, the common recommendation in the literature to doubt plausibility when the difference between these values exceeds ±5 dB is too strict.
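
    As a worked illustration of variation A, the check is: average the 250, 500 and 1000 Hz pure-tone thresholds and compare the result with the hearing loss for numbers against a tolerance. The function below is a sketch of that logic; the default 11.8 dB tolerance is taken from the 2.58-fold standard deviation reported above (the traditional ±5 dB criterion would be stricter), and the audiogram values are invented.

```python
def plausibility_check(thresholds_db, hl_numbers_db, tolerance_db=11.8):
    """Compare the 250/500/1000 Hz pure-tone mean (variation A) with the
    hearing loss for numbers. thresholds_db maps frequency in Hz to the
    threshold in dB HL; tolerance_db is the allowed absolute difference."""
    ptm = sum(thresholds_db[f] for f in (250, 500, 1000)) / 3.0
    diff = ptm - hl_numbers_db
    return ptm, diff, abs(diff) <= tolerance_db

audiogram = {250: 35, 500: 40, 1000: 45}      # illustrative values
print(plausibility_check(audiogram, hl_numbers_db=38.0))
```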

  7. The SEASAT altimeter wet tropospheric range correction revisited

    NASA Technical Reports Server (NTRS)

    Tapley, D. B.; Lundberg, J. B.; Born, G. H.

    1984-01-01

    An expanded set of radiosonde observations was used to calculate the wet tropospheric range correction for the brightness temperature measurements of the SEASAT scanning multichannel microwave radiometer (SMMR). The accuracy of the conventional algorithm for the wet tropospheric range correction was evaluated. On the basis of the expanded observational data set, the algorithm was found to have a bias of about 1.0 cm and a standard deviation of 2.8 cm. In order to improve the algorithm, exact linear, quadratic and logarithmic relationships between brightness temperatures and range corrections were determined. Various combinations of measurement parameters were used to reduce the standard deviation between SEASAT SMMR and radiosonde observations to about 2.1 cm. The performance of the various range correction formulas is compared in a table.

  8. A natural-color mapping for single-band night-time image based on FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yilun; Qian, Yunsheng

    2018-01-01

    A natural-color mapping method for single-band night-time images based on FPGA can transfer the color of a reference image to a single-band night-time image, which is consistent with human visual habits and can help observers identify targets. This paper introduces the processing of the natural-color mapping algorithm based on FPGA. Firstly, the image is transformed based on histogram equalization, and the intensity features and standard deviation features of the reference image are stored in SRAM. Then, the intensity features and standard deviation features of the real-time digital images are calculated by the FPGA. At last, the FPGA completes the color mapping by matching pixels between images using the features in the luminance channel.
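
    At its core, such a mapping matches first- and second-moment statistics of the luminance channel: out = (src − μ_src)·(σ_ref/σ_src) + μ_ref, the classic Reinhard-style statistics transfer. Setting the FPGA pipeline aside, here is a numpy sketch of that step; array shapes and reference statistics are illustrative, not taken from the paper.

```python
import numpy as np

def match_statistics(src, ref_mean, ref_std):
    """Map source luminance so its mean/std match the reference image:
    out = (src - mu_src) * (sigma_ref / sigma_src) + mu_ref."""
    mu, sigma = src.mean(), src.std()
    out = (src - mu) * (ref_std / sigma) + ref_mean
    return np.clip(out, 0, 255)

rng = np.random.default_rng(2)
night = rng.normal(40, 12, size=(480, 640))   # dark, low-contrast single band
mapped = match_statistics(night, ref_mean=110.0, ref_std=45.0)
print(mapped.mean(), mapped.std())   # approx 110 and 45 (clipping aside)
```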

  9. Method for estimating effects of unknown correlations in spectral irradiance data on uncertainties of spectrally integrated colorimetric quantities

    NASA Astrophysics Data System (ADS)

    Kärhä, Petri; Vaskuri, Anna; Mäntynen, Henrik; Mikkonen, Nikke; Ikonen, Erkki

    2017-08-01

    Spectral irradiance data are often used to calculate colorimetric properties, such as color coordinates and color temperatures of light sources by integration. The spectral data may contain unknown correlations that should be accounted for in the uncertainty estimation. We propose a new method for estimating uncertainties in such cases. The method goes through all possible scenarios of deviations using Monte Carlo analysis. Varying spectral error functions are produced by combining spectral base functions, and the distorted spectra are used to calculate the colorimetric quantities. Standard deviations of the colorimetric quantities at different scenarios give uncertainties assuming no correlations, uncertainties assuming full correlation, and uncertainties for an unfavorable case of unknown correlations, which turn out to be a significant source of uncertainty. With 1% standard uncertainty in spectral irradiance, the expanded uncertainty of the correlated color temperature of a source corresponding to the CIE Standard Illuminant A may reach as high as 37.2 K in unfavorable conditions, when calculations assuming full correlation give zero uncertainty, and calculations assuming no correlations yield the expanded uncertainties of 5.6 K and 12.1 K, with wavelength steps of 1 nm and 5 nm used in spectral integrations, respectively. We also show that there is an absolute limit of 60.2 K in the error of the correlated color temperature for Standard Illuminant A when assuming 1% standard uncertainty in the spectral irradiance. A comparison of our uncorrelated uncertainties with those obtained using analytical methods by other research groups shows good agreement. We re-estimated the uncertainties for the colorimetric properties of our 1 kW photometric standard lamps using the new method. The revised uncertainty of color temperature is a factor of 2.5 higher than the uncertainty assuming no correlations.
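
    The scheme can be sketched in a few lines: perturb the spectrum with error functions drawn between the two correlation extremes (one common offset for full correlation, independent draws per wavelength for no correlation), propagate each perturbed spectrum to the integrated quantity, and take the standard deviation of the results. The example below uses a generic weighted spectral integral rather than the full CCT computation, and all spectra and weights are synthetic; note that for CCT a fully correlated scale factor cancels, which is why the full-correlation case above yields zero uncertainty, whereas it does not cancel for a plain integral.

```python
import numpy as np

rng = np.random.default_rng(4)
wl = np.arange(380.0, 781.0, 5.0)                       # wavelength grid, nm
spectrum = np.exp(-0.5 * ((wl - 560.0) / 80.0) ** 2)    # synthetic irradiance
weight = np.exp(-0.5 * ((wl - 555.0) / 50.0) ** 2)      # stand-in weighting

def integrated(s):
    return float(np.sum(s * weight) * 5.0)   # rectangle-rule spectral integral

u_rel, trials = 0.01, 5000        # 1% standard uncertainty per wavelength
out = {"uncorrelated": [], "fully correlated": []}
for _ in range(trials):
    e_unc = rng.normal(0.0, u_rel, wl.size)           # independent per point
    e_cor = np.full(wl.size, rng.normal(0.0, u_rel))  # one common offset
    out["uncorrelated"].append(integrated(spectrum * (1.0 + e_unc)))
    out["fully correlated"].append(integrated(spectrum * (1.0 + e_cor)))

for name, vals in out.items():
    print(name, "relative sd:", np.std(vals) / np.mean(vals))
```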

  10. Introductory Linear Regression Programs in Undergraduate Chemistry.

    ERIC Educational Resources Information Center

    Gale, Robert J.

    1982-01-01

    Presented are simple programs in BASIC and FORTRAN to apply the method of least squares. They calculate gradients and intercepts and express errors as standard deviations. An introduction of undergraduate students to such programs in a chemistry class is reviewed, and issues instructors should be aware of are noted. (MP)
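
    The least-squares recipe such introductory programs implement is compact: closed-form gradient and intercept, plus standard deviations derived from the residual variance with n − 2 degrees of freedom. A Python rendering of those textbook formulas (the data are invented):

```python
import math

def linear_least_squares(x, y):
    """Gradient m, intercept b, and their standard deviations for
    y = m*x + b, using the standard unweighted least-squares formulas."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    d = n * sxx - sx * sx
    m = (n * sxy - sx * sy) / d
    b = (sxx * sy - sx * sxy) / d
    # Residual variance with n - 2 degrees of freedom
    s2 = sum((yi - (m * xi + b)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    sd_m = math.sqrt(n * s2 / d)
    sd_b = math.sqrt(s2 * sxx / d)
    return m, b, sd_m, sd_b

x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]
print(linear_least_squares(x, y))
```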

  11. Relation between Birth Weight and Intraoperative Hemorrhage during Cesarean Section in Pregnancy with Placenta Previa

    PubMed Central

    Ishibashi, Hiroki; Takano, Masashi; Sasa, Hidenori; Furuya, Kenichi

    2016-01-01

    Background Placenta previa, one of the most severe obstetric complications, carries an increased risk of intraoperative massive hemorrhage. Several risk factors for intraoperative hemorrhage have been identified to date. However, the correlation between birth weight and intraoperative hemorrhage has not been investigated. Here we estimate the correlation between birth weight and the occurrence of intraoperative massive hemorrhage in placenta previa. Materials and Methods We included all 256 singleton pregnancies delivered via cesarean section at our hospital because of placenta previa between 2003 and 2015. We calculated not only measured birth weights but also standard deviation values according to the Japanese standard growth curve to adjust for differences in gestational age. We assessed the correlation between birth weight and the occurrence of intraoperative massive hemorrhage (>1500 mL blood loss). Receiver operating characteristic curves were constructed to determine the cutoff value for intraoperative massive hemorrhage. Results Of 256 pregnant women with placenta previa, 96 (38%) developed intraoperative massive hemorrhage. Receiver operating characteristic curves revealed that the area under the curve for the combination of the standard deviation of birth weight and intraoperative massive hemorrhage was 0.71. The cutoff value, with a sensitivity of 81.3% and specificity of 55.6%, was −0.33 standard deviation. The multivariate analysis revealed that a standard deviation of >−0.33 (odds ratio, 5.88; 95% confidence interval, 3.04–12.00), need for hemostatic procedures (odds ratio, 3.31; 95% confidence interval, 1.79–6.25), and placental adhesion (odds ratio, 12.68; 95% confidence interval, 2.85–92.13) were independent risk factors for intraoperative massive hemorrhage. Conclusion In patients with placenta previa, a birth weight >−0.33 standard deviation was a significant risk indicator of massive hemorrhage during cesarean section. Based on this result, further studies are required to investigate whether fetal weight estimated by ultrasonography can predict hemorrhage during cesarean section in patients with placenta previa. PMID:27902772

  12. Relation between Birth Weight and Intraoperative Hemorrhage during Cesarean Section in Pregnancy with Placenta Previa.

    PubMed

    Soyama, Hiroaki; Miyamoto, Morikazu; Ishibashi, Hiroki; Takano, Masashi; Sasa, Hidenori; Furuya, Kenichi

    2016-01-01

    Placenta previa, one of the most severe obstetric complications, carries an increased risk of intraoperative massive hemorrhage. Several risk factors for intraoperative hemorrhage have been identified to date. However, the correlation between birth weight and intraoperative hemorrhage has not been investigated. Here we estimate the correlation between birth weight and the occurrence of intraoperative massive hemorrhage in placenta previa. We included all 256 singleton pregnancies delivered via cesarean section at our hospital because of placenta previa between 2003 and 2015. We calculated not only measured birth weights but also standard deviation values according to the Japanese standard growth curve to adjust for differences in gestational age. We assessed the correlation between birth weight and the occurrence of intraoperative massive hemorrhage (>1500 mL blood loss). Receiver operating characteristic curves were constructed to determine the cutoff value for intraoperative massive hemorrhage. Of 256 pregnant women with placenta previa, 96 (38%) developed intraoperative massive hemorrhage. Receiver operating characteristic curves revealed that the area under the curve for the combination of the standard deviation of birth weight and intraoperative massive hemorrhage was 0.71. The cutoff value, with a sensitivity of 81.3% and specificity of 55.6%, was -0.33 standard deviation. The multivariate analysis revealed that a standard deviation of >-0.33 (odds ratio, 5.88; 95% confidence interval, 3.04-12.00), need for hemostatic procedures (odds ratio, 3.31; 95% confidence interval, 1.79-6.25), and placental adhesion (odds ratio, 12.68; 95% confidence interval, 2.85-92.13) were independent risk factors for intraoperative massive hemorrhage. In patients with placenta previa, a birth weight >-0.33 standard deviation was a significant risk indicator of massive hemorrhage during cesarean section. Based on this result, further studies are required to investigate whether fetal weight estimated by ultrasonography can predict hemorrhage during cesarean section in patients with placenta previa.

  13. The Uncertain Geographic Context Problem in the Analysis of the Relationships between Obesity and the Built Environment in Guangzhou

    PubMed Central

    Zhao, Pengxiang; Zhou, Suhong

    2018-01-01

    Traditionally, static units of analysis such as administrative units are used when studying obesity. However, using these fixed contextual units ignores environmental influences experienced by individuals in areas beyond their residential neighborhood and may render the results unreliable. This problem has been articulated as the uncertain geographic context problem (UGCoP). This study investigates the UGCoP through exploring the relationships between the built environment and obesity based on individuals’ activity space. First, a survey was conducted to collect individuals’ daily activity and weight information in Guangzhou in January 2016. Then, the data were used to calculate and compare the values of several built environment variables based on seven activity space delineations, including home buffers, workplace buffers (WPB), fitness place buffers (FPB), the standard deviational ellipse at two standard deviations (SDE2), the weighted standard deviational ellipse at two standard deviations (WSDE2), the minimum convex polygon (MCP), and road network buffers (RNB). Lastly, we conducted comparative analysis and regression analysis based on different activity space measures. The results indicate that significant differences exist between variables obtained with different activity space delineations. Further, regression analyses show that the activity space delineations used in the analysis have a significant influence on the results concerning the relationships between the built environment and obesity. The study sheds light on the UGCoP in analyzing the relationships between obesity and the built environment. PMID:29439392

  14. Role of a Standardized Prism Under Cover Test in the Assessment of Dissociated Vertical Deviation.

    PubMed

    Klaehn, Lindsay D; Hatt, Sarah R; Leske, David A; Holmes, Jonathan M

    2018-03-01

    Dissociated vertical deviation (DVD) is commonly measured using a prism and alternate cover test (PACT), but some providers use a prism under cover test (PUCT). The aim of this study was to compare a standardized PUCT measurement with a PACT measurement for assessing the magnitude of DVD. Thirty-six patients with a clinical diagnosis of DVD underwent measurement of the angle of deviation with the PACT, fixing with the habitually fixing eye, and with the PUCT, fixing with the right and the left eye in turn. The PUCT was standardized, using a 10-second cover for each prism magnitude, until the deviation was neutralized. The magnitude of hyperdeviation by PACT and PUCT was compared for the non-fixing eye using paired non-parametric tests. The frequency of discrepancies of more than 4 prism diopters (PD) between PACT and PUCT was calculated. The magnitude of hyperdeviation was greater when measured with the PUCT (range 8PD hypodeviation to 20PD hyperdeviation) versus the PACT (18PD hypodeviation to 25PD hyperdeviation), with a median difference of 4.5PD (range -5PD to 21PD); P < 0.0001. Eighteen (50%) of 36 measurements elicited >4PD greater hyperdeviation (or >4PD less hypodeviation) by PUCT than by PACT. A standardized 10-second PUCT yields greater values than a prism and alternate cover test in the majority of patients with DVD, providing better quantification of the severity of DVD, which may be important for management decisions.

  15. Impact of combustion products from Space Shuttle launches on ambient air quality

    NASA Technical Reports Server (NTRS)

    Dumbauld, R. K.; Bowers, J. F.; Cramer, H. E.

    1974-01-01

    The present work describes multilayer diffusion models and a computer program developed to predict the impact of ground clouds formed during Space Shuttle launches on ambient air quality. The diffusion models are based on the Gaussian plume equation for an instantaneous volume source. Cloud growth is estimated on the basis of measurable meteorological parameters: the standard deviation of the wind azimuth angle, the standard deviation of the wind elevation angle, vertical wind-speed shear, vertical wind-direction shear, and the depth of the surface mixing layer. Calculations using these models indicate that Space Shuttle launches under a variety of meteorological regimes at Kennedy Space Center and Vandenberg AFB are unlikely to exceed the exposure standards for HCl; similar results have been obtained for CO and Al2O3. However, the possibility that precipitation scavenging of the ground cloud might result in an acidic rain that could damage vegetation has not been investigated.
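
    The kernel underlying such models is the instantaneous Gaussian puff. A minimal sketch in cloud-centered coordinates, without the ground-reflection and multilayer terms the NASA models add on top:

```python
import numpy as np

def puff_concentration(q, dx, dy, dz, sx, sy, sz):
    """Concentration from an instantaneous Gaussian volume source.

    q: released mass; dx, dy, dz: receptor offsets from the cloud
    center; sx, sy, sz: dispersion coefficients (standard deviations of
    the concentration distribution along each axis), which in the models
    described grow with the measured wind-angle standard deviations and
    shears.
    """
    norm = (2.0 * np.pi) ** 1.5 * sx * sy * sz
    expo = -0.5 * ((dx / sx) ** 2 + (dy / sy) ** 2 + (dz / sz) ** 2)
    return q / norm * np.exp(expo)
```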

  16. RE-PERG, a new procedure for electrophysiologic diagnosis of glaucoma that may improve PERG specificity.

    PubMed

    Mavilio, Alberto; Sisto, Dario; Ferreri, Paolo; Cardascia, Nicola; Alessio, Giovanni

    2017-01-01

    A significant variability of the second harmonic (2ndH) phase of steady-state pattern electroretinogram (SS-PERG) in intrasession retest has been recently described in glaucoma patients (GP), which has not been found in healthy subjects. To evaluate the reliability of phase variability in retest (a procedure called RE-PERG or REPERG) in the presence of cataract, which is known to affect standard PERG, we tested this procedure in GP, normal controls (NC), and cataract patients (CP). The procedure was performed on 50 GP, 35 NC, and 27 CP. All subjects were examined with RE-PERG and SS-PERG and also with spectral domain optical coherence tomography and standard automated perimetry. Standard deviation of phase and amplitude value of 2ndH were correlated by means of one-way analysis of variance and Pearson correlation, with the mean deviation and pattern standard deviation assessed by standard automated perimetry and retinal nerve fiber layer and the ganglion cell complex thickness assessed by spectral domain optical coherence tomography. Receiver operating characteristics were calculated in cohort populations with and without cataract. Standard deviation of phase of 2ndH was significantly higher in GP with respect to NC ( P <0.001) and CP ( P <0.001), and it correlated with retinal nerve fiber layer ( r =-0.5, P <0.001) and ganglion cell complex ( r =-0.6, P <0.001) defects in GP. Receiver operating characteristic evaluation showed higher specificity of RE-PERG (86.4%; area under the curve 0.93) with respect to SS-PERG (54.5%; area under the curve 0.68) in CP. RE-PERG may improve the specificity of SS-PERG in clinical practice in the discrimination of GP.

  17. Resistance Training Increases the Variability of Strength Test Scores

    DTIC Science & Technology

    2009-06-08

    standard deviations for pretest and posttest strength measurements. This information was recorded for every strength test used in a total of 377 samples...significant if the posttest standard deviation consistently was larger than the pretest standard deviation. This condition could be satisfied even if...the difference in the standard deviations was small. For example, the posttest standard deviation might be 1% larger than the pretest standard

  18. Shapes of strong shock fronts in an inhomogeneous solar wind

    NASA Technical Reports Server (NTRS)

    Heinemann, M. A.; Siscoe, G. L.

    1974-01-01

    The shapes expected for solar-flare-produced strong shock fronts in the solar wind have been calculated, large-scale variations in the ambient medium being taken into account. It has been shown that for reasonable ambient solar wind conditions the mean and the standard deviation of the east-west shock normal angle are in agreement with experimental observations including shocks of all strengths. The results further suggest that near a high-speed stream it is difficult to distinguish between corotating shocks and flare-associated shocks on the basis of the shock normal alone. Although the calculated shapes are outside the range of validity of the linear approximation, these results indicate that the variations in the ambient solar wind may account for large deviations of shock normals from the radial direction.

  19. Standard deviation and standard error of the mean.

    PubMed

    Lee, Dong Kyu; In, Junyong; Lee, Sangseok

    2015-06-01

    In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinctive usage between the SD and SEM in medical literature. Because the process of calculating the SD and SEM includes different statistical inferences, each of them has its own meaning. SD is the dispersion of data in a normal distribution. In other words, SD indicates how accurately the mean represents sample data. However, the meaning of SEM includes statistical inference based on the sampling distribution. SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either SD or SEM can be applied to describe data and statistical results, one should be aware of reasonable methods with which to use SD and SEM. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results.
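
    The distinction reduces to one line of arithmetic; a minimal numeric illustration (made-up sample values):

```python
import numpy as np

x = np.array([4.1, 5.0, 5.3, 4.7, 5.9, 4.4])  # illustrative sample data

sd = x.std(ddof=1)            # sample SD: spread of the data themselves
sem = sd / np.sqrt(len(x))    # SEM: SD of the sampling distribution of the mean
print(f"SD = {sd:.3f}, SEM = {sem:.3f}")
```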

  1. SU-E-T-272: Direct Verification of a Treatment Planning System Megavoltage Linac Beam Photon Spectra Models, and Analysis of the Effects On Patient Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leheta, D; Shvydka, D; Parsai, E

    2015-06-15

    Purpose: For photon dose calculation the Philips Pinnacle Treatment Planning System (TPS) uses a collapsed cone convolution algorithm, which relies on the energy spectrum of the beam in computing the scatter component. The spectrum is modeled based on the Linac's standard commissioning data and typically is not independently verified. We explored a methodology of using transmission measurements in combination with regularization data processing to unfold Linac spectra. The measured spectra were compared to those modeled by the TPS, and the effect on patient plans was evaluated. Methods: Transmission measurements were conducted in narrow-beam geometry using a standard Farmer ionization chamber. Two attenuating materials and two build-up caps, having different atomic numbers, served to enhance discrimination between absorption of the low- and high-energy portions of the spectra, thus improving the accuracy of the results. The data were analyzed using a regularization technique implemented through spreadsheet-based calculations. Results: The unfolded spectra were found to deviate from the TPS beam models. The effect of such deviations on treatment planning was evaluated for patient plans through dose distribution calculations with either the TPS-modeled or the measured energy spectra. The differences were reviewed through comparison of isodose distributions, and quantified based on maximum dose values for critical structures. While in most cases no drastic differences in the calculated doses were observed, plans with deviations of 4 to 8% in the maximum dose values for critical structures were discovered. Anatomical sites with large scatter contributions are the most vulnerable to inaccuracies in the modeled spectrum. Conclusion: An independent check of the TPS model spectrum is highly desirable and should be included as part of commissioning of a new Linac. The effect is particularly important for dose calculations in high-heterogeneity regions. The developed approach makes acquisition of megavoltage Linac beam spectra achievable in a typical radiation oncology clinic.
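
    Spectrum unfolding from transmission data is an ill-posed inverse problem, which is why regularization is needed. A generic sketch of one standard approach (Tikhonov-regularized non-negative least squares), not the authors' spreadsheet implementation:

```python
import numpy as np
from scipy.optimize import nnls

def unfold_spectrum(transmission, thickness, mu, lam=1e-2):
    """Unfold a photon spectrum from narrow-beam transmission data.

    transmission: measured signal ratios for each attenuator thickness;
    thickness: attenuator thicknesses (cm); mu: attenuation coefficients
    (1/cm) at the candidate energy-bin energies. Solves
    T_i = sum_j w_j * exp(-mu_j * t_i) for non-negative bin weights w,
    with Tikhonov regularization applied by augmenting the system.
    Detector-response weighting is omitted for brevity.
    """
    A = np.exp(-np.outer(thickness, mu))          # (n_meas, n_bins)
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([transmission, np.zeros(n)])
    w, _ = nnls(A_aug, b_aug)
    return w / w.sum()                            # normalized spectrum
```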

  2. SU-E-T-276: Dose Calculation Accuracy with a Standard Beam Model for Extended SSD Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kisling, K; Court, L; Kirsner, S

    2015-06-15

    Purpose: While most photon treatments are delivered near 100cm SSD or less, a subset of patients may benefit from treatment at SSDs greater than 100cm. A proposed rotating chair for upright treatments would enable isocentric treatments at extended SSDs. The purpose of this study was to assess the accuracy of the Pinnacle³ treatment planning system dose calculation for standard beam geometries delivered at extended SSDs with a beam model commissioned at 100cm SSD. Methods: Dose to a water phantom at 100, 110, and 120cm SSD was calculated with the Pinnacle³ CC convolve algorithm for 6x beams for 5×5, 10×10, 20×20, and 30×30 cm² field sizes (defined at the water surface for each SSD). PDDs and profiles (depths of 1.5, 12.5, and 22cm) were compared to measurements in water with an ionization chamber. Point-by-point agreement was analyzed, as well as agreement in field size defined by the 50% isodose. Results: The deviations of the calculated PDDs from measurement, analyzed from the depth of maximum dose to 23cm, were all within 1.3% for all beam geometries. In particular, the calculated PDDs at 10cm depth were all within 0.7% of measurement. For profiles, the deviations within the central 80% of the field were within 2.2% for all geometries. The field sizes all agreed within 2mm. Conclusion: The agreement of the PDDs and profiles calculated by Pinnacle³ for extended SSD geometries was within the acceptability criteria defined by Van Dyk (±2% for PDDs and ±3% for profiles). The accuracy of the calculation of more complex beam geometries at extended SSDs will be investigated to further assess the feasibility of using a standard beam model commissioned at 100cm SSD in Pinnacle³ for extended SSD treatments.

  3. Evidence for higher order QED effects in e+ e- pair production at the BNL Relativistic Heavy Ion Collider.

    PubMed

    Baltz, A J

    2008-02-15

    A new lowest order QED calculation for BNL Relativistic Heavy-Ion Collider e+ e- pair production has been carried out with a phenomenological treatment of the Coulomb dissociation of the heavy-ion nuclei observed in the STAR ZDC triggers. The lowest order QED result for the experimental acceptance is nearly 2 standard deviations larger than the STAR data. A corresponding higher-order QED calculation is consistent with the data.

  4. Results of modern processing of the photographic observations of Uranus and Neptune from Archives of UkrVO

    NASA Astrophysics Data System (ADS)

    Protsyuk, Yu. I.; Kovylianska, O. E.; Protsyuk, S. V.; Yizhakevych, O. M.; Andruk, V. M.; Golovnia, V. V.; Yuldoshev, Q. K.

    2017-02-01

    The bulk of the planet observations was obtained at RI MAO and MAO NASU from 1961 to 1994. Plates from AI UAS were also used. Each NAO plate was scanned six times; plates from the other observatories were scanned only once. All images were processed, most of them were identified, and the equatorial coordinates of all objects were obtained. The positional accuracy of the reference stars is 0.04"-0.30". The standard deviation of the planets' positions is in the range 0.10-0.12 pixels, which corresponds, depending on the plate scale, to 0.08"-0.26". The new topocentric positions of the planets were compared with the JPL/HORIZONS ephemeris, and the (O-C) values and their standard deviations were calculated.

  5. Solar Activity, Ultraviolet Radiation and Consequences in Birds in Mexico City, 2001- 2002

    NASA Astrophysics Data System (ADS)

    Valdes, M.; Velasco, V.

    2008-12-01

    Anomalous behavior in commercial and pet birds in Mexico City was reported during 2002 by veterinarians at the Universidad Nacional Autonoma de Mexico. This was attributed to variations in the surrounding luminosity. The solar components (direct, diffuse, global, and ultraviolet bands A and B), as well as some meteorological parameters (temperature, relative humidity, and precipitation), were then analyzed at the Solar Radiation Laboratory. Although the total annual irradiance of the previously mentioned radiation components did not show important changes, ultraviolet Band-B solar radiation did vary significantly. During 2001 the total annual UV-B irradiance decreased from 61.05 Hjcm² to 58.32 Hjcm², 1.6 standard deviations below the mean; in 2002 it increased above the mean total annual irradiance to 65.75 Hjcm², 2.04 standard deviations above it, a total swing of 3.73 standard deviations for 2001-2002. Since these differences did not show up clearly in the other solar radiation components, the daily extra-atmosphere irradiance was analyzed and used to calculate the total annual extra-atmosphere irradiance, which showed a descent for 2001. Our conclusions imply that ultraviolet Band-B solar radiation is representative of solar activity and has an important impact on commercial activity related to birds.

  6. 7 CFR 400.204 - Notification of deviation from standards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Subpart: Contract - Standards for Approval, § 400.204 Notification of deviation from standards. A Contractor shall advise the Corporation immediately if the Contractor deviates from the requirements of these standards...

  7. Teacher Candidates' Attitudes towards the Teaching Profession in Turkey

    ERIC Educational Resources Information Center

    Tok, Turkay Nuri

    2012-01-01

    This study examined the attitudes of teacher candidates in Turkey towards the teaching profession. Descriptive surveys were used and the research data was obtained from Pamukkale University Classroom Teaching students. During data analysis, the arithmetic means and standard deviations of the groups were calculated and a t-test and One-Way ANOVA…

  8. Long-term changes (1980-2003) in total ozone time series over Northern Hemisphere midlatitudes

    NASA Astrophysics Data System (ADS)

    Białek, Małgorzata

    2006-03-01

    Long-term changes in total ozone time series for the Arosa, Belsk, Boulder, and Sapporo stations are examined. For each station we analyze time series of the following statistical characteristics of the distribution of daily ozone data: the seasonal mean, standard deviation, maximum, and minimum of total daily ozone values for all seasons. An iterative statistical model is proposed to estimate trends and long-term changes in the statistical distribution of the daily total ozone data. The trends are calculated for the period 1980-2003. We observe a lessening of the negative trends in the seasonal means as compared to those calculated by WMO for 1980-2000. We discuss the possibility of a change in the distribution shape of the daily ozone data using the Kolmogorov-Smirnov test and by comparing trend values in the seasonal mean, standard deviation, maximum, and minimum time series for the selected stations and seasons. A distribution shift toward lower values without a change in the distribution shape is suggested, with the following exceptions: a spreading of the distribution toward lower values for Belsk during winter, and no decisive result for Sapporo and Boulder in summer.

  9. Investigations of internal noise levels for different target sizes, contrasts, and noise structures

    NASA Astrophysics Data System (ADS)

    Han, Minah; Choi, Shinkook; Baek, Jongduk

    2014-03-01

    To describe internal noise levels for different target sizes, contrasts, and noise structures, Gaussian targets with four different sizes (i.e., standard deviations of 2, 4, 6, and 8) and three different noise structures (i.e., white, low-pass, and high-pass) were generated. The generated noise images were scaled to have a standard deviation of 0.15. For each noise type, target contrasts were adjusted to have the same detectability based on NPW, and the detectability of CHO was calculated accordingly. For the human observer study, 3 trained observers performed 2AFC detection tasks, and the proportion of correct responses, Pc, was calculated for each task. By adding the proper internal noise level to the numerical observers (i.e., NPW and CHO), the detectability of the human observer was matched with that of the numerical observers. Even though target contrasts were adjusted to have the same detectability for the NPW observer, the detectability of the human observer decreased as the target size increased. The internal noise level varies for different target sizes, contrasts, and noise structures, demonstrating that different internal noise levels should be considered in numerical observers to predict the detection performance of human observers.

  10. Stability Analysis of Receiver ISB for BDS/GPS

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Hao, J. M.; Tian, Y. G.; Yu, H. L.; Zhou, Y. L.

    2017-07-01

    Stability analysis of receiver ISB (Inter-System Bias) is essential for understanding the features of ISB as well as for ISB modeling and prediction. In order to analyze the long-term stability of ISB, three weeks of data from MGEX (Multi-GNSS Experiment), one week each from 2014, 2015, and 2016, are processed with the precise satellite clock and orbit products provided by Wuhan University and GeoForschungsZentrum (GFZ). Using the ISB calculated by BDS (BeiDou Navigation Satellite System)/GPS (Global Positioning System) combined PPP (Precise Point Positioning), the daily and weekly stability of ISB is investigated. The experimental results show that the diurnal variation of ISB is stable, and the average daily standard deviation is about 0.5 ns. The weekly averages and standard deviations of ISB vary greatly between years. The weekly averages of ISB are related to receiver type. There is a systematic bias between ISB calculated from the precise products of Wuhan University and of GFZ. In addition, this bias in the weekly average ISB is consistent across stations.

  11. Quantifying gait deviations in individuals with rheumatoid arthritis using the Gait Deviation Index.

    PubMed

    Esbjörnsson, A-C; Rozumalski, A; Iversen, M D; Schwartz, M H; Wretenberg, P; Broström, E W

    2014-01-01

    In this study we evaluated the usability of the Gait Deviation Index (GDI), an index that summarizes the amount of deviation in movement from a standard norm, in adults with rheumatoid arthritis (RA). The aims of the study were to evaluate the ability of the GDI to identify gait deviations, assess inter-trial repeatability, and examine the relationship between the GDI and walking speed, physical disability, and pain. Sixty-three adults with RA and 59 adults with typical gait patterns were included in this retrospective case-control study. Following a three-dimensional gait analysis (3DGA), representative gait cycles were selected and GDI scores calculated. To evaluate the effect of walking speed, GDI scores were calculated using both a free-speed and a speed-matched reference set. Physical disability was assessed using the Health Assessment Questionnaire (HAQ), and subjects rated their pain during walking. Adults with RA had significantly increased gait deviations compared to healthy individuals, as shown by lower GDI scores [87.9 (SD = 8.7) vs. 99.4 (SD = 8.3), p < 0.001]. This difference was also seen when adjusting for walking speed [91.7 (SD = 9.0) vs. 99.9 (SD = 8.6), p < 0.001]. It was estimated that a change of ≥ 5 GDI units was required to account for natural variation in gait. There was no evident relationship between the GDI and low/high RA-related physical disability or pain. The GDI seems to be useful for identifying and summarizing gait deviations in individuals with RA. Thus, we consider that the GDI provides an overall measure of gait deviation that may reflect lower extremity pathology and may help clinicians to understand the impact of RA on gait dynamics.

  12. Effect of Temperature on the Physico-Chemical Properties of a Room Temperature Ionic Liquid (1-Methyl-3-pentylimidazolium Hexafluorophosphate) with Polyethylene Glycol Oligomer

    PubMed Central

    Wu, Tzi-Yi; Chen, Bor-Kuan; Hao, Lin; Peng, Yu-Chun; Sun, I-Wen

    2011-01-01

    A systematic study of the effect of composition on the thermo-physical properties of binary mixtures of 1-methyl-3-pentylimidazolium hexafluorophosphate [MPI][PF6] with poly(ethylene glycol) (PEG) [Mw = 400] is presented. The excess molar volume, refractive index deviation, viscosity deviation, and surface tension deviation values were calculated from the experimental density (ρ), refractive index (n), viscosity (η), and surface tension (γ) data over the whole concentration range. The excess molar volumes are negative and become increasingly negative with increasing temperature, whereas the viscosity and surface tension deviations are negative and become less negative with increasing temperature. The surface thermodynamic functions, such as surface entropy and enthalpy, as well as the standard molar entropy, Parachor, and molar enthalpy of vaporization of the pure ionic liquid, have been derived from the temperature dependence of the surface tension values. PMID:21731460
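
    The excess molar volume mentioned follows directly from the measured densities. A minimal sketch of the standard defining relation for a binary mixture:

```python
def excess_molar_volume(x1, M1, rho1, M2, rho2, rho_mix):
    """Excess molar volume V^E (cm^3/mol) of a binary mixture.

    x1: mole fraction of component 1; M1, M2: molar masses (g/mol);
    rho1, rho2: pure-component densities (g/cm^3); rho_mix: measured
    mixture density (g/cm^3). V^E = V_mix - x1*V1 - x2*V2.
    """
    x2 = 1.0 - x1
    return (x1 * M1 + x2 * M2) / rho_mix - x1 * M1 / rho1 - x2 * M2 / rho2
```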

  13. The Standard Deviation of Launch Vehicle Environments

    NASA Technical Reports Server (NTRS)

    Yunis, Isam

    2005-01-01

    Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3dB is a conservative and reasonable standard deviation for the source environment and the payload environment.

  14. High precision UTDR measurements by sonic velocity compensation with reference transducer.

    PubMed

    Stade, Sam; Kallioinen, Mari; Mänttäri, Mika; Tuuva, Tuure

    2014-07-02

    An ultrasonic sensor design with sonic velocity compensation was developed to improve the accuracy of distance measurement in membrane modules. High-accuracy real-time distance measurements are needed in membrane fouling and compaction studies. The benefits of sonic velocity compensation with a reference transducer are compared to using the sonic velocity calculated from the measured temperature and pressure with the model of Belogol'skii, Sekoyan et al. In the experiments the temperature was changed from 25 to 60 °C at pressures of 0.1, 0.3 and 0.5 MPa. The set measurement distance was 17.8 mm. Distance measurements with sonic velocity compensation were over ten times more accurate than those calculated based on the model. Using the sonic velocity measured with the reference transducer, the standard deviations of the distance measurements varied from 0.6 to 2.0 µm, while using the calculated sonic velocity the standard deviations were 21-39 µm. In industrial liquors, not only the temperature and the pressure, which were studied in this paper, but also the properties of the filtered solution, such as solute concentration, density, viscosity, etc., may vary greatly, leading to inaccuracy in the use of the Belogol'skii, Sekoyan et al. model. Therefore, calibration of the sonic velocity with reference transducers is needed for accurate distance measurements.
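
    The compensation idea is simple arithmetic: the reference transducer, at a known fixed distance in the same liquid, yields the in-situ sonic velocity, which then converts the measurement transducer's echo time into distance. A sketch assuming pulse-echo (round-trip) geometry:

```python
def compensated_distance(t_meas, t_ref, d_ref):
    """Distance from ultrasonic time of flight with velocity compensation.

    d_ref: known fixed distance of the reference transducer (m);
    t_ref: its round-trip echo time (s) in the same liquid, giving the
    current sonic velocity; t_meas: round-trip echo time (s) of the
    measurement transducer. The factor of 2 accounts for the round trip.
    """
    v = 2.0 * d_ref / t_ref       # in-situ sonic velocity (m/s)
    return v * t_meas / 2.0
```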

  15. Evaluating Silent Reading Performance with an Eye Tracking System in Patients with Glaucoma

    PubMed Central

    Murata, Noriaki; Fukuchi, Takeo

    2017-01-01

    Objective To investigate the relationship between silent reading performance and visual field defects in patients with glaucoma using an eye tracking system. Methods Fifty glaucoma patients (Group G; mean age, 52.2 years, standard deviation: 11.4 years) and 20 normal controls (Group N; mean age, 46.9 years; standard deviation: 17.2 years) were included in the study. All participants in Group G had early to advanced glaucomatous visual field defects but better than 20/20 visual acuity in both eyes. Participants silently read Japanese articles written horizontally while the eye tracking system monitored and calculated reading duration per 100 characters, number of fixations per 100 characters, and mean fixation duration, which were compared with mean deviation and visual field index values from Humphrey visual field testing (24–2 and 10–2 Swedish interactive threshold algorithm standard) of the right versus left eye and the better versus worse eye. Results There was a statistically significant difference between Groups G and N in mean fixation duration (G, 233.4 msec; N, 215.7 msec; P = 0.010). Within Group G, significant correlations were observed between reading duration and 24–2 right mean deviation (rs = -0.280, P = 0.049), 24–2 right visual field index (rs = -0.306, P = 0.030), 24–2 worse visual field index (rs = -0.304, P = 0.032), and 10–2 worse mean deviation (rs = -0.326, P = 0.025). Significant correlations were observed between mean fixation duration and 10–2 left mean deviation (rs = -0.294, P = 0.045) and 10–2 worse mean deviation (rs = -0.306, P = 0.037), respectively. Conclusions The severity of visual field defects may influence some aspects of reading performance. At least concerning silent reading, the visual field of the worse eye is an essential element of smoothness of reading. PMID:28095478

  16. Putative golden proportions as predictors of facial esthetics in adolescents.

    PubMed

    Kiekens, Rosemie M A; Kuijpers-Jagtman, Anne Marie; van 't Hof, Martin A; van 't Hof, Bep E; Maltha, Jaap C

    2008-10-01

    In orthodontics, facial esthetics is assumed to be related to golden proportions apparent in the ideal human face. The aim of the study was to analyze the putative relationship between facial esthetics and golden proportions in white adolescents. Seventy-six adult laypeople evaluated sets of photographs of 64 adolescents on a visual analog scale (VAS) from 0 to 100. The facial esthetic value of each subject was calculated as a mean VAS score. Three observers recorded the position of 13 facial landmarks included in 19 putative golden proportions, based on the golden proportions as defined by Ricketts. The proportions and each proportion's deviation from the golden target (1.618) were calculated. This deviation was then related to the VAS scores. Only 4 of the 19 proportions had a significant negative correlation with the VAS scores, indicating that beautiful faces showed less deviation from the golden standard than less beautiful faces. Together, these variables explained only 16% of the variance. Few golden proportions have a significant relationship with facial esthetics in adolescents. The explained variance of these variables is too small to be of clinical importance.

  17. 7 CFR 400.174 - Notification of deviation from financial standards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Subpart: Agreement - Standards for Approval; Regulations for the 1997 and Subsequent Reinsurance Years, § 400.174 Notification of deviation from financial standards. An insurer must immediately advise FCIC if it deviates from...

  18. Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks

    DTIC Science & Technology

    2016-04-01

    Allan deviation. Allan deviation will be represented by σ and standard deviation will be represented by δ. In practice, when the Allan deviation of a...the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by...measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard
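
    For context, the overlapping Allan deviation that these degrees-of-freedom results apply to can be computed from phase data as follows; this is the standard estimator, not anything specific to the report:

```python
import numpy as np

def overlapping_adev(phase, tau0, m):
    """Overlapping Allan deviation from phase (time-error) data.

    phase: (N,) time-error samples (s) at interval tau0 (s); m:
    averaging factor, so tau = m * tau0. Degrees-of-freedom corrections
    for confidence intervals are a separate step on top of this.
    """
    x = np.asarray(phase, dtype=float)
    N = len(x)
    d2 = x[2 * m:] - 2.0 * x[m:N - m] + x[:N - 2 * m]   # second differences
    avar = (d2 ** 2).sum() / (2.0 * (N - 2 * m) * (m * tau0) ** 2)
    return np.sqrt(avar)
```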

  19. QUANTITATIVE PCR ANALYSIS OF MOLDS IN THE DUST FROM HOMES OF ASTHMATIC CHILDREN IN NORTH CAROLINA

    EPA Science Inventory

    The vacuum bag (VB) dust was analyzed by mold-specific quantitative PCR. These results were compared to the ERMI values calculated for each of the homes. The mean and standard deviation (SD) of the ERMI values in the homes of the NC asthmatic children were 16.4 (6.77), compa...

  20. Calibration of helical tomotherapy machine using EPR/alanine dosimetry.

    PubMed

    Perichon, Nicolas; Garcia, Tristan; François, Pascal; Lourenço, Valérie; Lesven, Caroline; Bordy, Jean-Marc

    2011-03-01

    Current codes of practice for clinical reference dosimetry of high-energy photon beams in conventional radiotherapy recommend using a 10 x 10 cm2 square field, with the detector at a reference depth of 10 cm in water and 100 cm source-to-surface distance (SSD) (AAPM TG-51) or 100 cm source-to-axis distance (SAD) (IAEA TRS-398). However, the maximum field size of a helical tomotherapy (HT) machine is 40 x 5 cm2, defined at 85 cm SAD. These nonstandard conditions prevent a direct implementation of these protocols. The purpose of this study is twofold: to check the absorbed dose to water and dose rate calibration of a tomotherapy unit, as well as the accuracy of the tomotherapy treatment planning system (TPS) calculations for a specific test case. Both topics are based on the use of electron paramagnetic resonance (EPR) with alanine as a transfer dosimeter between the Laboratoire National Henri Becquerel (LNHB) 60Co gamma-ray reference beam and the Institut Curie's HT beam. Irradiations performed in the LNHB reference 60Co gamma-ray beam allowed setting up the calibration method, which was then implemented and tested at the LNHB 6 MV linac x-ray beam, resulting in a deviation of 1.6% (at a 1% standard uncertainty) relative to the reference value determined with the standard IAEA TRS-398 protocol. The HT beam dose rate estimation shows a difference of 2% from the value stated by the manufacturer, at a 2% standard uncertainty. A 4% deviation between the measured dose and the calculation from the tomotherapy TPS was found. The latter originated from an inadequate representation of the phantom CT-scan values and, consequently, of the mass densities within the phantom used by the TPS. Once corrected, using Monte Carlo N-Particle simulations to validate the accuracy of this process, the difference between corrected TPS calculations and alanine-measured dose values was found to be around 2% (with 2% standard uncertainty on TPS doses and 1.5% standard uncertainty on EPR measurements). Beam dose rate estimation results were found to be in good agreement with the reference value given by the manufacturer at 2% standard uncertainty. Moreover, the dose determination method was set up with a deviation of around 2% (at a 2% standard uncertainty).

  1. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderon, E; Siergiej, D

    2014-06-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes decrease. No set standard to resolve this difference in measurement exists. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source-to-surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the relation between the published Monte Carlo correction factors and field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small-field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the deviation between the two detectors from 14.8% to 3.4%.
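
    A minimal sketch of the correction step: interpolate a published correction-factor table at the cone size and multiply it into the measured output factor. The tabulated numbers below are placeholders for illustration, not the published values.

```python
import numpy as np

def corrected_output_factor(of_measured, cone_mm, pub_sizes_mm, pub_factors):
    """Apply Monte Carlo detector correction factors to a measured
    small-field output factor.

    pub_sizes_mm / pub_factors: tabulated field sizes and correction
    factors for the detector; linear interpolation stands in for the
    fitted correction-vs-field-size relation derived in the abstract.
    """
    k = np.interp(cone_mm, pub_sizes_mm, pub_factors)
    return of_measured * k

# Illustrative only -- placeholder values, not the published ones:
pub_sizes = [5.0, 7.5, 10.0, 12.5, 15.0, 20.0]
pub_k_edge = [0.94, 0.96, 0.97, 0.98, 0.99, 1.00]
print(corrected_output_factor(0.68, 7.5, pub_sizes, pub_k_edge))
```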

  2. Test-retest reliability of 3D ultrasound measurements of the thoracic spine.

    PubMed

    Fölsch, Christian; Schlögel, Stefanie; Lakemeier, Stefan; Wolf, Udo; Timmesfeld, Nina; Skwara, Adrian

    2012-05-01

    To explore the reliability of the Zebris CMS 20 ultrasound analysis system with pointer application for measuring end-range flexion, end-range extension, and neutral kyphosis angle of the thoracic spine. The study was performed within the School of Physiotherapy in cooperation with the Orthopedic Department at a University Hospital. The thoracic spines of 28 healthy subjects were measured. Measurements for neutral kyphosis angle, end-range flexion, and end-range extension were taken once at each time point. The bone landmarks were palpated by one examiner and marked with a pointer containing 2 transmitters using a frequency of 40 kHz. A third transmitter was fixed to the pelvis, and 3 microphones were used as receiver. The real angle was calculated by the software. Bland-Altman plots with 95% limits of agreement, intraclass correlations (ICC), standard deviations of mean measurements, and standard error of measurements were used for statistical analyses. The test-retest reliability in this study was measured within a 24-hour interval. Statistical parameters were used to judge reliability. The mean kyphosis angle was 44.8° with a standard deviation of 17.3° at the first measurement and a mean of 45.8° with a standard deviation of 16.2° the following day. The ICC was high at 0.95 for the neutral kyphosis angle, and the Bland-Altman 95% limits of agreement were within clinical acceptable margins. The ICC was 0.71 for end-range flexion and 0.34 for end-range extension, whereas the Bland-Altman 95% limits of agreement were wider than with the static measurement of kyphosis. Compared with static measurements, the analysis of motion with 3-dimensional ultrasound showed an increased standard deviation for test-retest measurements. The test-retest reliability of ultrasound measuring of the neutral kyphosis angle of the thoracic spine was demonstrated within 24 hours. Bland-Altman 95% limits of agreement and the standard deviation of differences did not appear to be clinically acceptable for measuring flexion and extension. Copyright © 2012 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
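
    The Bland-Altman 95% limits of agreement used throughout are a one-line computation on the paired day-1/day-2 differences; a minimal sketch:

```python
import numpy as np

def bland_altman_loa(day1, day2):
    """Bland-Altman 95% limits of agreement for test-retest data.

    day1/day2: paired measurements (e.g., kyphosis angles in degrees)
    from the two sessions. Returns the mean difference (bias) and the
    lower and upper limits, bias +/- 1.96 SD of the differences.
    """
    d = np.asarray(day1, float) - np.asarray(day2, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```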

  3. Impact of Image Noise on Gamma Index Calculation

    NASA Astrophysics Data System (ADS)

    Chen, M.; Mo, X.; Parnell, D.; Olivera, G.; Galmarini, D.; Lu, W.

    2014-03-01

    Purpose: The Gamma Index defines an asymmetric metric between an evaluated image and a reference image. It provides a quantitative comparison that can be used to indicate a sample-wise pass/fail on the agreement of the two images. The Gamma passing/failing rate has become an important clinical evaluation tool. However, the presence of noise in the evaluated and/or reference images may change the Gamma Index, hence the passing/failing rate, and, further, clinical decisions. In this work, we systematically studied the impact of image noise on the Gamma Index calculation. Methods: We used both analytic formulation and numerical calculations in our study. The numerical calculations included simulations and clinical images. Three different noise scenarios were studied in simulations: noise in reference images only, in evaluated images only, and in both. Both white and spatially correlated noise of various magnitudes was simulated. For clinical images of various noise levels, the Gamma Index of measurement against calculation, calculation against measurement, and measurement against measurement was evaluated. Results: Numerical calculations for both the simulation and clinical data agreed with the analytic formulations, and the clinical data agreed with the simulations. For the Gamma Index of measurement against calculation, the distribution has an increased mean and an increased standard deviation as the noise increases. On the contrary, for the Gamma Index of calculation against measurement, the distribution has a decreased mean and a stabilized standard deviation as the noise increases. White noise has a greater impact on the Gamma Index than spatially correlated noise. Conclusions: Noise has a significant impact on the Gamma Index calculation, and the impact is asymmetric. The Gamma Index should be reported along with the noise levels in both the reference and evaluated images. Reporting the Gamma Index with the roles of the reference and evaluated images switched, or some composite metric, would be good practice.
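
    The asymmetry comes directly from the definition: the minimum search runs over points of the evaluated image while each reference point is fixed. A brute-force 1D sketch (global gamma on equally spaced profiles):

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, dx_mm, dta_mm=3.0, dd_frac=0.03):
    """Global 1D gamma index on equally spaced dose profiles.

    dose_ref / dose_eval: reference and evaluated dose samples at
    spacing dx_mm; dta_mm: distance-to-agreement criterion; dd_frac:
    dose-difference criterion as a fraction of the reference maximum.
    Brute-force search over evaluated points -- fine for profiles,
    too slow for 3D volumes.
    """
    ref = np.asarray(dose_ref, float)
    ev = np.asarray(dose_eval, float)
    pos = np.arange(len(ev)) * dx_mm
    dd_abs = dd_frac * ref.max()
    gammas = np.empty(len(ref))
    for i, (r_pos, r_dose) in enumerate(zip(np.arange(len(ref)) * dx_mm, ref)):
        dist2 = ((pos - r_pos) / dta_mm) ** 2
        dose2 = ((ev - r_dose) / dd_abs) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return gammas          # gamma <= 1 means pass at that point
```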

  4. 1 CFR 21.14 - Deviations from standard organization of the Code of Federal Regulations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Codification, General Numbering, § 21.14 Deviations from standard organization of the Code of Federal Regulations. (a) Any deviation from standard Code of Federal Regulations designations must be approved in advance...

  5. Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars

    NASA Astrophysics Data System (ADS)

    Ruml, Mirjana; Vuković, Ana; Milatović, Dragan

    2010-07-01

    The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and to select the optimal thresholds for a number of apricot (Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series was used to conduct the study. Several commonly used methods of determining threshold temperatures from field observations were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) the regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods for calculating daily GDD and two methods for calculating daily mean air temperature were tested, to emphasize the differences that can arise from different interpretations of the basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the “Null” method (lower threshold set to 0°C) and the “Fixed Value” method (lower threshold set to -2°C for full bloom and to 3°C for harvest) also gave very good results. The limitations of the widely used method (1) and of methods (5) and (6), which generally performed worst, are discussed in the paper.
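
    A sketch of the basic GDD accumulation in its simple averaging form; variants (e.g., clipping temperatures at the threshold before averaging) are among the different interpretations the paper tests:

```python
import numpy as np

def growing_degree_days(tmax, tmin, base):
    """Accumulated growing degree-days from daily max/min temperatures.

    Uses the simple averaging form GDD_day = max(0, (Tmax + Tmin)/2 -
    base), summed over the season. tmax/tmin: arrays of daily maxima
    and minima (°C); base: lower threshold temperature (°C).
    """
    tmean = (np.asarray(tmax, float) + np.asarray(tmin, float)) / 2.0
    return np.maximum(0.0, tmean - base).sum()
```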

  6. A Bayesian Method for Identifying Contaminated Detectors in Low-Level Alpha Spectrometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maclellan, Jay A.; Strom, Daniel J.; Joyce, Kevin E.

    2011-11-02

    Analyses used for radiobioassay and other radiochemical tests are normally designed to meet specified quality objectives, such as relative bias, precision, and minimum detectable activity (MDA). In the case of radiobioassay analyses for alpha-emitting radionuclides, a major determiner of the process MDA is the instrument background. Alpha spectrometry detectors are often restricted to only a few counts over multi-day periods in order to meet required MDAs for nuclides such as plutonium-239 and americium-241. A detector background criterion is often set empirically based on experience, or frequentist or classical statistics are applied to the calculated background count necessary to meet a required MDA. An acceptance criterion for the detector background is set at the multiple of the estimated background standard deviation above the assumed mean that provides an acceptably small probability of observation if the mean and standard deviation estimates are correct. The major problem with this method is that the observed background counts used to estimate the mean, and thereby the standard deviation when a Poisson distribution is assumed, are often in the range of zero to three counts. At those expected count levels it is impossible to obtain a good estimate of the true mean from a single measurement. As an alternative, Bayesian statistical methods allow calculation of the expected detector background count distribution based on historical counts from new, uncontaminated detectors. This distribution can then be used to identify detectors showing an increased probability of contamination. The effect of varying the assumed range of background counts (i.e., the prior probability distribution) from new, uncontaminated detectors is discussed.

  7. Lead-lag relationships between stock and market risk within linear response theory

    NASA Astrophysics Data System (ADS)

    Borysov, Stanislav; Balatsky, Alexander

    2015-03-01

    We study historical correlations and lead-lag relationships between individual stock risks (standard deviation of daily stock returns) and market risk (standard deviation of daily returns of a market-representative portfolio) in the US stock market. We consider the cross-correlation functions averaged over stocks, using historical stock prices from the Standard & Poor's 500 index for 1994-2013. The observed historical dynamics suggests that the dependence between the risks was almost linear during the US stock market downturn of 2002 and after the US housing bubble in 2007, remaining at that level until 2013. Moreover, the averaged cross-correlation function often had an asymmetric shape with respect to zero lag in the periods of high correlation. We develop the analysis by the application of the linear response formalism to study underlying causal relations. The calculated response functions suggest the presence of characteristic regimes near financial crashes, when individual stock risks affect market risk and vice versa. This work was supported by VR 621-2012-2983.
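
    A sketch of the lead-lag cross-correlation described, computed on rolling-standard-deviation risk series; the sign convention and normalization details here are assumptions for illustration:

```python
import numpy as np

def lead_lag_xcorr(stock_risk, market_risk, max_lag):
    """Cross-correlation corr(s_t, m_{t+lag}) for lags -max_lag..max_lag.

    stock_risk / market_risk: equal-length series of rolling standard
    deviations of daily returns. Asymmetry of the result around lag 0
    is the lead-lag signal discussed in the abstract.
    """
    s = (stock_risk - stock_risk.mean()) / stock_risk.std()
    m = (market_risk - market_risk.mean()) / market_risk.std()
    n = len(s)
    corr = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            corr[lag] = float(np.mean(s[:n - lag] * m[lag:]))
        else:
            corr[lag] = float(np.mean(s[-lag:] * m[:n + lag]))
    return corr
```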

  8. Comparison of lysimeter based and calculated ASCE reference evapotranspiration in a subhumid climate

    NASA Astrophysics Data System (ADS)

    Nolz, Reinhard; Cepuder, Peter; Eitzinger, Josef

    2016-04-01

    The standardized form of the well-known FAO Penman-Monteith equation, published by the Environmental and Water Resources Institute of the American Society of Civil Engineers (ASCE-EWRI), is recommended as a standard procedure for calculating reference evapotranspiration (ET ref) and subsequently plant water requirements. Applied and validated under different climatic conditions, it has generally achieved good results compared to other methods. However, several studies have documented deviations between measured and calculated reference evapotranspiration depending on environmental and weather conditions. It therefore seems generally advisable to evaluate the model under local environmental conditions. In this study, reference evapotranspiration was determined at a subhumid site in northeastern Austria from 2005 to 2010 using a large weighing lysimeter (ET lys). The measured data were compared with ET ref calculations. Daily values differed slightly over the course of a year: ET ref was generally overestimated at small values and rather underestimated when ET was large, which is supported also by other studies. In our case, advection of sensible heat proved to have an impact, but it could not explain the differences exclusively. Obviously, there were also other influences, such as seasonally varying surface resistance or albedo. Generally, the ASCE-EWRI equation for daily time steps performed best under average weather conditions. The outcomes should help to correctly interpret ET ref data in the region and in similar environments and improve knowledge of the dynamics of the influencing factors causing deviations.

  9. Upgraded FAA Airfield Capacity Model. Volume 1. Supplemental User’s Guide

    DTIC Science & Technology

    1981-02-01

    SIGMAR (F4.0) cc 1-4 - standard deviation, in seconds, of arrival runway occupancy time (R.O.T.). SIGMAA (F4.0) cc 5-8 - standard deviation, in seconds...SIGMAC - the standard deviation of the time from departure clearance to start of roll. SIGMAR - the standard deviation of the arrival runway

  10. A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output

    PubMed Central

    Stevanovic, Stefan; Pervan, Boris

    2018-01-01

    We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS phase-lock loop (PLL) linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250
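
    A minimal sketch of the proposed metric: form the discriminator's phase-error estimates from prompt correlator outputs and compare their standard deviation with a threshold of π/4 rad (half of the arctangent discriminator's ±π/2 pull-in region). The correlator inputs and the exact threshold convention here are illustrative assumptions:

```python
import numpy as np

def pll_tracking_metric(i_prompt, q_prompt):
    """Tracking-error standard deviation from discriminator outputs.

    i_prompt / q_prompt: prompt correlator outputs after coherent
    averaging. The arctangent (Costas) discriminator estimates the
    phase error per interval; the metric is the standard deviation of
    those estimates, with pi/4 rad taken as half of the discriminator's
    +/- pi/2 pull-in region, following the abstract's description.
    """
    phase_err = np.arctan(q_prompt / i_prompt)   # rad, per interval
    sigma = phase_err.std(ddof=1)
    threshold = np.pi / 4
    return sigma, sigma < threshold              # metric, pass/fail
```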

  11. [Method for the quality assessment of data collection processes in epidemiological studies].

    PubMed

    Schöne, G; Damerow, S; Hölling, H; Houben, R; Gabrys, L

    2017-10-01

    For the quantitative evaluation of primary data collection processes in epidemiological surveys based on field accompaniment and observation, no test criteria or methodologies are described in the relevant literature, and thus there is no known application in practice. Therefore, methods need to be developed and existing procedures adapted. The aim was to identify quality-relevant developments within quality dimensions by means of inspection points (quality indicators) during the process of data collection. As a result, we seek to implement and establish a methodology for the assessment of overall survey quality, supplementary to standardized data analyses. Monitors detect deviations from the standard of primary data collection during site visits by applying standardized checklists. Quantitative results, overall and for each dimension, are obtained by numerical calculation of the quality indicators. Score results are categorized and color-coded. This visual prioritization indicates the necessity for intervention. The results obtained give clues regarding the current quality of data collection. This allows for the identification of those sections where interventions for quality improvement are needed. In addition, the development of process quality can be shown over time on an intercomparable basis. This methodology for the evaluation of data collection quality can identify deviations from norms, focus quality analyses, and help trace the causes of significant deviations.

  12. Automated lung volumetry from routine thoracic CT scans: how reliable is the result?

    PubMed

    Haas, Matthias; Hamm, Bernd; Niehues, Stefan M

    2014-05-01

    Today, lung volumes can be easily calculated from chest computed tomography (CT) scans. Modern postprocessing workstations allow automated volume measurement of the acquired data sets. However, there are challenges in the use of lung volume as an indicator of pulmonary disease when it is obtained from routine CT. Intra-individual variation and methodologic aspects have to be considered. Our goal was to assess the reliability of volumetric measurements in routine CT lung scans. Forty adult cancer patients whose lungs were unaffected by the disease underwent routine chest CT scans at 3-month intervals, resulting in a total of 302 chest CT scans. Lung volume was calculated by automatic volumetry software. On average, 7.2 CT scans were successfully evaluable per patient (range 2-15). Intra-individual changes were assessed. In the set of patients investigated, lung volume was approximately normally distributed, with a mean of 5283 cm(3) (standard deviation = 947 cm(3), skewness = -0.34, and kurtosis = 0.16). Between different scans in one and the same patient, the median intra-individual standard deviation in lung volume was 853 cm(3) (16% of the mean lung volume). Automatic lung segmentation of routine chest CT scans allows a technically stable estimation of lung volume. However, substantial intra-individual variations have to be considered. A median intra-individual deviation of 16% in lung volume between different routine scans was found. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  13. Measurement of the doubly-polarized ³He(γ,n)pp reaction at 16.5 MeV and its implications for the GDH sum rule

    DOE PAGES

    Laskaris, G.; Yan, X.; Mueller, J. M.; ...

    2015-10-01

    We report new measurements of the doubly-polarized photodisintegration of 3He at an incident photon energy of 16.5 MeV, carried out at the High Intensity γ-ray Source (HIγS) facility located at Triangle Universities Nuclear Laboratory (TUNL). The spin-dependent double-differential cross sections and the contribution from the three-body channel to the Gerasimov–Drell–Hearn (GDH) integrand were extracted and compared with state-of-the-art three-body calculations. The calculations, which include the Coulomb interaction and are in good agreement with the results of previous measurements at 12.8 and 14.7 MeV, deviate from the new cross section results at 16.5 MeV. Lastly, the GDH integrand was found to be about one standard deviation larger than the maximum value predicted by the theories.

  14. A Visual Model for the Variance and Standard Deviation

    ERIC Educational Resources Information Center

    Orris, J. B.

    2011-01-01

    This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.

  15. A Mixed QM/MM Scoring Function to Predict Protein-Ligand Binding Affinity

    PubMed Central

    Hayik, Seth A.; Dunbrack, Roland; Merz, Kenneth M.

    2010-01-01

    Computational methods for predicting protein-ligand binding free energy continue to be popular as a potential cost-cutting method in the drug discovery process. However, accurate predictions are often difficult to make as estimates must be made for certain electronic and entropic terms in conventional force field based scoring functions. Mixed quantum mechanics/molecular mechanics (QM/MM) methods allow electronic effects for a small region of the protein to be calculated, treating the remaining atoms as a fixed charge background for the active site. Such a semi-empirical QM/MM scoring function has been implemented in AMBER using DivCon and tested on a set of 23 metalloprotein-ligand complexes, where QM/MM methods provide a particular advantage in the modeling of the metal ion. The binding affinity of this set of proteins can be calculated with an R2 of 0.64 and a standard deviation of 1.88 kcal/mol without fitting and 0.71 and a standard deviation of 1.69 kcal/mol with fitted weighting of the individual scoring terms. In this study we explore using various methods to calculate terms in the binding free energy equation, including entropy estimates and minimization standards. From these studies we found that using the rotational bond estimate to ligand entropy results in a reasonable R2 of 0.63 without fitting. We also found that using the ESCF energy of the proteins without minimization resulted in an R2 of 0.57, when using the rotatable bond entropy estimate. PMID:21221417

  16. Basic life support: evaluation of learning using simulation and immediate feedback devices.

    PubMed

    Tobase, Lucia; Peres, Heloisa Helena Ciqueto; Tomazini, Edenir Aparecida Sartorelli; Teodoro, Simone Valentim; Ramos, Meire Bruna; Polastri, Thatiane Facholi

    2017-10-30

    to evaluate students' learning in an online course on basic life support with immediate feedback devices, during a simulation of care during cardiorespiratory arrest. a quasi-experimental study, using a before-and-after design. An online course on basic life support was developed and administered to participants, as an educational intervention. Theoretical learning was evaluated by means of a pre- and post-test and, to verify the practice, simulation with immediate feedback devices was used. there were 62 participants, 87% female, 90% in the first and second year of college, with a mean age of 21.47 (standard deviation 2.39). With a 95% confidence level, the mean scores in the pre-test were 6.4 (standard deviation 1.61), and 9.3 in the post-test (standard deviation 0.82, p <0.001); in practice, 9.1 (standard deviation 0.95) with performance equivalent to basic cardiopulmonary resuscitation, according to the feedback device; 43.7 (standard deviation 26.86) mean duration of the compression cycle by second of 20.5 (standard deviation 9.47); number of compressions 167.2 (standard deviation 57.06); depth of compressions of 48.1 millimeter (standard deviation 10.49); volume of ventilation 742.7 (standard deviation 301.12); flow fraction percentage of 40.3 (standard deviation 10.03). the online course contributed to learning of basic life support. In view of the need for technological innovations in teaching and systematization of cardiopulmonary resuscitation, simulation and feedback devices are resources that favor learning and performance awareness in performing the maneuvers.

  17. Opposite associations of age-dependent insulin-like growth factor-I standard deviation scores with nutritional state in normal weight and obese subjects.

    PubMed

    Schneider, Harald Jörn; Saller, Bernhard; Klotsche, Jens; März, Winfried; Erwa, Wolfgang; Wittchen, Hans-Ullrich; Stalla, Günter Karl

    2006-05-01

    Insulin-like growth factor-I (IGF-I) has been suggested to be a prognostic marker for the development of cancer and, more recently, cardiovascular disease. These diseases are closely linked to obesity, but reports of the association of IGF-I with measures of obesity are divergent. In this study, we assessed the association of age-dependent IGF-I standard deviation scores with body mass index (BMI) and intra-abdominal fat accumulation in a large population. A cross-sectional, epidemiological study. IGF-I levels were measured with an automated chemiluminescence assay system in 6282 patients from the DETECT study. Weight, height, and waist and hip circumference were measured according to written instructions. Standard deviation scores (SDS), correcting IGF-I levels for age, were calculated and were used for further analyses. An inverse U-shaped association of IGF-I SDS with BMI, waist circumference, and the ratio of waist circumference to height was found. BMI was positively associated with IGF-I SDS in normal weight subjects, and negatively associated in obese subjects. The highest mean IGF-I SDS was seen at a BMI of 22.5-25 kg/m2 in men (+0.08), and at a BMI of 27.5-30 kg/m2 in women (+0.21). Multiple linear regression models, controlling for different diseases, medications and risk conditions, revealed a significant negative association of BMI with IGF-I SDS. Of the health conditions considered, BMI contributed most to the additional explained variance. IGF-I standard deviation scores are decreased in obese and underweight subjects. These interactions should be taken into account when analyzing the association of IGF-I with diseases and risk conditions.

  18. File Carving and Malware Identification Algorithms Applied to Firmware Reverse Engineering

    DTIC Science & Technology

    2013-03-21

    ...consider a byte value rate-of-change frequency metric [32]. Their system calculates the absolute value of the distance between all consecutive bytes, then ... the rate-of-change means and standard deviations. Karresand and Shahmehri use the same distance metric for both byte value frequency and rate-of-change.
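
    The rate-of-change metric sketched in these fragments is straightforward to prototype. The snippet below (Python; the block size and other details are assumptions, since the excerpt does not specify them) computes per-block means and standard deviations of absolute consecutive-byte differences.

        # Sketch of the rate-of-change idea described above (assumed details):
        # for each fixed-size block, take absolute differences between
        # consecutive byte values and summarize by mean and standard deviation.
        import numpy as np

        def rate_of_change_stats(data: bytes, block_size: int = 512):
            """Yield (mean, std) of |b[i+1] - b[i]| for each block."""
            arr = np.frombuffer(data, dtype=np.uint8).astype(np.int16)
            for start in range(0, len(arr) - 1, block_size):
                block = arr[start:start + block_size]
                if len(block) < 3:
                    break
                diffs = np.abs(np.diff(block))
                yield diffs.mean(), diffs.std(ddof=1)

        # Example: text-like data changes slowly; compressed or encrypted
        # data tends to change rapidly from byte to byte.
        stats = list(rate_of_change_stats(b"hello world " * 100))
        print(stats)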

  19. Uncertainties of Mayak urine data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Guthrie; Vostrotin, Vadim; Vvdensky, Vladimir

    2008-01-01

    For internal dose calculations for the Mayak worker epidemiological study, quantitative estimates of uncertainty of the urine measurements are necessary. Some of the data consist of measurements of 24 h urine excretion on successive days (e.g. 3 or 4 days). In a recent publication, dose calculations were done where the uncertainty of the urine measurements was estimated starting from the statistical standard deviation of these replicate measurements. This approach is straightforward and accurate when the number of replicate measurements is large; however, a Monte Carlo study showed it to be problematic for the actual number of replicate measurements (median from 3 to 4). Also, it is sometimes important to characterize the uncertainty of a single urine measurement. Therefore this alternate method has been developed. A method of parameterizing the uncertainty of Mayak urine bioassay measurements is described. The Poisson lognormal model is assumed and data from 63 cases (1099 urine measurements in all) are used to empirically determine the lognormal normalization uncertainty, given the measurement uncertainties obtained from count quantities. The natural logarithm of the geometric standard deviation of the normalization uncertainty is found to be in the range 0.31 to 0.35, including a measurement component estimated to be 0.2.
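
    For intuition, a geometric standard deviation can be computed from replicates as the exponential of the standard deviation of the natural logs, so ln(GSD) is directly comparable to the 0.31-0.35 range quoted above. A minimal numpy sketch with invented values (not the paper's code or data):

        # Illustrative only: ln(GSD) from three hypothetical daily 24 h
        # urine results for one worker (arbitrary units).
        import numpy as np

        replicates = np.array([0.81, 1.32, 0.95])
        ln_gsd = np.log(replicates).std(ddof=1)   # natural log of the GSD
        print(ln_gsd, np.exp(ln_gsd))             # GSD itself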

  20. Objective image characterization of a spectral CT scanner with dual-layer detector

    NASA Astrophysics Data System (ADS)

    Ozguner, Orhan; Dhanantwari, Amar; Halliburton, Sandra; Wen, Gezheng; Utrup, Steven; Jordan, David

    2018-01-01

    This work evaluated the performance of a detector-based spectral CT system by obtaining objective reference data, evaluating the attenuation response of iodine and the accuracy of iodine quantification, and comparing conventional CT and virtual monoenergetic images in three common phantoms. Scanning was performed using the hospital's clinical adult body protocol. The modulation transfer function (MTF) was calculated for a tungsten wire and visual line-pair targets were evaluated. The image noise power spectrum (NPS) and pixel standard deviation were calculated. MTF for monoenergetic images agreed with conventional images within 0.05 lp cm-1. NPS curves indicated that the noise texture of 70 keV monoenergetic images is similar to that of conventional images. Standard deviation measurements showed that monoenergetic images have lower noise except at 40 keV. Mean CT number and CNR agreed with conventional images at 75 keV. Measured iodine concentration agreed with the true concentration within 6% for inserts at the center of the phantom. The performance of monoenergetic images from detector-based spectral CT is the same as, or better than, that of conventional images. Spectral acquisition and reconstruction with a detector-based platform represents the physical behaviour of iodine as expected and accurately quantifies the material concentration.

  1. Real-time in vivo rectal wall dosimetry using plastic scintillation detectors for patients with prostate cancer

    PubMed Central

    Wootton, Landon; Kudchadker, Rajat; Lee, Andrew; Beddar, Sam

    2014-01-01

    We designed and constructed an in vivo dosimetry system using plastic scintillation detectors (PSDs) to monitor dose to the rectal wall in patients undergoing intensity-modulated radiation therapy for prostate cancer. Five patients were enrolled in an Institutional Review Board–approved protocol for twice weekly in vivo dose monitoring with our system, resulting in a total of 142 in vivo dose measurements. PSDs were attached to the surface of endorectal balloons used for prostate immobilization to place the PSDs in contact with the rectal wall. Absorbed dose was measured in real time and the total measured dose was compared with the dose calculated by the treatment planning system on the daily CT image dataset. The mean difference between measured and calculated doses for the entire patient population was −0.4% (standard deviation 2.8%). The mean difference between daily measured and calculated doses for each patient ranged from −3.3% to 3.3% (standard deviation ranged from 5.6% to 7.1% for 4 patients and was 14.0% for the last, for whom optimal positioning of the detector was difficult owing to the patient’s large size). Patients tolerated the detectors well and the treatment workflow was not compromised. Overall, PSDs performed well as in vivo dosimeters, providing excellent accuracy, real-time measurement, and reusability. PMID:24434775

  2. New reference charts for testicular volume in Dutch children and adolescents allow the calculation of standard deviation scores.

    PubMed

    Joustra, Sjoerd D; van der Plas, Evelyn M; Goede, Joery; Oostdijk, Wilma; Delemarre-van de Waal, Henriette A; Hack, Wilfried W M; van Buuren, Stef; Wit, Jan M

    2015-06-01

    Accurate calculations of testicular volume standard deviation (SD) scores are not currently available. We constructed LMS-smoothed age-reference charts for testicular volume in healthy boys. The LMS method was used to calculate reference data, based on testicular volumes from ultrasonography and Prader orchidometer of 769 healthy Dutch boys aged 6 months to 19 years. We also explored the association between testicular growth and pubic hair development, and data were compared to orchidometric testicular volumes from the 1997 Dutch nationwide growth study. The LMS-smoothed reference charts showed that no revision of the definition of normal onset of male puberty - from nine to 14 years of age - was warranted. In healthy boys, the pubic hair stage SD scores corresponded with testicular volume SD scores (r = 0.394). However, testes were relatively small for pubic hair stage in Klinefelter's syndrome and relatively large in immunoglobulin superfamily member 1 deficiency syndrome. The age-corrected SD scores for testicular volume will aid in the diagnosis and follow-up of abnormalities in the timing and progression of male puberty and in research evaluations. The SD scores can be compared with pubic hair SD scores to identify discrepancies between cell functions that result in relative microorchidism or macroorchidism. ©2015 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
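
    For readers unfamiliar with the LMS method, the conversion from a measurement to an SD score uses age-specific L (skewness), M (median) and S (coefficient of variation) values. A minimal Python sketch with made-up parameters follows; the study's tabulated reference values are not reproduced here.

        # LMS transformation to an SD score; the formula is standard, the
        # example L, M, S values are invented for illustration.
        import math

        def lms_sds(x: float, L: float, M: float, S: float) -> float:
            if abs(L) > 1e-12:
                return ((x / M) ** L - 1.0) / (L * S)
            return math.log(x / M) / S  # limiting case as L -> 0

        # Hypothetical example: testicular volume 9 ml at an age where
        # L = 0.5, M = 8.0 ml, S = 0.25.
        print(lms_sds(9.0, L=0.5, M=8.0, S=0.25))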

  3. Real-time in vivo rectal wall dosimetry using plastic scintillation detectors for patients with prostate cancer

    NASA Astrophysics Data System (ADS)

    Wootton, Landon; Kudchadker, Rajat; Lee, Andrew; Beddar, Sam

    2014-02-01

    We designed and constructed an in vivo dosimetry system using plastic scintillation detectors (PSDs) to monitor dose to the rectal wall in patients undergoing intensity-modulated radiation therapy for prostate cancer. Five patients were enrolled in an Institutional Review Board-approved protocol for twice weekly in vivo dose monitoring with our system, resulting in a total of 142 in vivo dose measurements. PSDs were attached to the surface of endorectal balloons used for prostate immobilization to place the PSDs in contact with the rectal wall. Absorbed dose was measured in real time and the total measured dose was compared with the dose calculated by the treatment planning system on the daily computed tomographic image dataset. The mean difference between measured and calculated doses for the entire patient population was -0.4% (standard deviation 2.8%). The mean difference between daily measured and calculated doses for each patient ranged from -3.3% to 3.3% (standard deviation ranged from 5.6% to 7.1% for four patients and was 14.0% for the last, for whom optimal positioning of the detector was difficult owing to the patient's large size). Patients tolerated the detectors well and the treatment workflow was not compromised. Overall, PSDs performed well as in vivo dosimeters, providing excellent accuracy, real-time measurement and reusability.

  4. The nuclear electric quadrupole moment of copper.

    PubMed

    Santiago, Régis Tadeu; Teodoro, Tiago Quevedo; Haiduke, Roberto Luiz Andrade

    2014-06-21

    The nuclear electric quadrupole moment (NQM) of the (63)Cu nucleus was determined by an indirect approach, combining accurate experimental nuclear quadrupole coupling constants (NQCCs) with relativistic Dirac-Coulomb coupled cluster calculations of the electric field gradient (EFG). The data obtained at the highest level of calculation, DC-CCSD-T, from 14 linear molecules containing the copper atom give rise to an indicated NQM of -198(10) mbarn. This result deviates slightly from the previously accepted standard value given by the muonic method, -220(15) mbarn, although the error bars overlap.

  5. WASP (Write a Scientific Paper) using Excel - 7: The t-distribution.

    PubMed

    Grech, Victor

    2018-03-01

    The calculation of descriptive statistics after data collection provides researchers with an overview of the shape and nature of their datasets, along with basic descriptors, and may help identify true or incorrect outlier values. This exercise should always precede inferential statistics, when possible. This paper provides some pointers for doing so in Microsoft Excel, both statically and dynamically, with Excel's functions, including the calculation of standard deviation and variance and the relevance of the t-distribution. Copyright © 2018 Elsevier B.V. All rights reserved.
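
    As a cross-check of the Excel workflow the paper describes, the same sample statistics and a t-based confidence interval can be computed in a few lines of Python (illustrative values only; the ddof=1 denominators correspond to Excel's VAR.S and STDEV.S).

        # Sample variance, sample SD, and a t-based 95% CI for the mean.
        import numpy as np
        from scipy import stats

        data = np.array([5.1, 4.8, 5.6, 5.0, 4.7, 5.3])
        n = len(data)
        var = data.var(ddof=1)            # sample variance (like VAR.S)
        sd = data.std(ddof=1)             # sample SD (like STDEV.S)
        sem = sd / np.sqrt(n)
        t_crit = stats.t.ppf(0.975, df=n - 1)
        ci = (data.mean() - t_crit * sem, data.mean() + t_crit * sem)
        print(var, sd, ci)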

  6. Analyses and assessments of span wise gust gradient data from NASA B-57B aircraft

    NASA Technical Reports Server (NTRS)

    Frost, Walter; Chang, Ho-Pen; Ringnes, Erik A.

    1987-01-01

    Analysis of turbulence measured across the airfoil of a Canberra B-57 aircraft is reported. The aircraft is instrumented with probes for measuring wind at both wing tips and at the nose. Statistical properties of the turbulence are reported. These consist of the standard deviations of turbulence measured by each individual probe; the standard deviations and probability distributions of differences in turbulence measured between probes; and auto- and two-point spatial correlations and spectra. Procedures associated with calculating two-point spatial correlations and spectra from the data are addressed. Methods and correction procedures for assuring the accuracy of aircraft-measured winds are also described. Results are found, in general, to agree with correlations existing in the literature. The velocity spatial differences fit a Gaussian/Bessel-type probability distribution. The turbulence agrees with the von Karman turbulence correlation and with two-point spatial correlations developed from the von Karman correlation.

  7. Routine sampling and the control of Legionella spp. in cooling tower water systems.

    PubMed

    Bentham, R H

    2000-10-01

    Cooling water samples from 31 cooling tower systems were cultured for Legionella over a 16-week summer period. The selected systems were known to be colonized by Legionella. Mean Legionella counts and standard deviations were calculated and time series correlograms prepared for each system. The standard deviations of Legionella counts in all the systems were very large, indicating great variability in the systems over the time period. Time series analyses demonstrated that in the majority of cases there was no significant relationship between the Legionella counts in the cooling tower at time of collection and the culture result once it was available. In the majority of systems (25/28), culture results from Legionella samples taken from the same systems 2 weeks apart were not statistically related. The data suggest that determinations of health risks from cooling towers cannot be reliably based upon single or infrequent Legionella tests.
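
    A correlogram of the kind described reduces to lag-k autocorrelations of the count series. A hedged sketch with invented counts follows (log-transformed to tame the large standard deviations noted above); it illustrates the analysis, not the study's data.

        # Lag-k autocorrelations of a hypothetical Legionella count series.
        import numpy as np

        counts = np.array([200, 5000, 300, 120, 8000, 600, 50, 900,
                           12000, 400, 700, 150, 3000, 250, 100, 5000], float)
        x = np.log10(counts + 1.0)  # reduce the influence of extreme counts

        def autocorr(x, max_lag):
            x = x - x.mean()
            denom = np.sum(x * x)
            return [np.sum(x[:-k] * x[k:]) / denom
                    for k in range(1, max_lag + 1)]

        # |r_k| below roughly 2/sqrt(n) is consistent with "no significant
        # relationship" between samples taken at different times.
        print(autocorr(x, max_lag=5), 2.0 / np.sqrt(len(x)))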

  8. Dosimetric verification of lung cancer treatment using the CBCTs estimated from limited-angle on-board projections.

    PubMed

    Zhang, You; Yin, Fang-Fang; Ren, Lei

    2015-08-01

    Lung cancer treatment is susceptible to treatment errors caused by interfractional anatomical and respirational variations of the patient. On-board treatment dose verification is especially critical for the lung stereotactic body radiation therapy due to its high fractional dose. This study investigates the feasibility of using cone-beam (CB)CT images estimated by a motion modeling and free-form deformation (MM-FD) technique for on-board dose verification. Both digital and physical phantom studies were performed. Various interfractional variations featuring patient motion pattern change, tumor size change, and tumor average position change were simulated from planning CT to on-board images. The doses calculated on the planning CT (planned doses), the on-board CBCT estimated by MM-FD (MM-FD doses), and the on-board CBCT reconstructed by the conventional Feldkamp-Davis-Kress (FDK) algorithm (FDK doses) were compared to the on-board dose calculated on the "gold-standard" on-board images (gold-standard doses). The absolute deviations of minimum dose (ΔDmin), maximum dose (ΔDmax), and mean dose (ΔDmean), and the absolute deviations of prescription dose coverage (ΔV100%) were evaluated for the planning target volume (PTV). In addition, 4D on-board treatment dose accumulations were performed using 4D-CBCT images estimated by MM-FD in the physical phantom study. The accumulated doses were compared to those measured using optically stimulated luminescence (OSL) detectors and radiochromic films. Compared with the planned doses and the FDK doses, the MM-FD doses matched much better with the gold-standard doses. For the digital phantom study, the average (± standard deviation) ΔDmin, ΔDmax, ΔDmean, and ΔV100% (values normalized by the prescription dose or the total PTV) between the planned and the gold-standard PTV doses were 32.9% (±28.6%), 3.0% (±2.9%), 3.8% (±4.0%), and 15.4% (±12.4%), respectively. The corresponding values of FDK PTV doses were 1.6% (±1.9%), 1.2% (±0.6%), 2.2% (±0.8%), and 17.4% (±15.3%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.3% (±0.2%), 0.9% (±0.6%), 0.6% (±0.4%), and 1.0% (±0.8%), respectively. Similarly, for the physical phantom study, the average ΔDmin, ΔDmax, ΔDmean, and ΔV100% of planned PTV doses were 38.1% (±30.8%), 3.5% (±5.1%), 3.0% (±2.6%), and 8.8% (±8.0%), respectively. The corresponding values of FDK PTV doses were 5.8% (±4.5%), 1.6% (±1.6%), 2.0% (±0.9%), and 9.3% (±10.5%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.4% (±0.8%), 0.8% (±1.0%), 0.5% (±0.4%), and 0.8% (±0.8%), respectively. For the 4D dose accumulation study, the average (± standard deviation) absolute dose deviation (normalized by local doses) between the accumulated doses and the OSL measured doses was 3.3% (±2.7%). The average gamma index (3%/3 mm) between the accumulated doses and the radiochromic film measured doses was 94.5% (±2.5%). MM-FD estimated 4D-CBCT enables accurate on-board dose calculation and accumulation for lung radiation therapy. It can potentially be valuable for treatment quality assessment and adaptive radiation therapy.

  9. allantools: Allan deviation calculation

    NASA Astrophysics Data System (ADS)

    Wallin, Anders E. E.; Price, Danny C.; Carson, Cantwell G.; Meynadier, Frédéric

    2018-04-01

    allantools calculates Allan deviation and related time and frequency statistics. The library is written in Python and has a GPL v3+ license. It takes input data consisting of evenly spaced observations of either fractional frequency or phase in seconds. Deviations are calculated for given tau values in seconds. Several noise generators for creating synthetic datasets are also included.
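
    A minimal usage example, assuming the API documented for recent allantools releases (function and argument names may differ in older versions):

        # Overlapping Allan deviation of synthetic white phase noise.
        import allantools

        # White phase noise from the bundled noise generators.
        phase = allantools.noise.white(num_points=10000)

        # Decade-spaced tau values for phase data sampled at 1 Hz.
        taus, adev, adev_err, n = allantools.oadev(phase, rate=1.0,
                                                   data_type="phase",
                                                   taus="decade")
        print(list(zip(taus, adev)))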

  10. PyForecastTools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven

    The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias) and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale, including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
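
    For illustration, two of the robust scale measures listed, the median absolute deviation and the robust standard deviation derived from it, can be written in a few lines of numpy. This is a sketch of the underlying formulas, not the PyForecastTools API.

        # Median absolute deviation (MAD) and a MAD-based robust SD.
        import numpy as np

        def mad(x):
            x = np.asarray(x, float)
            return np.median(np.abs(x - np.median(x)))

        def robust_std(x):
            # 1.4826 scales the MAD to estimate sigma for normal data.
            return 1.4826 * mad(x)

        x = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 45.0])  # one gross outlier
        print(np.std(x, ddof=1), robust_std(x))  # robust estimate resists it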

  11. Selection of the effect size for sample size determination for a continuous response in a superiority clinical trial using a hybrid classical and Bayesian procedure.

    PubMed

    Ciarleglio, Maria M; Arendt, Christopher D; Peduzzi, Peter N

    2016-06-01

    When designing studies that have a continuous outcome as the primary endpoint, the hypothesized effect size (ES), that is, the hypothesized difference in means (Δ) relative to the assumed variability of the endpoint (σ), plays an important role in sample size and power calculations. Point estimates for Δ and σ are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed. This article presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of Δ and σ into the study's power calculation. Conditional expected power, which averages the traditional power curve using the prior distributions of Δ and σ as the averaging weight, is used, and the value of ES is found that equates the prespecified frequentist power (1 - β) and the conditional expected power of the trial. This hypothesized effect size is then used in traditional sample size calculations when determining sample size for the study. The value of ES found using this method may be expressed as a function of the prior means of Δ and σ and their prior standard deviations. We show that the "naïve" estimate of the effect size, that is, the ratio of prior means, should be down-weighted to account for the variability in the parameters. An example is presented for designing a placebo-controlled clinical trial testing the antidepressant effect of alprazolam as monotherapy for major depression. Through this method, we are able to formally integrate prior information on the uncertainty and variability of both the treatment effect and the common standard deviation into the design of the study while maintaining a frequentist framework for the final analysis. Solving for the effect size which the study has a high probability of correctly detecting, based on the available prior information on the difference Δ and the standard deviation σ, provides a valuable, substantiated estimate that can form the basis for discussion about the study's feasibility during the design phase. © The Author(s) 2016.
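
    Conditional expected power as described lends itself to a simple Monte Carlo check: draw Δ and σ from their priors, evaluate the classical power curve at each draw, and average. The sketch below assumes a two-sample normal test and invented priors; it is not the authors' implementation.

        # Monte Carlo conditional expected power under assumed priors.
        import numpy as np
        from scipy import stats

        def classical_power(delta, sigma, n_per_arm, alpha=0.05):
            z = stats.norm.ppf(1 - alpha / 2)
            ncp = (delta / sigma) * np.sqrt(n_per_arm / 2.0)
            return stats.norm.cdf(ncp - z)

        rng = np.random.default_rng(1)
        # Hypothetical priors: Delta ~ N(3, 1^2); sigma ~ N(8, 1.5^2), > 0.
        delta = rng.normal(3.0, 1.0, 100_000)
        sigma = np.clip(rng.normal(8.0, 1.5, 100_000), 1e-6, None)

        cep = classical_power(delta, sigma, n_per_arm=120).mean()
        naive = classical_power(3.0, 8.0, n_per_arm=120)
        print(cep, naive)  # expected power is often below the naive power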

  12. Healthcare Financial Management Association, Principles and Practices Board. Statement No. 16. Classifying, valuing, and analyzing accounts receivable related to patient services.

    PubMed

    1993-05-01

    This Principles and Practices Board project was undertaken in response to the frequent requests from HFMA members for a standard calculation of "days of revenue in receivables." The board's work on this project indicated that every element of the calculation required standards, which is what this statement provides. Since there have been few standards for accounts receivable related to patient services, the industry follows a variety of practices, which often differ from each other. This statement is intended to provide a framework for enhanced external comparison of accounts receivable related to patient services, and thereby improve management information related to this very important asset. Thus, the standards described in this statement represent long-term goals for gradual transition of recordkeeping practices and not a sudden or revolutionary change. The standards described in this statement will provide the necessary framework for the most meaningful external comparisons. Furthermore, management's understanding of deviations from these standards will immediately assist in analysis of differences in data between providers.

  13. How do we assign punishment? The impact of minimal and maximal standards on the evaluation of deviants.

    PubMed

    Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven

    2010-09-01

    To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.

  14. Detection and quantification system for monitoring instruments

    DOEpatents

    Dzenitis, John M [Danville, CA; Hertzog, Claudia K [Houston, TX; Makarewicz, Anthony J [Livermore, CA; Henderer, Bruce D [Livermore, CA; Riot, Vincent J [Oakland, CA

    2008-08-12

    A method of detecting real events by obtaining a set of recent signal results, calculating measures of the noise or variation based on the set of recent signal results, calculating an expected baseline value based on the set of recent signal results, determining sample deviation, calculating an allowable deviation by multiplying the sample deviation by a threshold factor, setting an alarm threshold from the baseline value plus or minus the allowable deviation, and determining whether the signal results exceed the alarm threshold.
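
    A minimal sketch of the claimed scheme follows, under assumed details (trailing window length, median baseline) that the abstract does not specify.

        # Baseline and spread from a trailing window of recent results;
        # alarm when a result exceeds baseline +/- factor * sample deviation.
        import numpy as np

        def alarm(signal, window=20, threshold_factor=4.0):
            flags = np.zeros(len(signal), dtype=bool)
            for i in range(window, len(signal)):
                recent = signal[i - window:i]
                baseline = np.median(recent)        # expected baseline value
                dev = recent.std(ddof=1)            # sample deviation
                allowable = threshold_factor * dev  # allowable deviation
                flags[i] = abs(signal[i] - baseline) > allowable
            return flags

        rng = np.random.default_rng(2)
        sig = rng.normal(100.0, 3.0, 200)
        sig[150] += 40.0                            # injected real event
        print(np.nonzero(alarm(sig))[0])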

  15. Experimental Validation of Thermal Retinal Models of Damage from Laser Radiation

    DTIC Science & Technology

    1979-08-01

    ...for measuring relative intensity profile with a thermocouple or fiber-optic sensor ... calculated relative intensity profiles measured by 5- and 10-µm-radius sensors of a Gaussian beam, with standard deviation of 10 µm ... the Air Force developed a model for the mathematical prediction of thermal effects of laser radiation on the eye (8). Given the characteristics...

  16. Alpha decay studies on Po isotopes using different versions of nuclear potentials

    NASA Astrophysics Data System (ADS)

    Santhosh, K. P.; Sukumaran, Indu

    2017-12-01

    The alpha decays from 186-224Po isotopes have been studied using 25 different versions of nuclear potentials so as to select a suitable nuclear potential for alpha decay studies. The computed standard deviation of the calculated half-lives in comparison with the experimental data suggested that proximity 2003-I is the apt form of nuclear potential for alpha decay studies, as it possesses the least standard deviation, σ = 0.620. Among the different proximity potentials, proximity 1966 (σ = 0.630) and proximity 1977 (σ = 0.636) are also found to work well in alpha decay studies, with low deviation. Among the other versions of nuclear potentials (other than proximity potentials), Bass 1980 is suggested to be a significant form of nuclear potential because of its good predictive power. However, while the other forms of potentials are able to reproduce the experimental data to some extent, they cannot be considered apposite potentials for alpha decay studies in their present form. Since the models correlate satisfactorily with experiment, the alpha decay half-lives of certain Po isotopes that have not yet been detected experimentally are predicted.
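
    The standard deviation used to rank the potentials in such comparisons is conventionally computed on decimal logarithms of the half-lives. A short sketch with hypothetical values (the paper's data are not reproduced):

        # Standard deviation of log10 half-life residuals; some papers use
        # 1/n and others 1/(n-1) in the denominator, so check the convention.
        import numpy as np

        def log_half_life_sigma(t_calc, t_exp):
            d = np.log10(np.asarray(t_calc)) - np.log10(np.asarray(t_exp))
            return np.sqrt(np.mean(d ** 2))

        # Hypothetical half-lives (seconds) for a few isotopes:
        t_exp = [3.1e-2, 1.8e0, 5.2e2, 7.7e5]
        t_calc = [2.5e-2, 2.6e0, 3.9e2, 1.5e6]
        print(log_half_life_sigma(t_calc, t_exp))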

  17. Phased-array vector velocity estimation using transverse oscillations.

    PubMed

    Pihl, Michael J; Marcher, Jonne; Jensen, Jorgen A

    2012-12-01

    A method for estimating the 2-D vector velocity of blood using a phased-array transducer is presented. The approach is based on the transverse oscillation (TO) method. The purposes of this work are to expand the TO method to a phased-array geometry and to broaden the potential clinical applicability of the method. A phased-array transducer has a smaller footprint and a larger field of view than a linear array, and is therefore more suited for, e.g., cardiac imaging. The method relies on suitable TO fields, and a beamforming strategy employing diverging TO beams is proposed. The implementation of the TO method using a phased-array transducer for vector velocity estimation is evaluated through simulations, and flow-rig measurements are acquired using an experimental scanner. The vast number of calculations needed to perform flow simulations makes the optimization of the TO fields a cumbersome process. Therefore, three performance metrics are proposed. They are calculated based on the complex TO spectrum of the combined TO fields. It is hypothesized that the performance metrics are related to the performance of the velocity estimates. The simulations show that the squared correlation values range from 0.79 to 0.92, indicating a correlation between the performance metrics of the TO spectrum and the velocity estimates. Because these performance metrics are much more readily computed, the TO fields can be optimized faster for improved velocity estimation in both simulations and measurements. For simulations of a parabolic flow at a depth of 10 cm, a relative (to the peak velocity) bias and standard deviation of 4% and 8%, respectively, are obtained. Overall, the simulations show that the TO method implemented on a phased-array transducer is robust, with relative standard deviations around 10% in most cases. The flow-rig measurements show similar results. At a depth of 9.5 cm using 32 emissions per estimate, the relative standard deviation is 9% and the relative bias is -9%. At the center of the vessel, the velocity magnitude is estimated to be 0.25 ± 0.023 m/s, compared with an expected peak velocity magnitude of 0.25 m/s, and the beam-to-flow angle is calculated to be 89.3° ± 0.77°, compared with an expected angle value between 89° and 90°. For steering angles up to ±20°, the relative standard deviation is less than 20%. The results also show that a 64-element transducer implementation is feasible, but with poorer performance compared with a 128-element transducer. The simulation and experimental results demonstrate that the TO method is suitable for use in conjunction with a phased-array transducer, and that 2-D vector velocity estimation is possible down to a depth of 15 cm.

  18. Investigating the generalisation of an atlas-based synthetic-CT algorithm to another centre and MR scanner for prostate MR-only radiotherapy

    NASA Astrophysics Data System (ADS)

    Wyatt, Jonathan J.; Dowling, Jason A.; Kelly, Charles G.; McKenna, Jill; Johnstone, Emily; Speight, Richard; Henry, Ann; Greer, Peter B.; McCallum, Hazel M.

    2017-12-01

    There is increasing interest in MR-only radiotherapy planning since it provides superb soft-tissue contrast without the registration uncertainties inherent in a CT-MR registration. However, MR images cannot readily provide the electron density information necessary for radiotherapy dose calculation. An algorithm which generates synthetic CTs for dose calculations from MR images of the prostate using an atlas of 3 T MR images has been previously reported by two of the authors. This paper aimed to evaluate this algorithm using MR data acquired at a different field strength and a different centre to the algorithm atlas. Twenty-one prostate patients received planning 1.5 T MR and CT scans with routine immobilisation devices on a flat-top couch set-up using external lasers. The MR receive coils were supported by a coil bridge. Synthetic CTs were generated from the planning MR images with (sCT1V) and without (sCT) a one-voxel body contour expansion included in the algorithm. This was to test whether this expansion was required for 1.5 T images. Both synthetic CTs were rigidly registered to the planning CT (pCT). A 6 MV volumetric modulated arc therapy plan was created on the pCT and recalculated on the sCT and sCT1V. The synthetic CTs' dose distributions were compared to the dose distribution calculated on the pCT. The percentage dose difference at isocentre without the body contour expansion (sCT-pCT) was ΔD_sCT = (0.9 ± 0.8)% and with it (sCT1V-pCT) was ΔD_sCT1V = (-0.7 ± 0.7)% (mean ± one standard deviation). The sCT1V result was within one standard deviation of zero and agreed with the result reported previously using 3 T MR data. The sCT dose difference only agreed within two standard deviations. The mean ± one standard deviation gamma pass rate was Γ_sCT = (96.1 ± 2.9)% for the sCT and Γ_sCT1V = (98.8 ± 0.5)% for the sCT1V (with 2% global dose difference and 2 mm distance-to-agreement gamma criteria). The one-voxel body contour expansion improves the synthetic CT accuracy for MR images acquired at 1.5 T but requires the MR voxel size to be similar to the atlas MR voxel size. This study suggests that the atlas-based algorithm can be generalised to MR data acquired using a different field strength at a different centre.

  19. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 1: January

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-07-01

    The upper atmosphere was studied based on 1980 to 1985 twice-daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of January. Included are global analyses of: (1) mean temperature and standard deviation; (2) mean geopotential height and standard deviation; (3) mean density and standard deviation (all for 13 levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (4) mean dew point and standard deviation for the 13 levels; and (5) jet stream at levels 500 through 30 mb. Also included are global 5-degree grid point wind roses for the 13 pressure levels.

  20. Effects of insertion speed and trocar stiffness on the accuracy of needle position for brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGill, Carl S.; Schwartz, Jonathon A.; Moore, Jason Z.

    2012-04-15

    Purpose: In prostate brachytherapy, accurate positioning of the needle tip to place radioactive seeds at its target site is critical for successful radiation treatment. During the procedure, needle deflection leads to seed misplacement and suboptimal radiation dose to cancerous cells. In practice, radiation oncologists commonly use high-speed hand needle insertion to minimize displacement of the prostate as well as the needle deflection. The effects of speed during needle insertion and stiffness of the trocar (a solid rod inside the hollow cannula) on needle deflection are studied. Methods: Needle insertion experiments into phantom were performed using a 2² factorial design (2 parameters at 2 levels), with each condition having replicates. Analysis of the deflection data included calculating the average, standard deviation, and analysis of variance (ANOVA) to find significant single and two-way interaction factors. Results: The stiffer tungsten carbide trocar is effective in reducing the average and standard deviation of needle deflection. The fast insertion speed together with the stiffer trocar generated the smallest average and standard deviation of needle deflection for almost all cases. Conclusions: The combination of a stiff tungsten carbide trocar and fast needle insertion speed is important to decreasing needle deflection. The knowledge gained from this study can be used to improve the accuracy of needle insertion during brachytherapy procedures.
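
    The 2² factorial analysis described can be prototyped with statsmodels. The sketch below uses invented deflection values purely to show the structure of the design and the ANOVA call; it is not the study's data.

        # Two-way ANOVA for a balanced 2x2 factorial design with replicates.
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        df = pd.DataFrame({
            "speed":  ["slow", "slow", "fast", "fast"] * 4,
            "trocar": ["standard", "stiff"] * 8,
            "deflection_mm": [4.1, 2.9, 3.2, 1.8, 4.4, 3.1, 3.0, 1.6,
                              3.9, 2.7, 3.4, 1.9, 4.2, 3.0, 2.8, 1.7],
        })

        summary = df.groupby(["speed", "trocar"])["deflection_mm"].agg(
            ["mean", "std"])
        model = ols("deflection_mm ~ C(speed) * C(trocar)", data=df).fit()
        print(summary)
        print(sm.stats.anova_lm(model, typ=2))  # main effects + interaction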

  1. Differentiating epileptic from non-epileptic high frequency intracerebral EEG signals with measures of wavelet entropy.

    PubMed

    Mooij, Anne H; Frauscher, Birgit; Amiri, Mina; Otte, Willem M; Gotman, Jean

    2016-12-01

    To assess whether there is a difference in the background activity in the ripple band (80-200 Hz) between epileptic and non-epileptic channels, and to assess whether this difference is sufficient for their reliable separation. We calculated mean and standard deviation of wavelet entropy in 303 non-epileptic and 334 epileptic channels from 50 patients with intracerebral depth electrodes and used these measures as predictors in a multivariable logistic regression model. We assessed sensitivity, positive predictive value (PPV) and negative predictive value (NPV) based on a probability threshold corresponding to 90% specificity. The probability of a channel being epileptic increased with higher mean (p=0.004) and particularly with higher standard deviation (p<0.0001). The performance of the model was however not sufficient for fully classifying the channels. With a threshold corresponding to 90% specificity, sensitivity was 37%, PPV was 80%, and NPV was 56%. A channel with a high standard deviation of entropy is likely to be epileptic; with a threshold corresponding to 90% specificity our model can reliably select a subset of epileptic channels. Most studies have concentrated on brief ripple events. We showed that background activity in the ripple band also has some ability to discriminate epileptic channels. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
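
    The modelling step reduces to a two-feature logistic regression with a probability cut-off chosen for 90% specificity. An illustrative sklearn sketch on synthetic features follows (not the study's data; effect sizes are invented).

        # Logistic regression on mean and SD of wavelet entropy, with the
        # threshold set so ~90% of non-epileptic channels fall below it.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)
        n0, n1 = 303, 334  # channel counts taken from the abstract
        X = np.vstack([
            np.column_stack([rng.normal(0.0, 1.0, n0),    # mean, non-epileptic
                             rng.normal(0.0, 1.0, n0)]),  # SD, non-epileptic
            np.column_stack([rng.normal(0.3, 1.0, n1),    # mean, epileptic
                             rng.normal(0.8, 1.0, n1)]),  # SD (stronger effect)
        ])
        y = np.r_[np.zeros(n0), np.ones(n1)]

        clf = LogisticRegression().fit(X, y)
        p = clf.predict_proba(X)[:, 1]
        thresh = np.quantile(p[y == 0], 0.90)  # 90% specificity cut-off
        pred = p >= thresh
        sens = pred[y == 1].mean()
        ppv = y[pred].mean()
        print(thresh, sens, ppv)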

  2. LH750 hematology analyzers to identify malaria and dengue and distinguish them from other febrile illnesses.

    PubMed

    Sharma, P; Bhargava, M; Sukhachev, D; Datta, S; Wattal, C

    2014-02-01

    Tropical febrile illnesses such as malaria and dengue are challenging to differentiate clinically. Automated cellular indices from hematology analyzers may afford a preliminary rapid distinction. Blood count and VCS parameters from 114 malaria patients, 105 dengue patients, and 105 febrile controls without dengue or malaria were analyzed. Statistical discriminant functions were generated, and their diagnostic performances were assessed by ROC curve analysis. Three statistical functions were generated: (i) a malaria-vs.-controls factor incorporating platelet count and standard deviations of lymphocyte volume and conductivity that identified malaria with 90.4% sensitivity and 88.6% specificity; (ii) a dengue-vs.-controls factor incorporating platelet count, lymphocyte percentage and standard deviation of lymphocyte conductivity that identified dengue with 81.0% sensitivity and 77.1% specificity; and (iii) a febrile-controls-vs.-malaria/dengue factor incorporating mean corpuscular hemoglobin concentration, neutrophil percentage, mean lymphocyte and monocyte volumes, and standard deviation of monocyte volume that distinguished malaria and dengue from other febrile illnesses with 85.1% sensitivity and 91.4% specificity. Leukocyte abnormalities quantitated by automated analyzers successfully identified malaria and dengue and distinguished them from other fevers. These economic discriminant functions can be rapidly calculated by analyzer software programs to generate electronic flags that trigger specific testing. They could potentially transform diagnostic approaches to tropical febrile illnesses in cost-constrained settings. © 2013 John Wiley & Sons Ltd.

  3. Visualizing excipient composition and homogeneity of Compound Liquorice Tablets by near-infrared chemical imaging

    NASA Astrophysics Data System (ADS)

    Wu, Zhisheng; Tao, Ou; Cheng, Wei; Yu, Lu; Shi, Xinyuan; Qiao, Yanjiang

    2012-02-01

    This study demonstrated that near-infrared chemical imaging (NIR-CI) is a promising technology for visualizing the spatial distribution and homogeneity of Compound Liquorice Tablets. The starch distribution (and, indirectly, the plant extract) could be spatially determined using the basic analysis of correlation between analytes (BACRA) method. The correlation coefficients between the starch spectrum and the spectrum of each sample were greater than 0.95. Building on the accurate determination of the starch distribution, a method to assess the homogeneity of the distribution was proposed based on histogram graphs. The result demonstrated that the starch distribution in sample 3 was relatively heterogeneous according to four statistical parameters. Furthermore, the agglomerate domains in each tablet were detected using score image layers from principal component analysis (PCA). Finally, a novel method named Standard Deviation of Macropixel Texture (SDMT) was introduced to detect agglomerates and heterogeneity based on binary images. Each binary image was divided into macropixels of different side lengths, and the number of zero values in each macropixel was counted to calculate a standard deviation; a curve was then fitted to the relationship between the standard deviation and the macropixel side length. The results demonstrated inter-tablet heterogeneity of both the starch and total-compound distributions and, at the same time, indicated the similarity of the starch distribution and the inconsistency of the total-compound distribution within tablets, according to the slope and intercept of the fitted curve.
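
    The SDMT statistic as described can be prototyped directly: tile the binary image into macropixels of side L, count zeros per tile, and take the standard deviation across tiles. A numpy sketch under those assumed details (the paper's images and exact implementation are not available here):

        # Zero-count standard deviation across macropixels of varying size.
        import numpy as np

        def macropixel_zero_sd(binary_img, side):
            h, w = binary_img.shape
            h, w = h - h % side, w - w % side        # crop to whole tiles
            tiles = (binary_img[:h, :w]
                     .reshape(h // side, side, w // side, side)
                     .swapaxes(1, 2).reshape(-1, side * side))
            zero_counts = (tiles == 0).sum(axis=1)
            return zero_counts.std(ddof=1)

        rng = np.random.default_rng(4)
        img = (rng.random((128, 128)) > 0.4).astype(np.uint8)  # toy binary map
        for side in (4, 8, 16, 32):
            print(side, macropixel_zero_sd(img, side))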

  4. Variability in Wechsler Adult Intelligence Scale-IV subtest performance across age.

    PubMed

    Wisdom, Nick M; Mignogna, Joseph; Collins, Robert L

    2012-06-01

    Normal Wechsler Adult Intelligence Scale (WAIS)-IV performance relative to average normative scores alone can be an oversimplification, as this fails to recognize the disparate subtest heterogeneity that occurs with increasing age. The purpose of the present study is to characterize the patterns of raw score change and associated variability on WAIS-IV subtests across age groupings. Raw WAIS-IV subtest means and standard deviations for each age group were tabulated from the WAIS-IV normative manual, along with the coefficient of variation (CV), a measure of score dispersion calculated by dividing the standard deviation by the mean and multiplying by 100. The CV further conveys the magnitude of variability represented by each standard deviation. Raw mean scores predictably decreased across age groups. Increased variability was noted in Perceptual Reasoning and Processing Speed Index subtests, as Block Design, Matrix Reasoning, Picture Completion, Symbol Search, and Coding had CV percentage increases ranging from 56% to 98%. In contrast, Working Memory and Verbal Comprehension subtests were more homogeneous, with Digit Span, Comprehension, Information, and Similarities showing CV increases ranging from 32% to 43%. Little change in the CV was noted on the Cancellation, Arithmetic, Letter/Number Sequencing, Figure Weights, Visual Puzzles, and Vocabulary subtests (<14%). A thorough understanding of age-related subtest variability will help to identify test limitations as well as further our understanding of cognitive domains which remain relatively steady versus those which steadily decline.

  5. Comparison of Matrix Frequency-Doubling Technology (FDT) Perimetry with the SWEDISH Interactive Thresholding Algorithm (SITA) Standard Automated Perimetry (SAP) in Mild Glaucoma.

    PubMed

    Doozandeh, Azadeh; Irandoost, Farnoosh; Mirzajani, Ali; Yazdani, Shahin; Pakravan, Mohammad; Esfandiari, Hamed

    2017-01-01

    This study aimed to compare second-generation frequency-doubling technology (FDT) perimetry with standard automated perimetry (SAP) in mild glaucoma. Forty-seven eyes of 47 participants who had mild visual field defect by SAP were included in this study. All participants were examined using SITA 24-2 (SITA-SAP) and matrix 24-2 (Matrix-FDT). The correlations of global indices and the number of defects on pattern deviation (PD) plots were determined. Agreement between the two tests regarding the stage of visual field damage was assessed. Pearson's correlation, intra-cluster comparison, paired t-test, and 95% limit of agreement were calculated. Although there was no significant difference between global indices, the agreement between the two devices regarding the global indices was weak (the limit of agreement for mean deviation was -6.08 to 6.08 and that for pattern standard deviation was -4.42 to 3.42). The agreement between SITA-SAP and Matrix-FDT regarding the Glaucoma Hemifield Test (GHT), the number of defective points in each quadrant, and the staging of visual field damage was also weak. Because the correlation between SITA-SAP and Matrix-FDT regarding global indices, GHT, number of defective points, and stage of visual field damage in mild glaucoma is weak, Matrix-FDT cannot be used interchangeably with SITA-SAP in the early stages of glaucoma.

  6. A Priori Subgrid Analysis of Temporal Mixing Layers with Evaporating Droplets

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    1999-01-01

    Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using three sets of results from a Direct Numerical Simulation (DNS) database, with Reynolds numbers (based on initial vorticity thickness) as large as 600 and with droplet mass loadings as large as 0.5. In the DNS, the gas phase is computed using a Eulerian formulation, with Lagrangian droplet tracking. The Large Eddy Simulation (LES) equations corresponding to the DNS are first derived, and key assumptions in deriving them are first confirmed by computing the terms using the DNS database. Since LES of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be the sum of the filtered variables and a correction based on the filtered standard deviation; this correction is then computed from the Subgrid Scale (SGS) standard deviation. This model predicts the unfiltered variables at droplet locations considerably better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: the Smagorinsky approach, the Gradient model and the Scale-Similarity formulation. When the proportionality constant inherent in the SGS models is properly calculated, the Gradient and Scale-Similarity methods give results in excellent agreement with the DNS.

  7. Improving the quality of child anthropometry: Manual anthropometry in the Body Imaging for Nutritional Assessment Study (BINA).

    PubMed

    Conkle, Joel; Ramakrishnan, Usha; Flores-Ayala, Rafael; Suchdev, Parminder S; Martorell, Reynaldo

    2017-01-01

    Anthropometric data collected in clinics and surveys are often inaccurate and unreliable due to measurement error. The Body Imaging for Nutritional Assessment Study (BINA) evaluated the ability of 3D imaging to correctly measure stature, head circumference (HC) and arm circumference (MUAC) for children under five years of age. This paper describes the protocol for and the quality of manual anthropometric measurements in BINA, a study conducted in 2016-17 in Atlanta, USA. Quality was evaluated by examining digit preference, biological plausibility of z-scores, z-score standard deviations, and reliability. We calculated z-scores and analyzed plausibility based on the 2006 WHO Child Growth Standards (CGS). For reliability, we calculated intra- and inter-observer Technical Error of Measurement (TEM) and the Intraclass Correlation Coefficient (ICC). We found low digit preference; 99.6% of z-scores were biologically plausible, with z-score standard deviations ranging from 0.92 to 1.07. Total TEM (in centimeters) was 0.40 for stature, 0.28 for HC, and 0.25 for MUAC. ICC ranged from 0.99 to 1.00. The quality of manual measurements in BINA was high and similar to that of the anthropometric data used to develop the WHO CGS. We attributed the high quality to rigorous training, motivated and competent field staff, reduction of non-measurement error through the use of technology, and reduction of measurement error through adequate monitoring and supervision. Our anthropometry measurement protocol, which builds on and improves upon the protocol used for the WHO CGS, can be used to improve anthropometric data quality. The discussion illustrates the need to standardize anthropometric data quality assessment, and we conclude that BINA can provide a valuable evaluation of 3D imaging for child anthropometry because there is comparison to gold-standard, manual measurements.
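
    The intra-observer TEM reported here has a standard closed form for duplicate measurements, TEM = sqrt(sum(d_i^2) / (2n)), where d_i is the difference between the two measurements of subject i. A short sketch with invented repeat measurements (not the study's data):

        # Technical error of measurement for duplicate measurements.
        import numpy as np

        def tem(first, second):
            d = np.asarray(first, float) - np.asarray(second, float)
            return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

        stature_1 = [98.2, 101.5, 95.4, 103.0, 99.1]  # cm, first measurement
        stature_2 = [98.6, 101.2, 95.9, 102.7, 99.3]  # cm, repeat measurement
        print(tem(stature_1, stature_2))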

  8. Low-Lying π* Resonances of Standard and Rare DNA and RNA Bases Studied by the Projected CAP/SAC-CI Method.

    PubMed

    Kanazawa, Yuki; Ehara, Masahiro; Sommerfeld, Thomas

    2016-03-10

    Low-lying π* resonance states of DNA and RNA bases have been investigated by the recently developed projected complex absorbing potential (CAP)/symmetry-adapted cluster-configuration interaction (SAC-CI) method using a smooth Voronoi potential as CAP. In spite of the challenging CAP applications to higher resonance states of molecules of this size, the present calculations reproduce resonance positions observed by electron transmission spectra (ETS) provided the anticipated deviations due to vibronic effects and limited basis sets are taken into account. Moreover, for the standard nucleobases, the calculated positions and widths qualitatively agree with those obtained in previous electron scattering calculations. For guanine, both keto and enol forms were examined, and the calculated values of the keto form agree clearly better with the experimental findings. In addition to these standard bases, three modified forms of cytosine, which serve as epigenetic or biomarkers, were investigated: formylcytosine, methylcytosine, and chlorocytosine. Last, a strong correlation between the computed positions and the observed ETS values is demonstrated, clearly suggesting that the present computational protocol should be useful for predicting the π* resonances of congeners of DNA and RNA bases.

  9. Evaluation of a simplified gross thrust calculation technique using two prototype F100 turbofan engines in an altitude facility

    NASA Technical Reports Server (NTRS)

    Kurtenbach, F. J.

    1979-01-01

    The technique, which relies on afterburner duct pressure measurements and empirical corrections to an ideal one-dimensional flow analysis to determine thrust, is presented. A comparison of the calculated and facility-measured thrust values is reported, and the simplified model is compared with the engine manufacturer's gas generator model. The evaluation was conducted over a range of Mach numbers from 0.80 to 2.00 and at altitudes from 4020 meters to 15,240 meters. The effects of variations in inlet total temperature from standard day conditions were explored, and engine conditions were varied from those normally scheduled for flight. The technique was found to be accurate to a two-standard-deviation level of 2.89 percent, with accuracy a strong function of afterburner duct pressure difference.

  10. Strong evidence for ZZ production in pp̄ collisions at √s = 1.96 TeV.

    PubMed

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'Orso, M; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Giagu, S; Giakoumopolou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; Kwang, S; 
Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Moon, C S; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, 
B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S; Group, R C

    2008-05-23

    We report the first evidence of Z boson pair production at a hadron collider with a significance exceeding 4 standard deviations. This result is based on a data sample corresponding to 1.9 fb(-1) of integrated luminosity from pp̄ collisions at √s = 1.96 TeV collected with the Collider Detector at Fermilab (CDF II). In the lll'l' channel, we observe three ZZ candidates with an expected background of 0.096 (+0.092/-0.063) events. In the llnunu channel, we use a leading-order calculation of the relative ZZ and WW event probabilities to discriminate between signal and background. In the combination of the lll'l' and llnunu channels, we observe an excess of events with a probability of 5.1 x 10(-6) of being due to the expected background. This corresponds to a significance of 4.4 standard deviations. The measured cross section is sigma(pp̄ → ZZ) = 1.4 (+0.7/-0.6) (stat+syst) pb, consistent with the standard model expectation.
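
    As a quick numerical check of the quoted figures, a one-sided background-only p-value can be converted into a significance in standard deviations with the inverse Gaussian tail. The sketch below (generic Python/SciPy, not code from the CDF analysis) reproduces the 4.4 standard deviations quoted above from p = 5.1 x 10(-6).

    ```python
    # Convert a one-sided background-only p-value into a significance in
    # standard deviations, and back. Generic check, not the CDF analysis code.
    from scipy.stats import norm

    p_value = 5.1e-6                      # probability quoted in the abstract
    significance = norm.isf(p_value)      # inverse survival function of N(0, 1)
    print(f"significance = {significance:.1f} standard deviations")  # ~4.4

    print(f"p-value at 4.4 sigma = {norm.sf(4.4):.1e}")              # ~5e-6
    ```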

  11. Comparing Standard Deviation Effects across Contexts

    ERIC Educational Resources Information Center

    Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.

    2017-01-01

    Studies using test scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…
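
    The caveat is easy to see numerically: the same raw score gain corresponds to different effect sizes in "standard deviation units" when the spread of scores differs between contexts. A minimal illustration with hypothetical numbers (not from the article):

    ```python
    # Hypothetical: the same 5-point raw gain on a test, reported in student
    # standard deviation units, in two contexts with different score spreads.
    raw_gain = 5.0
    sd_context_a = 10.0   # homogeneous student population
    sd_context_b = 25.0   # heterogeneous student population

    print(raw_gain / sd_context_a)  # 0.5 SD units: looks like a large effect
    print(raw_gain / sd_context_b)  # 0.2 SD units: the same gain looks modest
    ```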

  12. Geometric Verification of Dynamic Wave Arc Delivery With the Vero System Using Orthogonal X-ray Fluoroscopic Imaging.

    PubMed

    Burghelea, Manuela; Verellen, Dirk; Poels, Kenneth; Gevaert, Thierry; Depuydt, Tom; Tournel, Koen; Hung, Cecilia; Simon, Viorica; Hiraoka, Masahiro; de Ridder, Mark

    2015-07-15

    The purpose of this study was to define an independent verification method based on on-board orthogonal fluoroscopy to determine the geometric accuracy of synchronized gantry-ring (G/R) rotations during dynamic wave arc (DWA) delivery available on the Vero system. A verification method for DWA was developed to calculate gantry-ring (G/R) positional information from ball-bearing positions retrieved from fluoroscopic images of a cubic phantom acquired during DWA delivery. Different noncoplanar trajectories were generated in order to investigate the influence of path complexity on delivery accuracy. The G/R positions detected from the fluoroscopy images (DetPositions) were benchmarked against the G/R angulations retrieved from the control points (CP) of the DWA RT plan and the DWA log files recorded by the treatment console during DWA delivery (LogActed). The G/R rotational accuracy was quantified as the mean absolute deviation ± standard deviation. The maximum G/R absolute deviation was calculated as the maximum 3-dimensional distance between the CP and the closest DetPositions. In the CP versus DetPositions comparison, an overall mean G/R deviation of 0.13°/0.16° ± 0.16°/0.16° was obtained, with a maximum G/R deviation of 0.6°/0.2°. For the LogActed versus DetPositions evaluation, the overall mean deviation was 0.08°/0.15° ± 0.10°/0.10°, with a maximum G/R deviation of 0.3°/0.4°. The largest decoupled deviations registered for gantry and ring were 0.6° and 0.4°, respectively. No directional dependence was observed between clockwise and counterclockwise rotations. Doubling the dose doubled the number of detected points around each CP and reduced the angular deviation in all cases. An independent geometric quality assurance approach was developed for DWA delivery verification and was successfully applied to diverse trajectories. Results showed that the Vero system is capable of following complex G/R trajectories, with maximum deviations during DWA below 0.6°. Copyright © 2015 Elsevier Inc. All rights reserved.
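
    A sketch of how the reported trajectory metrics can be computed, assuming planned control points and detected positions are available as arrays of (gantry, ring) angles; the arrays and the closest-point pairing rule here are illustrative, not the authors' implementation:

    ```python
    import numpy as np

    # Illustrative only: planned control points (cp) and detected positions
    # (det) as (gantry, ring) angles in degrees; values are made up.
    cp = np.array([[0.0, 0.0], [10.0, 5.0], [20.0, 10.0]])
    det = np.array([[0.1, -0.1], [9.8, 5.2], [10.1, 4.9], [20.3, 9.8]])

    # Pair each control point with its closest detected position.
    dists = np.linalg.norm(cp[:, None, :] - det[None, :, :], axis=2)
    closest = det[np.argmin(dists, axis=1)]

    # Mean absolute deviation +/- standard deviation, per axis (G/R).
    abs_dev = np.abs(cp - closest)
    print("mean G/R deviation:", abs_dev.mean(axis=0), "+/-", abs_dev.std(axis=0))

    # Maximum deviation: largest distance from a CP to its closest detection.
    print("max deviation:", dists.min(axis=1).max().round(2))
    ```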

  13. SU-F-P-56: On a New Approach to Reconstruct the Patient Dose From Phantom Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bangtsson, E; Vries, W de

    Purpose: The development of complex radiation treatment schemes emphasizes the need for advanced QA analysis methods to ensure patient safety. One such tool is the Delta4 DVH Anatomy software, where the patient dose is reconstructed from phantom measurements. Deviations in the measured dose are transferred to the patient anatomy and their clinical impact is evaluated in situ. Results from the original algorithm revealed weaknesses that may introduce artefacts in the reconstructed dose. These can lead to false negatives or obscure the effects of minor dose deviations from delivery failures. Here, we present results from a new patient dose reconstruction algorithm. Methods: The main steps of the new algorithm are: (1) the dose delivered to a phantom is measured in a number of detector positions. (2) The measured dose is compared to an internally calculated dose distribution evaluated in said positions. The so-obtained dose difference is (3) used to calculate an energy fluence difference. This entity is (4) used as input to a patient dose correction calculation routine. Finally, the patient dose is reconstructed by adding said patient dose correction to the planned patient dose. The internal dose calculations in steps (2) and (4) are based on the Pencil Beam algorithm. Results: The new patient dose reconstruction algorithm has been tested on a number of patients, and the standard metrics of dose deviation (DDev), distance to agreement (DTA) and Gamma index are improved when compared to the original algorithm. In one case the Gamma index (3%/3 mm) pass rate increases from 72.9% to 96.6%. Conclusion: The patient dose reconstruction algorithm is improved. This leads to a reduction in non-physical artefacts in the reconstructed patient dose. As a consequence, the possibility of detecting deviations in the dose that is delivered to the patient is improved. An increase in Gamma index for the PTV can be seen. The corresponding author is an employee of ScandiDos.
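
    The Gamma index used in the results combines a dose-difference tolerance (3%) with a distance-to-agreement tolerance (3 mm) into a single pass/fail score per point. Below is a minimal 1D global-gamma sketch; the Delta4 DVH Anatomy implementation itself works on 3D patient dose and is not public, so this is only an illustration of the metric:

    ```python
    import numpy as np

    def gamma_pass_rate(ref, ev, positions, dose_tol=0.03, dist_tol=3.0):
        """Global 1D gamma: fraction of reference points with gamma <= 1.
        `ref` and `ev` are dose arrays sampled at `positions` (mm); the dose
        tolerance is taken relative to the reference maximum."""
        dd_norm = dose_tol * ref.max()
        gammas = []
        for x, d in zip(positions, ref):
            dose_term = ((ev - d) / dd_norm) ** 2           # dose-difference part
            dist_term = ((positions - x) / dist_tol) ** 2   # DTA part
            gammas.append(np.sqrt((dose_term + dist_term).min()))
        return (np.asarray(gammas) <= 1.0).mean() * 100.0

    x = np.linspace(0.0, 100.0, 201)               # positions in mm
    ref = 2.0 * np.exp(-((x - 50.0) / 20.0) ** 2)  # toy reference profile, Gy
    ev = 2.02 * np.exp(-((x - 51.0) / 20.0) ** 2)  # shifted, rescaled evaluation
    print(f"gamma pass rate: {gamma_pass_rate(ref, ev, x):.1f}%")
    ```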

  14. Blood pressure variability in man: its relation to high blood pressure, age and baroreflex sensitivity.

    PubMed

    Mancia, G; Ferrari, A; Gregorini, L; Parati, G; Pomidossi, G; Bertinieri, G; Grassi, G; Zanchetti, A

    1980-12-01

    1. Intra-arterial blood pressure and heart rate were recorded for 24 h in ambulant hospitalized patients of variable age who had normal blood pressure or essential hypertension. Mean 24 h values, standard deviations and variation coefficient were obtained as the averages of values separately analysed for 48 consecutive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for heart rate were smaller. 3. In hypertensive subjects standard deviation for mean arterial pressure was greater than in normotensive subjects of similar ages, but this was not the case for variation coefficient, which was slightly smaller in the former than in the latter group. Normotensive and hypertensive subjects showed no difference in standard deviation and variation coefficient for heart rate. 4. In both normotensive and hypertensive subjects standard deviation and even more so variation coefficient were slightly or not related to arterial baroreflex sensitivity as measured by various methods (phenylephrine, neck suction etc.). 5. It is concluded that blood pressure variability increases and heart rate variability decreases with age, but that changes in variability are not so obvious in hypertension. Also, differences in variability among subjects are only marginally explained by differences in baroreflex function.
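
    The variability indices described in point 1 reduce the 24-h record to 48 consecutive half-hour periods and average the per-period statistics. A sketch with synthetic data (the sampling rate and pressure values are assumptions, not the study's):

    ```python
    import numpy as np

    # Synthetic 24-h mean arterial pressure record, one sample per minute.
    rng = np.random.default_rng(0)
    map_samples = 95.0 + 8.0 * rng.standard_normal(48 * 60)

    # Split into 48 consecutive half-hour periods and average the per-period
    # statistics, as described in point 1 of the abstract.
    segments = map_samples.reshape(48, 60)
    period_means = segments.mean(axis=1)
    period_sds = segments.std(axis=1, ddof=1)
    period_cvs = 100.0 * period_sds / period_means

    print("24-h mean (mmHg):", period_means.mean().round(1))
    print("SD (average of half-hour SDs):", period_sds.mean().round(1))
    print("CV (average of half-hour CVs, %):", period_cvs.mean().round(1))
    ```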

  15. The Effects of Data Gaps on the Calculated Monthly Mean Maximum and Minimum Temperatures in the Continental United States: A Spatial and Temporal Study.

    NASA Astrophysics Data System (ADS)

    Stooksbury, David E.; Idso, Craig D.; Hubbard, Kenneth G.

    1999-05-01

    Gaps in otherwise regularly scheduled observations are often referred to as missing data. This paper explores the spatial and temporal impacts that data gaps in the recorded daily maximum and minimum temperatures have on the calculated monthly mean maximum and minimum temperatures. For this analysis 138 climate stations from the United States Historical Climatology Network Daily Temperature and Precipitation Data set were selected. The selected stations had no missing maximum or minimum temperature values during the period 1951-80. The monthly mean maximum and minimum temperatures were calculated for each station for each month. For each month 1-10 consecutive days of data from each station were randomly removed. This was performed 30 times for each simulated gap period. The spatial and temporal impacts of the 1-10-day data gaps were compared. The influence of data gaps is most pronounced in the continental regions during the winter and least pronounced in the southeast during the summer. In the north central plains, 10-day data gaps during January produce a standard deviation value greater than 2°C about the 'true' mean. In the southeast, 10-day data gaps in July produce a standard deviation value less than 0.5°C about the mean. The results of this study will be of value in climate variability and climate trend research as well as climate assessment and impact studies.
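
    The gap simulation itself is straightforward to reproduce in outline: remove a random run of 1-10 consecutive days, recompute the monthly mean, repeat 30 times, and take the standard deviation of the recomputed means. A sketch with synthetic daily temperatures (not the USHCN data or the authors' code):

    ```python
    import numpy as np

    def gap_spread(daily_temps, gap_len, trials=30, rng=None):
        """SD of monthly means recomputed after randomly removing `gap_len`
        consecutive days, repeated `trials` times (sketch, not the authors'
        code)."""
        rng = rng or np.random.default_rng()
        n = len(daily_temps)
        means = []
        for _ in range(trials):
            start = rng.integers(0, n - gap_len + 1)    # gap start day
            kept = np.delete(daily_temps, np.arange(start, start + gap_len))
            means.append(kept.mean())
        return np.std(means, ddof=1)

    rng = np.random.default_rng(1)
    january = -5.0 + 6.0 * rng.standard_normal(31)   # synthetic daily maxima, deg C
    for gap in (1, 5, 10):
        print(f"{gap:2d}-day gap: SD about the mean = {gap_spread(january, gap, rng=rng):.2f} C")
    ```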

  16. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 7: July

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-07-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of July. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  17. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 10: October

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-07-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of October. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  18. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 3: March

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-11-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of March. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation for levels 1000 through 30 mb; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  19. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 2: February

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-09-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of February. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  20. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 4: April

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-07-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of April. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  1. Precision analysis for standard deviation measurements of immobile single fluorescent molecule images.

    PubMed

    DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M

    2010-03-29

    Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.

  2. Computer-Based Algorithmic Determination of Muscle Movement Onset Using M-Mode Ultrasonography

    DTIC Science & Technology

    2017-05-01

    contraction images were analyzed visually and with three different classes of algorithms: pixel standard deviation (SD), high-pass filter and Teager Kaiser... Linear relationships and agreements between computed and visual muscle onset were calculated. The top algorithms were high-pass filtered with a 30 Hz... suggest that computer-automated determination using high-pass filtering is a potential objective alternative to visual determination in human

  3. Evaluation of an attenuation correction method for PET/MR imaging of the head based on substitute CT images.

    PubMed

    Larsson, Anne; Johansson, Adam; Axelsson, Jan; Nyholm, Tufve; Asklund, Thomas; Riklund, Katrine; Karlsson, Mikael

    2013-02-01

    The aim of this study was to evaluate MR-based attenuation correction of PET emission data of the head, based on a previously described technique that calculates substitute CT (sCT) images from a set of MR images. Images from eight patients, examined with (18)F-FLT PET/CT and MRI, were included. sCT images were calculated and co-registered to the corresponding CT images, and transferred to the PET/CT scanner for reconstruction. The new reconstructions were then compared with the originals. The effect of replacing bone with soft tissue in the sCT images was also evaluated. The average relative difference between the sCT-corrected PET images and the CT-corrected PET images was 1.6% for the head and 1.9% for the brain. The average standard deviations of the relative differences within the head were relatively high, at 13.2%, primarily because of large differences in the nasal septa region. For the brain, the average standard deviation was lower, 4.1%. The global average difference in the head when replacing bone with soft tissue was 11%. The method presented here has a high rate of accuracy, but high-precision quantitative imaging of the nasal septa region is not possible at the moment.

  4. Robust tissue-air volume segmentation of MR images based on the statistics of phase and magnitude: Its applications in the display of susceptibility-weighted imaging of the brain.

    PubMed

    Du, Yiping P; Jin, Zhaoyang

    2009-10-01

    To develop a robust algorithm for tissue-air segmentation in magnetic resonance imaging (MRI) using the statistics of phase and magnitude of the images. A multivariate measure based on the statistics of phase and magnitude was constructed for tissue-air volume segmentation. The standard deviation of first-order phase difference and the standard deviation of magnitude were calculated in a 3 x 3 x 3 kernel in the image domain. To improve differentiation accuracy, the uniformity of phase distribution in the kernel was also calculated and linear background phase introduced by field inhomogeneity was corrected. The effectiveness of the proposed volume segmentation technique was compared to a conventional approach that uses the magnitude data alone. The proposed algorithm was shown to be more effective and robust in volume segmentation in both synthetic phantom and susceptibility-weighted images of human brain. Using our proposed volume segmentation method, veins in the peripheral regions of the brain were well depicted in the minimum-intensity projection of the susceptibility-weighted images. Using the additional statistics of phase, tissue-air volume segmentation can be substantially improved compared to that using the statistics of magnitude data alone. (c) 2009 Wiley-Liss, Inc.
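
    The core quantities here, voxelwise standard deviations of magnitude and of first-order phase difference in a 3 x 3 x 3 kernel, can be computed with box filters. The sketch below uses synthetic data and an arbitrary median threshold for the final mask; the paper's actual multivariate measure, phase-uniformity term and background phase correction are not reproduced:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_std(volume, size=3):
        """Voxelwise standard deviation in a size^3 kernel, via the identity
        var = E[x^2] - E[x]^2 computed with box filters."""
        mean = uniform_filter(volume, size)
        mean_sq = uniform_filter(volume * volume, size)
        return np.sqrt(np.clip(mean_sq - mean * mean, 0.0, None))

    # Synthetic complex MR data; real magnitude/phase volumes would come
    # from the scanner reconstruction.
    rng = np.random.default_rng(2)
    magnitude = rng.random((32, 32, 32))
    phase = rng.uniform(-np.pi, np.pi, (32, 32, 32))

    sd_magnitude = local_std(magnitude)
    # First-order phase difference along one axis, then its local SD.
    phase_diff = np.diff(phase, axis=0, append=phase[-1:])
    sd_phase = local_std(phase_diff)

    # Toy bivariate rule: voxels noisy in both channels are labelled air.
    air_mask = (sd_magnitude > np.median(sd_magnitude)) & (sd_phase > np.median(sd_phase))
    print("air fraction:", air_mask.mean().round(2))
    ```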

  5. Standard deviation of the mean and other time series properties of voltages measured with a digital lock-in amplifier

    NASA Astrophysics Data System (ADS)

    Witt, Thomas J.; Fletcher, N. E.

    2010-10-01

    We investigate some statistical properties of ac voltages from a white noise source measured with a digital lock-in amplifier equipped with finite impulse response output filters which introduce correlations between successive voltage values. The main goal of this work is to propose simple solutions to account for correlations when calculating the standard deviation of the mean (SDM) for a sequence of measurement data acquired using such an instrument. The problem is treated by time series analysis based on a moving average model of the filtering process. Theoretical expressions are derived for the power spectral density (PSD), the autocorrelation function, the equivalent noise bandwidth and the Allan variance; all are related to the SDM. At most three parameters suffice to specify any of the above quantities: the filter time constant, the time between successive measurements (both set by the lock-in operator) and the PSD of the white noise input, h0. Our white noise source is a resistor so that the PSD is easily calculated; there are no free parameters. Theoretical expressions are checked against their respective sample estimates and, with the exception of two of the bandwidth estimates, agreement to within 11% or better is found.
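
    A textbook version of the correction the paper motivates: for serially correlated data, the variance of the mean is inflated by the autocorrelation, var(mean) = (s²/N)[1 + 2 Σ_k (1 - k/N) ρ_k]. The sketch below applies this estimator to white noise passed through a short FIR (moving-average) filter, mimicking a lock-in output filter; it illustrates the effect, not the paper's derivation:

    ```python
    import numpy as np

    def sdm_correlated(x, max_lag):
        """Standard deviation of the mean for serially correlated data:
        var(mean) = (s^2/N) * (1 + 2*sum_k (1 - k/N) * rho_k)."""
        x = np.asarray(x, float)
        n = len(x)
        d = x - x.mean()
        var = d @ d / n
        rho = np.array([(d[:-k] @ d[k:]) / (n * var) for k in range(1, max_lag + 1)])
        factor = 1.0 + 2.0 * np.sum((1.0 - np.arange(1, max_lag + 1) / n) * rho)
        return np.sqrt(var / n * max(factor, 0.0))

    # White noise through an 8-tap moving-average (FIR) filter, mimicking a
    # lock-in output filter that correlates successive readings.
    rng = np.random.default_rng(3)
    filtered = np.convolve(rng.standard_normal(20000), np.ones(8) / 8, mode="valid")

    naive = filtered.std(ddof=1) / np.sqrt(len(filtered))
    print(f"naive SDM {naive:.2e} vs corrected SDM {sdm_correlated(filtered, 50):.2e}")
    ```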

  6. SU-F-T-177: Impacts of Gantry Angle Dependent Scanning Beam Properties for Proton Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Y; Clasie, B; Lu, H

    Purpose: In pencil beam scanning (PBS), the delivered spot MU, position and size are slightly different at different gantry angles. We investigated the level of delivery uncertainty at different gantry angles through a log file analysis. Methods: 34 PBS fields covering the full 360-degree gantry angle range were collected retrospectively from 28 patients treated at our institution. All fields were delivered at zero gantry angle and the prescribed gantry angle, and measured at isocenter with the MatriXX 2D array detector at the prescribed gantry angle. The machine log files were analyzed to extract the delivered MU per spot and the beam position from the strip ionization chambers in the treatment nozzle. The beam size was separately measured as a function of gantry angle and beam energy. Using this information, the dose was calculated in a water phantom at both gantry angles and compared to the measurement using the 3D γ-index at 2mm/2%. Results: The spot-by-spot difference between the beam position in the log files from the delivery at the two gantry angles has a mean of 0.3 and 0.4 mm and a standard deviation of 0.6 and 0.7 mm for the x and y directions, respectively. Similarly, the spot-by-spot difference between the MU in the log files from the delivery at the two gantry angles has a mean of 0.01% and a standard deviation of 0.7%. These small deviations lead to an excellent agreement in dose calculations, with an average γ pass rate for all fields of approximately 99.7%. When each calculation is compared to the measurement, a high correlation in γ was also found. Conclusion: Using machine log files, we verified that deviations in PBS beam delivery at different gantry angles from the planned spot positions and MU are sufficiently small. This study brings us one step closer to simplifying our patient-specific QA.

  7. Comparison of ambulatory blood pressure reference standards in children evaluated for hypertension.

    PubMed

    Jones, Deborah P; Richey, Phyllis A; Alpert, Bruce S

    2009-06-01

    The purpose of this study was to systematically compare methods for standardization of blood pressure levels obtained by ambulatory blood pressure monitoring (ABPM) in a group of 111 children studied at our institution. Blood pressure indices, blood pressure loads and standard deviation scores were calculated using the original ABPM and the modified reference standards. Bland-Altman plots and kappa statistics for the level of agreement were generated. Overall, the agreement between the two methods was excellent; however, approximately 5% of children were classified differently by one as compared with the other method. Depending on which version of the German Working Group's reference standards is used for interpretation of ABPM data, the classification of the individual as having hypertension or normal blood pressure may vary.

  8. Comparison of ambulatory blood pressure reference standards in children evaluated for hypertension

    PubMed Central

    Jones, Deborah P.; Richey, Phyllis A.; Alpert, Bruce S.

    2009-01-01

    Objective The purpose of this study was to systematically compare methods for standardization of blood pressure levels obtained by ambulatory blood pressure monitoring (ABPM) in a group of 111 children studied at our institution. Methods Blood pressure indices, blood pressure loads and standard deviation scores were calculated using the original ABPM and the modified reference standards. Bland-Altman plots and kappa statistics for the level of agreement were generated. Results Overall, the agreement between the two methods was excellent; however, approximately 5% of children were classified differently by one as compared with the other method. Conclusion Depending on which version of the German Working Group's reference standards is used for interpretation of ABPM data, the classification of the individual as having hypertension or normal blood pressure may vary. PMID:19433980

  9. SU-F-J-47: Inherent Uncertainty in the Positional Shifts Determined by a Volumetric Cone Beam Imaging System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giri, U; Ganesh, T; Saini, V

    2016-06-15

    Purpose: To quantify the inherent uncertainty associated with a volumetric imaging system in its determination of positional shifts. Methods: The study was performed on an Elekta Axesse™ linac's XVI cone beam computed tomography (CBCT) system. A CT image data set of a PentaGuide phantom was used as the reference image by placing the isocenter at the center of the phantom. The phantom was placed arbitrarily on the couch close to the isocenter and CBCT images were obtained. The CBCT dataset was matched with the reference image using XVI software and the shifts were determined in 6 dimensions. Without moving the phantom, this process was repeated 20 times consecutively within 30 minutes on a single day. Mean shifts and their standard deviations in all 6 dimensions were determined for all the 20 instances of imaging. For any given day, the first set of shifts obtained was kept as reference and the deviations of the subsequent 19 sets from the reference set were scored. Mean differences and their standard deviations were determined. In this way, data were obtained for 30 consecutive working days. Results: Tabulating the mean deviations and their standard deviations observed on each day for the 30 measurement days, systematic and random errors in the determination of shifts by the XVI software were calculated. The systematic errors were found to be 0.03, 0.04 and 0.03 mm while random errors were 0.05, 0.06 and 0.06 mm in the lateral, craniocaudal and anterior-posterior directions, respectively. For rotational shifts, the systematic errors were 0.02°, 0.03° and 0.03° and random errors were 0.06°, 0.05° and 0.05° in the pitch, roll and yaw directions, respectively. Conclusion: The inherent uncertainties in every image guidance system should be assessed and baseline values established at the time of its commissioning. These should be periodically tested as part of the QA protocol.
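
    The abstract does not spell out its estimators, but a common convention for such repeated daily measurements is to take the standard deviation of the daily mean deviations as the systematic error and the root mean square of the daily standard deviations as the random error. A sketch under that assumption, with synthetic deviations:

    ```python
    import numpy as np

    # deviations[d, i]: deviation of repeat i from the day's reference set on
    # day d -- 30 days x 19 repeats, matching the protocol above. Synthetic:
    # a per-day offset (systematic part) plus per-repeat scatter (random part).
    rng = np.random.default_rng(4)
    deviations = 0.03 * rng.standard_normal((30, 1)) + 0.06 * rng.standard_normal((30, 19))

    daily_means = deviations.mean(axis=1)
    daily_sds = deviations.std(axis=1, ddof=1)

    systematic_error = daily_means.std(ddof=1)       # spread of the daily means
    random_error = np.sqrt((daily_sds ** 2).mean())  # RMS of the daily SDs

    print(f"systematic {systematic_error:.3f} mm, random {random_error:.3f} mm")
    ```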

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazelaar, Colien, E-mail: c.hazelaar@vumc.nl; Dahele, Max; Mostafavi, Hassan

    Purpose: Spine stereotactic body radiation therapy (SBRT) requires highly accurate positioning. We report our experience with markerless template matching and triangulation of kilovoltage images routinely acquired during spine SBRT, to determine spine position. Methods and Materials: Kilovoltage images, continuously acquired at 7, 11 or 15 frames/s during volumetric modulated spine SBRT of 18 patients, consisting of 93 fluoroscopy datasets (1 dataset/arc), were analyzed off-line. Four patients were immobilized in a head/neck mask, 14 had no immobilization. Two-dimensional (2D) templates were created for each gantry angle from planning computed tomography data and registered to prefiltered kilovoltage images to determine 2D shifts between actual and planned spine position. Registrations were considered valid if the normalized cross correlation score was ≥0.15. Multiple registrations were triangulated to determine 3D position. For each spine position dataset, average positional offset and standard deviation were calculated. To verify the accuracy and precision of the technique, mean positional offset and standard deviation for twenty stationary phantom datasets with different baseline shifts were measured. Results: For the phantom, average standard deviations were 0.18 mm for left-right (LR), 0.17 mm for superior-inferior (SI), and 0.23 mm for the anterior-posterior (AP) direction. Maximum difference in average detected and applied shift was 0.09 mm. For the 93 clinical datasets, the percentage of valid matched frames was, on average, 90.7% (range: 49.9-96.1%) per dataset. Average standard deviations for all datasets were 0.28, 0.19, and 0.28 mm for LR, SI, and AP, respectively. Spine position offsets were, on average, −0.05 (range: −1.58 to 2.18), −0.04 (range: −3.56 to 0.82), and −0.03 mm (range: −1.16 to 1.51), respectively. Average positional deviation was <1 mm in all directions in 92% of the arcs. Conclusions: Template matching and triangulation using kilovoltage images acquired during irradiation allows spine position detection with submillimeter accuracy at subsecond intervals. Although the majority of patients were not immobilized, most vertebrae were stable at the sub-mm level during spine SBRT delivery.

  11. Glaucoma progression detection with frequency doubling technology (FDT) compared to standard automated perimetry (SAP) in the Groningen Longitudinal Glaucoma Study.

    PubMed

    Wesselink, Christiaan; Jansonius, Nomdo M

    2017-09-01

    To determine the usefulness of frequency doubling perimetry (FDT) for progression detection in glaucoma, compared to standard automated perimetry (SAP). Data were used from 150 eyes of 150 glaucoma patients from the Groningen Longitudinal Glaucoma Study. After baseline, SAP was performed approximately yearly; FDT every other year. First and last visit had to contain both tests. Using linear regression, progression velocities were calculated for SAP (Humphrey Field Analyzer) mean deviation (MD) and for FDT MD and the number of test locations with a total deviation probability p < 0.01 (TD). Progression velocity tertiles were determined and eyes were classified as slowly, intermediately, or fast progressing for both techniques. Comparisons between SAP and FDT classifications were made using a Mantel-Haenszel chi-square test. Longitudinal signal-to-noise ratios (LSNRs) were calculated, per patient and per technique, defined as progression velocity divided by the standard deviation of the residuals. Mean (SD) follow-up was 6.4 (1.7) years; median (interquartile range [IQR]) baseline SAP MD was -6.6 (-14.2 to -3.6) dB. On average, 8.2 and 4.5 tests were performed for SAP and FDT, respectively. Median (IQR) MD slope was -0.16 (-0.46 to +0.02) dB/year for SAP and -0.05 (-0.39 to +0.17) dB/year for FDT. Mantel-Haenszel chi-squares of SAP MD vs FDT MD and TD were 12.5 (p < 0.001) and 15.8 (p < 0.001), respectively. LSNRs for SAP MD (median -0.17 yr(-1)) were better than those for FDT MD (-0.04 yr(-1); p = 0.010). FDT may be a useful technique for monitoring glaucoma progression in patients who cannot perform SAP reliably. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
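
    The longitudinal signal-to-noise ratio used here has a simple recipe: fit a straight line to the series, then divide the slope (progression velocity) by the standard deviation of the residuals. A sketch with a hypothetical MD series:

    ```python
    import numpy as np

    def lsnr(times, md_values):
        """Longitudinal signal-to-noise ratio: linear-regression slope
        (progression velocity) divided by the SD of the residuals."""
        slope, intercept = np.polyfit(times, md_values, 1)
        residuals = md_values - (slope * times + intercept)
        return slope / residuals.std(ddof=1)

    years = np.array([0.0, 1.0, 2.1, 3.0, 4.2, 5.1, 6.0])      # visit times
    md = np.array([-6.5, -6.8, -6.7, -7.1, -7.2, -7.6, -7.5])  # dB, made up
    print(f"LSNR = {lsnr(years, md):.2f} per year")
    ```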

  12. Technical Note: Radiation properties of tissue- and water-equivalent materials formulated using the stoichiometric analysis method in charged particle therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yohannes, Indra; Vasiliniuc, Stefan; Hild, Sebastian

    Purpose: Five tissue- and water-equivalent materials (TEMs) mimicking ICRU real tissues have been formulated using a previously established stoichiometric analysis method (SAM) to be applied in charged particle therapy. The purpose of this study was an experimental verification of the TEMs-SAM against charged particle beam measurements and for different computed tomography (CT) scanners. The potential of the TEMs-SAM to be employed in dosimetry was also investigated. Methods: Experimental verification with three CT scanners was carried out to validate the calculated Hounsfield units (HUs) of the TEMs. Water-equivalent path lengths (WEPLs) of the TEMs for proton (106.8 MeV/u), helium (107.93 MeV/u), and carbon (200.3 MeV/u) ions were measured to be compared with the computed relative stopping powers. HU calibration curves were also generated. Results: Differences between the measured HUs of the TEMs and the calculated HUs of the ICRU real tissues for all CT scanners were smaller than 4 HU except for the skeletal tissues, which deviated up to 21 HU. The measured WEPLs verified the calculated WEPLs of the TEMs (maximum deviation was 0.17 mm) and were in good agreement with the calculated WEPLs of the ICRU real tissues (maximum deviation was 0.23 mm). Moreover, the relative stopping powers converted from the measured WEPLs differed by less than 0.8% and 1.3% from the calculated values of the SAM and the ICRU, respectively. Regarding the relative nonelastic cross section per unit of volume for 200 MeV protons, the ICRU real tissues were generally well represented by the TEMs except for adipose, which differed by 3.8%. Further, the HU calibration curves yielded a mean and standard deviation of the errors not larger than 0.5% and 1.9%, respectively. Conclusions: The results of this investigation implied the potential of the TEMs formulated using the SAM to be employed for both beam dosimetry and HU calibration in charged particle therapy.

  13. Duality linking standard and tachyon scalar field cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avelino, P. P.; Bazeia, D.; Losano, L.

    2010-09-15

    In this work we investigate the duality linking standard and tachyon scalar field homogeneous and isotropic cosmologies in N+1 dimensions. We determine the transformation between standard and tachyon scalar fields and between their associated potentials, corresponding to the same background evolution. We show that, in general, the duality is broken at a perturbative level, when deviations from a homogeneous and isotropic background are taken into account. However, we find that for slow-rolling fields the duality is still preserved at a linear level. We illustrate our results with specific examples of cosmological relevance, where the correspondence between scalar and tachyon scalar field models can be calculated explicitly.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carver, R; Popple, R; Benhabib, S

    Purpose: To evaluate the accuracy of electron dose distribution calculated by the Varian Eclipse electron Monte Carlo (eMC) algorithm for use with recent commercially available bolus electron conformal therapy (ECT). Methods: eMC-calculated electron dose distributions for bolus ECT have been compared to those previously measured for cylindrical phantoms (retromolar trigone and nose), whose axial cross sections were based on the mid-PTV CT anatomy for each site. The phantoms consisted of SR4 muscle substitute, SR4 bone substitute, and air. The bolus ECT treatment plans were imported into the Eclipse treatment planning system and calculated using the maximum allowable histories (2×10(9)), resulting in a statistical error of <0.2%. Smoothing was not used for these calculations. Differences between eMC-calculated and measured dose distributions were evaluated in terms of absolute dose difference as well as distance to agreement (DTA). Results: Results from the eMC for the retromolar trigone phantom showed 89% (41/46) of dose points within 3% dose difference or 3 mm DTA. There was an average dose difference of −0.12% with a standard deviation of 2.56%. Results for the nose phantom showed 95% (54/57) of dose points within 3% dose difference or 3 mm DTA. There was an average dose difference of 1.12% with a standard deviation of 3.03%. Dose calculation times for the retromolar trigone and nose treatment plans were 15 min and 22 min, respectively, using 16 processors (Intel Xeon E5-2690, 2.9 GHz) on a Varian Eclipse framework agent server (FAS). Results of this study were consistent with those previously reported for accuracy of the eMC electron dose algorithm and for the .decimal, Inc. pencil beam redefinition algorithm used to plan the bolus. Conclusion: These results show that the accuracy of the Eclipse eMC algorithm is suitable for clinical implementation of bolus ECT.

  15. Exploring Students' Conceptions of the Standard Deviation

    ERIC Educational Resources Information Center

    delMas, Robert; Liu, Yan

    2005-01-01

    This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…

  16. 7 CFR 801.4 - Tolerances for dockage testers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...

  17. 7 CFR 801.4 - Tolerances for dockage testers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...

  18. 7 CFR 801.4 - Tolerances for dockage testers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...

  19. 7 CFR 801.4 - Tolerances for dockage testers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...

  20. Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation

    ERIC Educational Resources Information Center

    Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann

    2017-01-01

    This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…

  1. 7 CFR 801.4 - Tolerances for dockage testers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...

  2. 7 CFR 801.6 - Tolerances for moisture meters.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat Mid ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat High ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat...

  3. TU-G-BRD-04: A Round Robin Dosimetry Intercomparison of Gamma Stereotactic Radiosurgery Calibration Protocols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drzymala, R; Alvarez, P; Bednarz, G

    2015-06-15

    Purpose: The purpose of this multi-institutional study was to compare two new gamma stereotactic radiosurgery (GSRS) dosimetry protocols to existing calibration methods. The ultimate goal was to guide AAPM Task Group 178 in recommending a standard GSRS dosimetry protocol. Methods: Nine centers (ten GSRS units) participated in the study. Each institution made eight sets of dose rate measurements: six with two different ionization chambers in three different 160mm-diameter spherical phantoms (ABS plastic, Solid Water and liquid water), and two using the same ionization chambers with a custom in-air positioning jig. Absolute dose rates were calculated using a newly proposed formalism by the IAEA working group for small and non-standard radiation fields and with a new air-kerma based protocol. The new IAEA protocol requires an in-water ionization chamber calibration and uses previously reported Monte-Carlo generated factors to account for the material composition of the phantom, the type of ionization chamber, and the unique GSRS beam configuration. Results obtained with the new dose calibration protocols were compared to dose rates determined by the AAPM TG-21 and TG-51 protocols, with TG-21 considered as the standard. Results: Averaged over all institutions, ionization chambers and phantoms, the mean dose rate determined with the new IAEA protocol relative to that determined with TG-21 in the ABS phantom was 1.000 with a standard deviation of 0.008. For TG-51, the average ratio was 0.991 with a standard deviation of 0.013, and for the new in-air formalism it was 1.008 with a standard deviation of 0.012. Conclusion: Average results with both of the new protocols agreed with TG-21 to within one standard deviation. TG-51, which does not take into account the unique GSRS beam configuration or phantom material, was not expected to perform as well as the new protocols. The new IAEA protocol showed remarkably good agreement with TG-21. Conflict of Interests: Paula Petti, Josef Novotny, Gennady Neyman and Steve Goetsch are consultants for Elekta Instrument A/B; Elekta Instrument AB, PTW Freiburg GmbH, Standard Imaging, Inc., and The Phantom Laboratory, Inc. loaned equipment for use in these experiments; The University of Wisconsin Accredited Dosimetry Calibration Laboratory provided calibration services.

  4. Concentration sensor based on a tilted fiber Bragg grating for anions monitoring

    NASA Astrophysics Data System (ADS)

    Melo, L. B.; Rodrigues, J. M. M.; Farinha, A. S. F.; Marques, C. A.; Bilro, L.; Alberto, N.; Tomé, J. P. C.; Nogueira, R. N.

    2014-08-01

    The ubiquity and importance of anions in many crucial roles account for the current high interest in the design and preparation of effective sensors for these species. Therefore, a tilted fiber Bragg grating sensor was fabricated to investigate individual detection of different anion concentrations in ethyl acetate, namely acetate, fluoride and chloride. The influence of the refractive index on the transmission spectrum of a tilted fiber Bragg grating was determined by developing a new demodulation method. This is based on the calculation of the standard deviation between the cladding modes of the transmission spectrum and its smoothing function. The standard deviation method was used to monitor concentrations of different anions. The sensor resolution obtained for the anions acetate, fluoride and chloride is 79 × 10(-5) mol/dm3, 119 × 10(-5) mol/dm3 and 78 × 10(-5) mol/dm3, respectively, within the concentration range of (39-396) × 10(-5) mol/dm3.
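
    The demodulation reduces each transmission spectrum to a single number: the standard deviation of the difference between the cladding-mode region and a smoothed copy of itself. The sketch below assumes a moving-average smoother and a synthetic spectrum; the abstract does not specify the smoothing function, so both are illustrative:

    ```python
    import numpy as np

    def sd_demodulation(transmission, window=51):
        """Reduce a TFBG transmission spectrum to a scalar: the SD of
        (spectrum - smoothed spectrum) over the cladding-mode region. The
        moving-average smoother is an assumption."""
        kernel = np.ones(window) / window
        smooth = np.convolve(transmission, kernel, mode="same")
        return (transmission - smooth).std(ddof=1)

    # Synthetic spectra: cladding-mode ripples whose depth shrinks as the
    # surrounding refractive index (anion concentration) rises.
    wavelength = np.linspace(1520.0, 1560.0, 4000)   # nm
    for depth in (1.0, 0.8, 0.6):
        spectrum = -depth * np.abs(np.sin(2.0 * np.pi * wavelength / 0.4))
        print(f"mode depth {depth}: SD metric = {sd_demodulation(spectrum):.4f}")
    ```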

  5. Real-time combustion control and diagnostics sensor-pressure oscillation monitor

    DOEpatents

    Chorpening, Benjamin T [Morgantown, WV; Thornton, Jimmy [Morgantown, WV; Huckaby, E David [Morgantown, WV; Richards, George A [Morgantown, WV

    2009-07-14

    An apparatus and method for monitoring and controlling the combustion process in a combustion system to determine the amplitude and/or frequencies of dynamic pressure oscillations during combustion. An electrode in communication with the combustion system senses hydrocarbon ions and/or electrons produced by the combustion process and calibration apparatus calibrates the relationship between the standard deviation of the current in the electrode and the amplitudes of the dynamic pressure oscillations by applying a substantially constant voltage between the electrode and ground resulting in a current in the electrode and by varying one or more of (1) the flow rate of the fuel, (2) the flow rate of the oxidant, (3) the equivalence ratio, (4) the acoustic tuning of the combustion system, and (5) the fuel distribution in the combustion chamber such that the amplitudes of the dynamic pressure oscillations in the combustion chamber are calculated as a function of the standard deviation of the electrode current. Thereafter, the supply of fuel and/or oxidant is varied to modify the dynamic pressure oscillations.

  6. Magneto-acupuncture stimuli effects on ultraweak photon emission from hands of healthy persons.

    PubMed

    Park, Sang-Hyun; Kim, Jungdae; Koo, Tae-Hoi

    2009-03-01

    We investigated ultraweak photon emissions from the hands of 45 healthy persons before and after magneto-acupuncture stimuli. Photon emissions were measured by using two photomultiplier tubes in the UV and visible spectral range. Several statistical quantities, such as the average intensity, the standard deviation, the delta-value, and the degree of asymmetry, were calculated from the measurements of photon emissions before and after the magneto-acupuncture stimuli. The distributions of the quantities from the measurements with the magneto-acupuncture stimuli were more differentiable than those of the groups without any stimuli and with the sham magnets. We also analyzed the magneto-acupuncture stimuli effects on the photon emissions through a year-long measurement for two subjects. Compared with the group study above, individual differences between the subjects increased the before-and-after differences in photon emission under magnetic stimuli. The changes in the ultraweak photon emission rates of the hands for the magnet group were detected conclusively in the averages and standard deviations.

  7. A Data Filter for Identifying Steady-State Operating Points in Engine Flight Data for Condition Monitoring Applications

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Litt, Jonathan S.

    2010-01-01

    This paper presents an algorithm that automatically identifies and extracts steady-state engine operating points from engine flight data. It calculates the mean and standard deviation of select parameters contained in the incoming flight data stream. If the standard deviation of the data falls below defined constraints, the engine is assumed to be at a steady-state operating point, and the mean measurement data at that point are archived for subsequent condition monitoring purposes. The fundamental design of the steady-state data filter is completely generic and applicable for any dynamic system. Additional domain-specific logic constraints are applied to reduce data outliers and variance within the collected steady-state data. The filter is designed for on-line real-time processing of streaming data as opposed to post-processing of the data in batch mode. Results of applying the steady-state data filter to recorded helicopter engine flight data are shown, demonstrating its utility for engine condition monitoring applications.
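
    The core of such a filter can be sketched in a few lines: slide a window over the incoming stream, and wherever the window standard deviation falls below a threshold, archive the window mean as a steady-state point. The window length and threshold below are illustrative, not the values used in the paper, and the batch loop stands in for the on-line implementation:

    ```python
    import numpy as np

    def steady_state_points(signal, window=50, sd_limit=0.5):
        """Where the standard deviation of a data window falls below
        `sd_limit`, declare a steady-state point and archive the window mean.
        Batch version of the streaming filter; thresholds are illustrative."""
        points = []
        for start in range(0, len(signal) - window + 1, window):
            chunk = signal[start:start + window]
            if chunk.std(ddof=1) < sd_limit:
                points.append((start, round(chunk.mean(), 2)))
        return points

    rng = np.random.default_rng(5)
    transient = np.linspace(0.0, 20.0, 300) + rng.standard_normal(300)  # spool-up
    steady = 20.0 + 0.2 * rng.standard_normal(300)                      # steady state
    print(steady_state_points(np.concatenate([transient, steady])))
    ```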

  8. Multiscale analysis of the CMB temperature derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcos-Caballero, A.; Martínez-González, E.; Vielva, P., E-mail: marcos@ifca.unican.es, E-mail: martinez@ifca.unican.es, E-mail: vielva@ifca.unican.es

    2017-02-01

    We study the Planck CMB temperature at different scales through its derivatives up to second order, which allows one to characterize the local shape and isotropy of the field. The problem of having an incomplete sky in the calculation and statistical characterization of the derivatives is addressed in the paper. The analysis confirms the existence of a low variance in the CMB at large scales, which is also noticeable in the derivatives. Moreover, deviations from the standard model in the gradient, curvature and the eccentricity tensor are studied in terms of extreme values on the data. As expected, the Cold Spot is detected as one of the most prominent peaks in terms of curvature, but additionally, when the information of the temperature and its Laplacian are combined, another feature with similar probability at the scale of 10° is also observed. However, the p-values of these two deviations increase above 6% when they are referred to the variance calculated from the theoretical fiducial model, indicating that these deviations can be associated with the low variance anomaly. Finally, an estimator of the directional anisotropy for spinorial quantities is introduced, which is applied to the spinors derived from the field derivatives. An anisotropic direction whose probability is <1% is detected in the eccentricity tensor.

  9. Uncertainty quantification of CO₂ saturation estimated from electrical resistance tomography data at the Cranfield site

    DOE PAGES

    Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...

    2014-06-03

    A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data, and the resulting resistivity tomograph was used as the prior information for nonlinear inversion of the time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. Then the mean and standard deviation of CO₂ saturation were calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6%, with a corresponding maximum saturation of 30%, for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation, but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data, and on inversion constraints such as temporal roughness. Five hundred realizations, requiring 3.5 h on a single 12-core node, were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances, while the Markov Chain Monte Carlo (MCMC) stochastic inverse approach may take days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
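
    Stripped of the ERT-specific solver, the parametric bootstrap loop is simple: perturb the data with the estimated observational noise, re-run the inversion, and take the mean and standard deviation over the ensemble. In the sketch below a linear least-squares "inversion" stands in for the nonlinear ERT inversion; everything is illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def invert(data, forward):
        """Stand-in for the nonlinear ERT inversion: linear least squares."""
        model, *_ = np.linalg.lstsq(forward, data, rcond=None)
        return model

    forward = rng.random((40, 10))   # toy forward operator
    data = forward @ rng.random(10)  # noise-free "field" data
    noise_sd = 0.03 * np.abs(data)   # errors estimated from reciprocal readings

    # Parametric bootstrap: resample the data with the estimated noise,
    # re-invert, and summarize the ensemble (500 realizations, as in the study).
    samples = np.array([
        invert(data + noise_sd * rng.standard_normal(data.shape), forward)
        for _ in range(500)
    ])
    print("bootstrap mean:", samples.mean(axis=0).round(2))
    print("bootstrap standard deviation:", samples.std(axis=0).round(3))
    ```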

  10. Observer Evaluation of a Metal Artifact Reduction Algorithm Applied to Head and Neck Cone Beam Computed Tomographic Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korpics, Mark; Surucu, Murat; Mescioglu, Ibrahim

    Purpose and Objectives: To quantify, through an observer study, the reduction in metal artifacts on cone beam computed tomographic (CBCT) images using a projection-interpolation algorithm, on images containing metal artifacts from dental fillings and implants in patients treated for head and neck (H&N) cancer. Methods and Materials: An interpolation-substitution algorithm was applied to H&N CBCT images containing metal artifacts from dental fillings and implants. Image quality with respect to metal artifacts was evaluated subjectively and objectively. First, 6 independent radiation oncologists were asked to rank randomly sorted blinded images (before and after metal artifact reduction) using a 5-point rating scale (1 = severe artifacts; 5 = no artifacts). Second, the standard deviation of different regions of interest (ROI) within each image was calculated and compared with the mean rating scores. Results: The interpolation-substitution technique successfully reduced metal artifacts in 70% of the cases. From a total of 60 images from 15 H&N cancer patients undergoing image guided radiation therapy, the mean rating score on the uncorrected images was 2.3 ± 1.1, versus 3.3 ± 1.0 for the corrected images. The mean difference in ranking score between uncorrected and corrected images was 1.0 (95% confidence interval: 0.9-1.2, P<.05). The standard deviation of each ROI significantly decreased after artifact reduction (P<.01). Moreover, a negative correlation between the mean rating score for each image and the standard deviation of the oral cavity and bilateral cheeks was observed. Conclusion: The interpolation-substitution algorithm is efficient and effective for reducing metal artifacts caused by dental fillings and implants on CBCT images, as demonstrated by the statistically significant increase in observer image quality ranking and by the decrease in ROI standard deviation between uncorrected and corrected images.

  11. Ambulatory blood pressure monitoring-derived short-term blood pressure variability in primary hyperparathyroidism.

    PubMed

    Concistrè, A; Grillo, A; La Torre, G; Carretta, R; Fabris, B; Petramala, L; Marinelli, C; Rebellato, A; Fallo, F; Letizia, C

    2018-04-01

    Primary hyperparathyroidism is associated with a cluster of cardiovascular manifestations, including hypertension, leading to increased cardiovascular risk. The aim of our study was to investigate ambulatory blood pressure monitoring-derived short-term blood pressure variability in patients with primary hyperparathyroidism, in comparison with patients with essential hypertension and normotensive controls. Twenty-five patients with primary hyperparathyroidism (7 normotensive, 18 hypertensive) underwent ambulatory blood pressure monitoring at diagnosis, and fifteen of them were re-evaluated after parathyroidectomy. Short-term blood pressure variability was derived from ambulatory blood pressure monitoring and calculated as the following: (1) standard deviation of 24-h, daytime and night-time BP; (2) the average of daytime and night-time standard deviation, weighted for the duration of the day and night periods (24-h "weighted" standard deviation of BP); (3) average real variability, i.e., the average of the absolute differences between all consecutive BP measurements. Baseline data of normotensive and essential hypertension patients were matched for age, sex, BMI and 24-h ambulatory blood pressure monitoring values with normotensive and hypertensive primary hyperparathyroidism patients, respectively. Normotensive primary hyperparathyroidism patients showed a 24-h weighted standard deviation (P < 0.01) and average real variability (P < 0.05) of systolic blood pressure higher than those of 12 normotensive controls. The 24-h average real variability of systolic BP, as well as serum calcium and parathyroid hormone levels, were reduced in operated patients (P < 0.001). A positive correlation of serum calcium and parathyroid hormone with the 24-h average real variability of systolic BP was observed in the entire primary hyperparathyroidism patient group (P = 0.04 and P = 0.02, respectively). Systolic blood pressure variability is increased in normotensive patients with primary hyperparathyroidism and is reduced by parathyroidectomy, and may potentially represent an additional cardiovascular risk factor in this disease.
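
    The three indices have closed forms that are easy to state in code: the plain standard deviation, the duration-weighted average of the day and night standard deviations, and the average real variability (mean absolute difference between consecutive readings). A sketch with synthetic systolic readings and assumed day/night durations:

    ```python
    import numpy as np

    def weighted_sd(day_values, night_values, day_hours=16.0, night_hours=8.0):
        """24-h 'weighted' SD: day and night SDs averaged with weights
        proportional to the duration of each period (durations assumed)."""
        sd_day = day_values.std(ddof=1)
        sd_night = night_values.std(ddof=1)
        return (sd_day * day_hours + sd_night * night_hours) / (day_hours + night_hours)

    def average_real_variability(values):
        """ARV: mean absolute difference between consecutive readings."""
        return np.abs(np.diff(values)).mean()

    rng = np.random.default_rng(7)
    day_sbp = 135.0 + 10.0 * rng.standard_normal(64)   # synthetic systolic readings
    night_sbp = 120.0 + 7.0 * rng.standard_normal(32)

    print("24-h weighted SD:", round(weighted_sd(day_sbp, night_sbp), 1))
    print("24-h ARV:", round(average_real_variability(np.concatenate([day_sbp, night_sbp])), 1))
    ```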

  12. Global, Hemispheric, and Zonal Temperature Deviations Derived From a 63-Station Radiosonde Network

    DOE Data Explorer

    Angell, J. K. [NOAA, Air Resources Laboratory

    2011-01-01

    Surface temperatures and thickness-derived temperatures from a 63-station, globally distributed radiosonde network have been used to estimate global, hemispheric, and zonal annual and seasonal temperature deviations. Most of the temperature values used were column-mean temperatures, obtained from the differences in height (thickness) between constant-pressure surfaces at individual radiosonde stations. The pressure-height data before 1980 were obtained from published values in Monthly Climatic Data for the World. Between 1980 and 1990, Angell used data from both the Climatic Data for the World and the Global Telecommunications System (GTS) Network received at the National Meteorological Center. Between 1990 and 1995, the data were obtained only from GTS, and since 1995 the data have been obtained from National Center for Atmospheric Research files. The data are evaluated as deviations from the mean based on the interval 1958-1977. The station deviations have been averaged (with equal weighting) to obtain annual and seasonal temperature deviations for the globe, the Northern and Southern Hemispheres, and the following latitudinal zones: North (60° N-90° N) and South (60° S-90° S) Polar; North (30° N-60° N) and South (30° S-60° S) Temperate; North (10° N-30° N) and South (10° S-30° S) Subtropical; Tropical (30° S-30° N); and Equatorial (10° S-10° N). The seasonal calculations are for the standard meteorological seasons (i.e., winter is defined as December, January, and February; spring is March, April, and May, etc.) and the annual calculations are for December through the following November (i.e., for the four meteorological seasons). For greater detail, see Angell and Korshover (1983) and Angell (1988, 1991).
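
    The averaging step described above (equal weighting of station deviations within each zone) is straightforward; a minimal sketch with invented station values:

        import numpy as np

        # Hypothetical station temperature deviations (°C) from the 1958-1977 mean,
        # grouped by latitudinal zone; equal weighting within each zone as above.
        zones = {
            "north_temperate": [0.31, 0.12, -0.05, 0.22],
            "tropical":        [0.08, 0.15, 0.03],
        }
        zonal_deviation = {zone: float(np.mean(v)) for zone, v in zones.items()}
        # Hemispheric and global deviations are built the same way from their stations.
        print(zonal_deviation)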

  13. Calibrant-Free Analyte Quantitation via a Variable Velocity Flow Cell.

    PubMed

    Beck, Jason G; Skuratovsky, Aleksander; Granger, Michael C; Porter, Marc D

    2017-01-17

    In this paper, we describe a novel method for analyte quantitation that does not rely on calibrants, internal standards, or calibration curves but, rather, leverages the relationship between disparate and predictable surface-directed analyte flux to an array of sensing addresses and a measured resultant signal. To reduce this concept to practice, we fabricated two flow cells such that the mean linear fluid velocity, U, was varied systematically over an array of electrodes positioned along the flow axis. This resulted in a predictable variation of the address-directed flux of a redox analyte, ferrocenedimethanol (FDM). The resultant limiting currents, measured at a series of these electrodes and accurately described by a convective-diffusive transport model, provided a means to calculate an "unknown" concentration without the use of calibrants, internal standards, or a calibration curve. Furthermore, the experiment and concentration calculation take only minutes to perform. Deviation in calculated FDM concentrations from true values was minimized to less than 0.5% when empirically derived values of U were employed.
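
    The quantitation step can be sketched as follows: if the transport model predicts, for each electrode at its local velocity, the proportionality k between limiting current and concentration, the unknown concentration follows from a least-squares fit with no calibration curve. The numbers below are placeholders and the one-line model is a schematic reading of the approach, not the authors' code.

        import numpy as np

        # Model-predicted sensitivities k_i (A per mol/L) for each electrode, and
        # the limiting currents measured there (A); all values hypothetical.
        k_model = np.array([2.1e-4, 3.4e-4, 4.2e-4, 5.0e-4])
        i_meas  = np.array([2.15e-7, 3.38e-7, 4.25e-7, 4.96e-7])

        # Least-squares slope through the origin for i = k * C gives the concentration.
        C_hat = np.dot(k_model, i_meas) / np.dot(k_model, k_model)
        print(f"estimated concentration: {C_hat:.3e} mol/L")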

  14. MicroRNA Inhibitors as Anticancer Therapies

    DTIC Science & Technology

    2007-08-17

    Promoter activity was determined by co-transfection of the pGL3 promoter reporter (400 ng/well) with pRLSV40 (Renilla luciferase, Promega) (100 ng/well) into…performed in triplicate and standard deviations calculated. Activity was defined as Firefly/Renilla ratio, normalized to control vector transfection. For…was defined as Firefly/Renilla ratio normalized to activity in the absence of transfected E2F1. 5-RACE Mapping of Transcript—HEK-293 cells were tran…

  15. The Data from Aeromechanics Test and Analytics -- Management and Analysis Package (DATAMAP). Volume I. User’s Manual.

    DTIC Science & Technology

    1980-12-01

    …to sound pressure level in decibels assuming a frequency of 1000 Hz. The perceived noisiness values are derived from a formula specified in… [Table-of-contents fragments: 6.1.16 Perceived Noise Level Analysis; 6.1.17 Acoustic Weighting Networks; 6.2 Derivations; octave, third-octave and perceived-noise-level band analyses; basic statistical analyses: mean, variance, standard deviation calculation.]

  16. Visualizing the Sample Standard Deviation

    ERIC Educational Resources Information Center

    Sarkar, Jyotirmoy; Rashid, Mamunur

    2017-01-01

    The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
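
    The identity behind this interpretation is easy to check numerically: the sample variance equals twice the mean square of the half deviations (x_i - x_j)/2 taken over all unordered pairs. A short verification in Python:

        import numpy as np
        from itertools import combinations

        x = np.array([4.0, 7.0, 1.0, 9.0, 5.0])          # any small sample
        s2 = np.var(x, ddof=1)                           # usual sample variance

        half_sq = [((a - b) / 2) ** 2 for a, b in combinations(x, 2)]
        s2_pairwise = 2 * np.mean(half_sq)               # twice the mean squared half deviation

        assert np.isclose(s2, s2_pairwise)
        print(np.sqrt(s2_pairwise))                      # the sample SD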

  17. Three-dimensional weight-accumulation algorithm for generating multiple excitation spots in fast optical stimulation

    NASA Astrophysics Data System (ADS)

    Takiguchi, Yu; Toyoda, Haruyoshi

    2017-11-01

    We report here an algorithm for calculating a hologram to be employed in a high-access speed microscope for observing sensory-driven synaptic activity across all inputs to single living neurons in an intact cerebral cortex. The system is based on holographic multi-beam generation using a two-dimensional phase-only spatial light modulator to excite multiple locations in three dimensions with a single hologram. The hologram was calculated with a three-dimensional weighted iterative Fourier transform method using the Ewald sphere restriction to increase the calculation speed. Our algorithm achieved good uniformity of three dimensionally generated excitation spots; the standard deviation of the spot intensities was reduced by a factor of two compared with a conventional algorithm.
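
    The weighting idea can be illustrated with a 2-D weighted iterative Fourier transform (Gerchberg-Saxton-type) loop, in which spot weights are raised whenever a spot comes out too dim. This sketch omits the paper's 3-D treatment and Ewald-sphere restriction, and all names and sizes are illustrative.

        import numpy as np

        def weighted_gs(target_spots, shape=(256, 256), iters=30):
            # Phase-only hologram whose far field concentrates light on
            # target_spots with equalized intensities (2-D sketch only).
            rng = np.random.default_rng(0)
            phase = rng.uniform(0, 2 * np.pi, shape)
            weights = np.ones(len(target_spots))
            rows, cols = np.array(target_spots).T
            for _ in range(iters):
                far = np.fft.fft2(np.exp(1j * phase))           # SLM plane -> focal plane
                amp = np.abs(far[rows, cols])
                weights *= amp.mean() / np.maximum(amp, 1e-12)  # boost the weak spots
                constraint = np.zeros(shape, complex)
                constraint[rows, cols] = weights * np.exp(1j * np.angle(far[rows, cols]))
                phase = np.angle(np.fft.ifft2(constraint))      # back-propagate, keep phase only
            return phase

        hologram = weighted_gs([(64, 64), (128, 200), (200, 100)])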

  19. Modeling the gas-phase thermochemistry of organosulfur compounds.

    PubMed

    Vandeputte, Aäron G; Sabbe, Maarten K; Reyniers, Marie-Françoise; Marin, Guy B

    2011-06-27

    Key to understanding the involvement of organosulfur compounds in a variety of radical chemistries, such as atmospheric chemistry, polymerization, pyrolysis, and so forth, is knowledge of their thermochemical properties. For organosulfur compounds and radicals, thermochemical data are, however, much less well documented than for hydrocarbons. The traditional recourse to the Benson group additivity method offers no solace, since only a very limited number of group additivity values (GAVs) is available. In this work, CBS-QB3 calculations augmented with 1D hindered rotor corrections for 122 organosulfur compounds and 45 organosulfur radicals were used to derive 93 Benson group additivity values, 18 ring-strain corrections, 2 non-nearest-neighbor interactions, and 3 resonance corrections for standard enthalpies of formation, standard molar entropies, and heat capacities of organosulfur compounds and organosulfur radicals. The reported GAVs are consistent with previously reported GAVs for hydrocarbons and hydrocarbon radicals and include 77 contributions, 26 of which are radical contributions, which, to the best of our knowledge, have not been reported before. The GAVs allow one to estimate the standard enthalpies of formation at 298 K, the standard entropies at 298 K, and standard heat capacities in the temperature range 300-1500 K for a large set of organosulfur compounds, that is, thiols, thioketones, polysulfides, alkyl sulfides, thials, dithioates, and cyclic sulfur compounds. For a validation set of 26 organosulfur compounds, the mean absolute deviation between experimental and group-additively modeled enthalpies of formation amounts to 1.9 kJ mol(-1). For an additional set of 14 organosulfur compounds, it was shown that the mean absolute deviations between calculated and group-additively modeled standard entropies and heat capacities are restricted to 4 and 2 J mol(-1) K(-1), respectively. As an alternative to Benson GAVs, 26 new hydrogen-bond increments are reported, which can also be useful for the prediction of radical thermochemistry. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
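
    Group additivity itself is a simple sum over the groups a molecule contains. The sketch below shows the bookkeeping only; the group names follow Benson notation, but the numerical values are placeholders, not GAVs from this paper.

        # ΔfH°(298 K) by Benson group additivity; GAV values below are hypothetical.
        GAV_H298 = {                    # kJ/mol
            "C-(C)(H)3": -42.2,
            "C-(C)(S)(H)2": -22.0,
            "S-(C)(H)": 19.0,
        }
        # Example: ethanethiol CH3-CH2-SH contains one of each group above.
        groups = ["C-(C)(H)3", "C-(C)(S)(H)2", "S-(C)(H)"]
        dHf298 = sum(GAV_H298[g] for g in groups)
        print(f"estimated ΔfH°(298 K) ≈ {dHf298:.1f} kJ/mol")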

  20. A new algorithm to reduce noise in microscopy images implemented with a simple program in python.

    PubMed

    Papini, Alessio

    2012-03-01

    All microscopical images contain noise, which increases as the instrument (e.g., a transmission electron microscope or a light microscope) approaches its resolution limit. Many methods are available to reduce noise. One of the most commonly used is image averaging. We propose here to use the mode of pixel values. Simple Python programs process a given number of images recorded consecutively from the same subject. The programs calculate the mode of the pixel values in a given position (a, b). The result is a new image containing in (a, b) the mode of the values. Therefore, the final pixel value corresponds to one read in at least two of the pixels in position (a, b). The application of the program to a set of images obtained by applying salt-and-pepper noise and GIMP hurl noise with 10-90% standard deviation showed that the mode performs better than averaging with three to eight images. The data suggest that the mode would be more efficient (in the sense of a lower number of recorded images to process to reduce noise below a given limit) for a lower number of total noisy pixels and a high standard deviation (as with impulse noise and salt-and-pepper noise), while averaging would be more efficient when the number of varying pixels is high and the standard deviation is low, as in many cases of images affected by Gaussian noise. The two methods may be used serially. Copyright © 2011 Wiley Periodicals, Inc.
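
    The pixel-mode idea takes only a few lines of NumPy/SciPy. This is a minimal re-implementation sketch, not the author's distributed program:

        import numpy as np
        from scipy import stats

        def mode_denoise(frames):
            # Pixel-wise mode across a stack of registered frames (n, H, W) of
            # integer pixel values; ties are resolved by scipy's convention.
            mode, _count = stats.mode(np.asarray(frames), axis=0, keepdims=False)
            return mode

        # Demo: 5 copies of a constant image, 20% of pixels hit by impulse noise.
        rng = np.random.default_rng(1)
        clean = np.full((64, 64), 120, dtype=np.uint8)
        stack = np.repeat(clean[None], 5, axis=0)
        hit = rng.random(stack.shape) < 0.2
        stack[hit] = rng.integers(0, 256, hit.sum())
        restored = mode_denoise(stack)
        print(np.mean(restored == clean))    # fraction of pixels recovered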

  1. Longitudinal analysis on human cervical tissue using optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Gan, Yu; Yao, Wang; Myers, Kristin M.; Vink, Joy-Sarah Y.; Wapner, Ronald J.; Hendon, Christine P.

    2017-02-01

    The uterine cervical collagen fiber network is vital to normal cervical function in pregnancy. Previously, we presented an orientation estimation method to enable dispersion analysis on a single axial slice of human cervical tissue obtained from the upper half of the cervix using optical coherence tomography (OCT). How the collagen fiber network structure changes from the internal os (top of the cervix, which meets the uterus) to the external os (bottom of the cervix, which extends into the vagina) remains unknown due to the depth penetration limitations of OCT. To establish a collagen fiber directionality "map" of the entire cervix, we imaged serial axial slices of human NP (n=11) and PG (n=2) cervical tissue obtained from the internal to the external os, using Institutional Review Board approved protocols at Columbia University Medical Center. Each slice was divided into four quadrants. In each quadrant, we stitched multiple overlapped OCT volumes and analyzed the en face images that were parallel to the surface. A pixel-wise directionality map was generated. We analyzed the fiber trend by measuring the mean angles and quantified dispersion by calculating the standard deviation of the fiber direction over a region of 400 μm × 400 μm. For the initial four samples, our analysis confirms a circumferential fiber pattern in the outer region of slices at all depths. We found that the standard deviation close to the internal os did not differ significantly from that close to the external os (p > 0.05), indicating comparable dispersion.

  2. Differences between genomic-based and pedigree-based relationships in a chicken population, as a function of quality control and pedigree links among individuals.

    PubMed

    Wang, H; Misztal, I; Legarra, A

    2014-12-01

    This work studied differences between expected (calculated from pedigree) and realized (genomic, from markers) relationships in a real population, the influence of quality control on these differences, and their fit to current theory. Data included 4940 pure-line chickens across five generations genotyped for 57,636 SNPs. Pedigrees (5762 animals) were available for all five generations, with the pedigree starting in the first one. Three levels of quality control were used. With no quality control, the mean difference between realized and expected relationships for different types of relationships was ≤ 0.04, with standard deviation ≤ 0.10. With strong quality control (call rate ≥ 0.9, parent-progeny conflicts, minor allele frequency and use of only autosomal chromosomes), these numbers reduced to ≤ 0.02 and ≤ 0.04, respectively. While the maximum difference was 1.02 with the complete data, it was only 0.18 with the latest three generations of genotypes (but including all pedigrees). Variation of expected minus realized relationships agreed with theoretical developments and suggests an effective number of loci of 70 for this population. When the pedigree is complete and as deep as the genotypes, the standard deviation of the difference between expected and realized relationships is around 0.04, all categories confounded. Standard deviations of differences larger than 0.10 suggest bad quality control, mistakes in pedigree recording or genotype labelling, or insufficient depth of pedigree. © 2014 Blackwell Verlag GmbH.

  3. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    PubMed

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, implausibly interpreting the priming as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence, the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.

  4. International collaborative study for the calibration of proposed International Standards for thromboplastin, rabbit, plain, and for thromboplastin, recombinant, human, plain.

    PubMed

    van den Besselaar, A M H P; Chantarangkul, V; Angeloni, F; Binder, N B; Byrne, M; Dauer, R; Gudmundsdottir, B R; Jespersen, J; Kitchen, S; Legnani, C; Lindahl, T L; Manning, R A; Martinuzzo, M; Panes, O; Pengo, V; Riddell, A; Subramanian, S; Szederjesi, A; Tantanate, C; Herbel, P; Tripodi, A

    2018-01-01

    Essentials Two candidate International Standards for thromboplastin (coded RBT/16 and rTF/16) are proposed. International Sensitivity Index (ISI) of proposed standards was assessed in a 20-centre study. The mean ISI for RBT/16 was 1.21 with a between-centre coefficient of variation of 4.6%. The mean ISI for rTF/16 was 1.11 with a between-centre coefficient of variation of 5.7%. Background The availability of International Standards for thromboplastin is essential for the calibration of routine reagents and hence the calculation of the International Normalized Ratio (INR). Stocks of the current Fourth International Standards are running low. Candidate replacement materials have been prepared. This article describes the calibration of the proposed Fifth International Standards for thromboplastin, rabbit, plain (coded RBT/16) and for thromboplastin, recombinant, human, plain (coded rTF/16). Methods An international collaborative study was carried out for the assignment of International Sensitivity Indexes (ISIs) to the candidate materials, according to the World Health Organization (WHO) guidelines for thromboplastins and plasma used to control oral anticoagulant therapy with vitamin K antagonists. Results Results were obtained from 20 laboratories. In several cases, deviations from the ISI calibration model were observed, but the average INR deviation attributable to the model was not greater than 10%. Only valid ISI assessments were used to calculate the mean ISI for each candidate. The mean ISI for RBT/16 was 1.21 (between-laboratory coefficient of variation [CV]: 4.6%), and the mean ISI for rTF/16 was 1.11 (between-laboratory CV: 5.7%). Conclusions The between-laboratory variation of the ISI for candidate material RBT/16 was similar to that of the Fourth International Standard (RBT/05), and the between-laboratory variation of the ISI for candidate material rTF/16 was slightly higher than that of the Fourth International Standard (rTF/09). The candidate materials have been accepted by WHO as the Fifth International Standards for thromboplastin, rabbit plain, and thromboplastin, recombinant, human, plain. © 2017 International Society on Thrombosis and Haemostasis.
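
    For context, the assigned ISI enters routine INR calculation as the exponent on the prothrombin-time ratio; a minimal sketch with invented clotting times, using the RBT/16 mean ISI reported above:

        # INR = (patient PT / mean normal PT) ** ISI  (standard WHO calibration model)
        def inr(pt_patient_s, mean_normal_pt_s, isi):
            return (pt_patient_s / mean_normal_pt_s) ** isi

        print(round(inr(28.0, 12.5, 1.21), 2))   # hypothetical prothrombin times in seconds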

  5. Down-Looking Interferometer Study II, Volume I,

    DTIC Science & Technology

    1980-03-01

    …(standard deviation of ΔN)(standard deviation of …) (3), where T′gv is the "reference spectrum", an estimate of the actual spectrum Tgv·Cgv. If |ρ|… According to Eq. (2), Z is the standard deviation of the observed contrast spectral radiance ΔN divided by the effective rms system…

  6. Development of a Smartphone-based reading system for lateral flow immunoassay.

    PubMed

    Lee, Sangdae; Kim, Giyoung; Moon, Jihea

    2014-11-01

    This study was conducted to develop and evaluate the performance of a Smartphone-based reading system for the lateral flow immunoassay (LFIA). The Smartphone-based reading system consists of a Samsung Galaxy S2 Smartphone, a Smartphone application, and a LFIA reader. The LFIA reader is composed of a close-up lens with a focal length up to 30 mm, a white LED light, a lithium polymer battery, and the main body. The Smartphone application for image acquisition and data analysis was developed on the Android platform. The standard curve was obtained by plotting the measured P(T)/P(c) or A(T)/A(c) ratio versus the Salmonella standard concentration. The mean, standard deviation (SD), recovery, and relative standard deviation (RSD) were also calculated using additional experimental results. These data were compared with those obtained from the benchtop LFIA reader. The LOD in both systems was observed at 10(6) CFU/mL. The results show high accuracy and good reproducibility, with an RSD of less than 10% in the range of 10(6) to 10(9) CFU/mL. Owing to the simple structure, good sensitivity, and high accuracy of the Smartphone-based reading system, it can substitute for the benchtop LFIA reader in point-of-care medical diagnostics.
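
    The recovery and relative standard deviation quoted above are computed as in the sketch below; the replicate values are invented for illustration.

        import numpy as np

        measured = np.array([2.1e7, 1.9e7, 2.3e7])    # read-back concentrations, CFU/mL
        spiked = 2.0e7                                # known spiked concentration
        recovery = measured.mean() / spiked * 100             # %
        rsd = measured.std(ddof=1) / measured.mean() * 100    # % RSD
        print(f"recovery = {recovery:.1f}%, RSD = {rsd:.1f}%")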

  7. Influence of atypical retardation pattern on the peripapillary retinal nerve fibre distribution assessed by scanning laser polarimetry and optical coherence tomography.

    PubMed

    Schrems, W A; Laemmer, R; Hoesl, L M; Horn, F K; Mardin, C Y; Kruse, F E; Tornow, R P

    2011-10-01

    To investigate the influence of atypical retardation pattern (ARP) on the distribution of peripapillary retinal nerve fibre layer (RNFL) thickness measured with scanning laser polarimetry in healthy individuals and to compare these results with RNFL thickness from spectral domain optical coherence tomography (OCT) in the same subjects. 120 healthy subjects were investigated in this study. All volunteers received detailed ophthalmological examination, GDx variable corneal compensation (VCC) and Spectralis-OCT. The subjects were divided into four subgroups according to their typical scan score (TSS): very typical with TSS=100, typical with 99 ≥ TSS ≥ 91, less typical with 90 ≥ TSS ≥ 81 and atypical with TSS ≤ 80. Deviations from very typical normal values were calculated for 32 sectors for each group. There was a systematic variation of the RNFL thickness deviation around the optic nerve head in the atypical group for the GDxVCC results. The highest percentage deviation of about 96% appeared temporal with decreasing deviation towards the superior and inferior sectors, and nasal sectors exhibited a deviation of 30%. Percentage deviations from very typical RNFL values decreased with increasing TSS. No systematic variation could be found if the RNFL thickness deviation between different TSS-groups was compared with the OCT results. The ARP has a major impact on the peripapillary RNFL distribution assessed by GDx VCC; thus, the TSS should be included in the standard printout.

  8. Individual case photogrammetric calibration of the Hirschberg Ratio (HR) for corneal light reflection test strabometry.

    PubMed

    Romano, Paul E

    2006-01-01

    The HR (prism diopters [PD] per mm of corneal light reflection test [CLRT] asymmetry for strabometry) varies in humans from 14 to 24 PD/mm, but is totally unpredictable. Photogrammetric HR calibration of each case facilitates acceptable strabometry precision and accuracy. Take 3 flash photos of the patient: with the preferred eye and then the deviating eye fixating straight ahead, and then again with the deviating eye fixating at (to within ±5-10 PD) the strabismic angle on a metric rule (stick) one meter away from the camera lens (where 1 cm = 1 PD). On these 3 photos, make four precise measurements of the position of the CLR with reference to the limbus: in the deviating eye fixating straight ahead and fixating at the angle of deviation. Divide the mm difference in location into the change in the angle of fixation to determine the HR for this patient at this angle. Then determine the CLR position in both the deviating eye and the fixing eye in the straight-ahead primary position picture. Apply the calculated calibrated HR to the asymmetry of the CLRs in primary position to determine the true strabismic deviation. This imaging method ensures accurate Hirschberg CLRT strabometry in each case, determining the deviation in "free space", under conditions of normal binocular viewing, uncontaminated by the artifacts or inaccuracies of other conventional strabometric methods or devices. So performed, the Hirschberg CLRT is the gold standard of strabometry.

  9. Computational aspects of geometric correction data generation in the LANDSAT-D imagery processing

    NASA Technical Reports Server (NTRS)

    Levine, I.

    1981-01-01

    A method is presented for systematic and geodetic correction data calculation. It is based on presentation of image distortions as a sum of nominal distortions and linear effects caused by deviations of the spacecraft position and attitude variables from their nominals. The method may be used for both MSS and TM image data, and it is incorporated into the processing by means of mostly offline calculations. Modeling shows that the maximal errors of the method are of the order of 5 m at the worst point in a frame; the standard deviations of the average errors are less than 0.8 m.

  10. Off disk-center potential field calculations using vector magnetograms

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, P.; Gary, G. Allen

    1989-01-01

    A potential field calculation for off disk-center vector magnetograms that uses all the three components of the measured field is investigated. There is neither any need for interpolation of grid points between the image plane and the heliographic plane nor for an extension or a truncation to a heliographic rectangle. Hence, the method provides the maximum information content from the photospheric field as well as the most consistent potential field independent of the viewing angle. The introduction of polarimetric noise produces a less tolerant extrapolation procedure than using the line-of-sight extrapolation, but the resultant standard deviation is still small enough for the practical utility of this method.

  11. Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation

    USGS Publications Warehouse

    Parsons, T.; Toda, S.; Stein, R.S.; Barka, A.; Dieterich, J.H.

    2000-01-01

    We calculate the probability of strong shaking in Istanbul, an urban center of 10 million people, from the description of earthquakes on the North Anatolian fault system in the Marmara Sea during the past 500 years, and test the resulting catalog against the frequency of damage in Istanbul during the preceding millennium. Departing from current practice, we include the time-dependent effect of stress transferred by the 1999 moment magnitude M = 7.4 Izmit earthquake to faults nearer to Istanbul. We find a 62 ± 15% probability (one standard deviation) of strong shaking during the next 30 years and 32 ± 12% during the next decade.

  12. Flexner 2.0-Longitudinal Study of Student Participation in a Campus-Wide General Pathology Course for Graduate Students at The University of Arizona.

    PubMed

    Briehl, Margaret M; Nelson, Mark A; Krupinski, Elizabeth A; Erps, Kristine A; Holcomb, Michael J; Weinstein, John B; Weinstein, Ronald S

    2016-01-01

    Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, "Mechanisms of Human Disease." Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master's: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises.

  13. Flexner 2.0—Longitudinal Study of Student Participation in a Campus-Wide General Pathology Course for Graduate Students at The University of Arizona

    PubMed Central

    Briehl, Margaret M.; Nelson, Mark A.; Krupinski, Elizabeth A.; Erps, Kristine A.; Holcomb, Michael J.; Weinstein, John B.

    2016-01-01

    Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, “Mechanisms of Human Disease.” Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master’s: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises. PMID:28725783

  14. Investigating adsorption/desorption of carbon dioxide in aluminum compressed gas cylinders.

    PubMed

    Miller, Walter R; Rhoderick, George C; Guenther, Franklin R

    2015-02-03

    Between June 2010 and June 2011, the National Institute of Standards and Technology (NIST) gravimetrically prepared a suite of 20 carbon dioxide (CO2) in air primary standard mixtures (PSMs). Ambient mole fraction levels were obtained through six levels of dilution beginning with pure (99.999%) CO2. The sixth level covered the ambient range from 355 to 404 μmol/mol. This level will be used to certify cylinder mixtures of compressed dry whole air from both the northern and southern hemispheres as NIST standard reference materials (SRMs). The first five levels of PSMs were verified against existing PSMs in a balance of air or nitrogen with excellent agreement observed (the average percent difference between the calculated and analyzed values was 0.002%). After the preparation of a new suite of PSMs at ambient level, they were compared to an existing suite of PSMs. It was observed that the analyzed concentration of the new PSMs was less than the calculated gravimetric concentration by as much as 0.3% relative. The existing PSMs had been used in a Consultative Committee for Amount of Substance-Metrology in Chemistry Key Comparison (K-52) in which there was excellent agreement (the NIST-analyzed value was -0.09% different from the calculated value, while the average of the difference for all 18 participants was -0.10%) with those of other National Metrology Institutes and World Meteorological Organization designated laboratories. In order to determine the magnitude of these losses at the ambient level, a series of "daughter/mother" tests were initiated and conducted in which the gas mixture containing CO2 from a "mother" cylinder was transferred into an evacuated "daughter" cylinder. These cylinder pairs were then compared using cavity ring-down spectroscopy under high reproducibility conditions (the average percent relative standard deviation of sample response was 0.02). A ratio of the daughter instrument response to the mother response was calculated, with the resultant deviation from unity being a measure of the CO2 loss or gain. Cylinders from three specialty gas vendors were tested to find the appropriate cylinder in which to prepare the new PSMs. All cylinders tested showed a loss of CO2, presumably to the walls of the cylinder. The vendor cylinders exhibiting the least loss of CO2 were then purchased to be used to gravimetrically prepare the PSMs, adjusting the calculated mole fraction for the loss bias and an uncertainty calculated from this work.
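
    The daughter/mother comparison reduces to a ratio of instrument responses whose deviation from unity estimates the CO2 loss; a sketch with invented readings:

        import numpy as np

        daughter = np.array([398.72, 398.70, 398.75])   # CRDS readings, µmol/mol (hypothetical)
        mother   = np.array([399.90, 399.88, 399.93])
        ratio = daughter.mean() / mother.mean()
        print(f"ratio = {ratio:.5f} -> apparent loss = {1 - ratio:.3%}")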

  15. A method to estimate statistical errors of properties derived from charge-density modelling

    PubMed Central

    Lecomte, Claude

    2018-01-01

    Estimating uncertainties of property values derived from a charge-density model is not straightforward. A methodology, based on calculation of sample standard deviations (SSD) of properties using randomly deviating charge-density models, is proposed with the MoPro software. The parameter shifts applied in the deviating models are generated in order to respect the variance–covariance matrix issued from the least-squares refinement. This ‘SSD methodology’ procedure can be applied to estimate uncertainties of any property related to a charge-density model obtained by least-squares fitting. This includes topological properties such as critical point coordinates, electron density, Laplacian and ellipticity at critical points and charges integrated over atomic basins. Errors on electrostatic potentials and interaction energies are also available now through this procedure. The method is exemplified with the charge density of compound (E)-5-phenylpent-1-enylboronic acid, refined at 0.45 Å resolution. The procedure is implemented in the freely available MoPro program dedicated to charge-density refinement and modelling. PMID:29724964
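
    The "SSD methodology" can be sketched generically: draw parameter sets consistent with the refinement's variance-covariance matrix, evaluate the property of interest for each draw, and report the sample standard deviation of the results. The property function and numbers below are placeholders, not MoPro internals.

        import numpy as np

        def ssd_of_property(p_hat, cov, prop_fn, n_draws=500, seed=0):
            # Monte Carlo SSD: random models respecting the covariance matrix.
            rng = np.random.default_rng(seed)
            draws = rng.multivariate_normal(p_hat, cov, size=n_draws)
            values = np.array([prop_fn(p) for p in draws])
            return values.std(ddof=1)

        p_hat = np.array([1.5, 0.8])                       # refined parameters (toy)
        cov = np.array([[0.01, 0.002], [0.002, 0.04]])     # their covariance (toy)
        print(ssd_of_property(p_hat, cov, lambda p: p[0] * p[1]))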

  16. Plume particle collection and sizing from static firing of solid rocket motors

    NASA Technical Reports Server (NTRS)

    Sambamurthi, Jay K.

    1995-01-01

    A unique dart system has been designed and built at the NASA Marshall Space Flight Center to collect aluminum oxide particles from the plumes of large-scale solid rocket motors, such as the space shuttle RSRM. The capability of this system to collect clean samples from both the vertically fired MNASA (18.3% scaled version of the RSRM) motors and the horizontally fired RSRM motor has been demonstrated. The particle mass-averaged diameters, d43, measured from the samples for the different motors ranged from 8 to 11 μm and were independent of the dart collection surface and the motor burn time. The measured results agreed well with those calculated using the industry-standard Hermsen's correlation, within the standard deviation of the correlation. For each of the samples analyzed from both MNASA and RSRM motors, the distribution of the cumulative mass fraction of the plume oxide particles as a function of particle diameter was best described by a monomodal log-normal distribution with a standard deviation of 0.13-0.15. This distribution agreed well with the theoretical prediction by Salita using the OD3P code for the RSRM motor at the nozzle exit plane.

  17. Evaluation of scaling invariance embedded in short time series.

    PubMed

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10(2). Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with ignorable bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that by means of the proposed method one can precisely estimate the Shannon entropy from limited records.

  18. Evaluation of Scaling Invariance Embedded in Short Time Series

    PubMed Central

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10(2). Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with ignorable bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that by means of the proposed method one can precisely estimate the Shannon entropy from limited records. PMID:25549356

  19. A height-for-age growth reference for children with achondroplasia: Expanded applications and comparison with original reference data.

    PubMed

    Hoover-Fong, Julie; McGready, John; Schulze, Kerry; Alade, Adekemi Yewande; Scott, Charles I

    2017-05-01

    The height-for-age (HA) reference currently used for children with achondroplasia is not adaptable for electronic records or calculation of HA Z-scores. We report new HA curves and tables of mean and standard deviation (SD) HA, for calculating Z-scores, from birth-16 years in achondroplasia. Mixed longitudinal data were abstracted from medical records of achondroplasia patients from a single clinical practice (CIS, 1967-2004). Gender-specific height percentiles (5, 25, 50, 75, 95th) were estimated across the age continuum, using a 2 month window per time point smoothed by a quadratic smoothing algorithm. HA curves were constructed for 0-36 months and 2-16 years to optimize resolution for younger children. Mean monthly height (SD) was tabulated. These novel HA curves were compared to reference data currently in use for children with achondroplasia. 293 subjects (162 male/131 female) contributed 1,005 and 932 height measures, with greater data paucity with age. Mean HA tracked with original achondroplasia norms, particularly through mid-childhood (2-9 years), but with no evidence of a pubertal growth spurt. Standard deviation of height at each month interval increased from birth through 16 years. Birth length was lower in achondroplasia than average stature and, as expected, height deficits increased with age. A new HA reference is available for longitudinal growth assessment in achondroplasia, taking advantage of statistical modeling techniques and allowing for Z-score calculations. This is an important contribution to clinical care and research endeavors for the achondroplasia population. © 2017 Wiley Periodicals, Inc.
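
    With tabulated means and SDs, the Z-score calculation the reference enables is one line; the table values in this sketch are placeholders, not the published achondroplasia data.

        # Height-for-age Z-score from the tabulated mean and SD at the child's age/sex.
        def ha_zscore(height_cm, mean_cm, sd_cm):
            return (height_cm - mean_cm) / sd_cm

        print(round(ha_zscore(height_cm=92.0, mean_cm=88.5, sd_cm=3.6), 2))  # -> 0.97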

  20. Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?

    PubMed

    Ershadi, Saba; Shayanfar, Ali

    2018-03-22

    The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium sensitized analysis methods were calculated by different methods, and the results were compared with sensitivity parameters [lower limit of quantification (LLOQ)] of U.S. Food and Drug Administration guidelines. The details of the calibration curve and standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of LOD and LOQ values calculated by various methods and LLOQ shows a considerable difference. The significant difference of the calculated LOD and LOQ with various methods and LLOQ should be considered in the sensitivity evaluation of spectroscopic methods.
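
    One common calibration-curve convention (among the methods being compared above) defines LOD and LOQ from the blank standard deviation and the curve slope; the values below are invented.

        # LOD = 3.3 * sigma_blank / slope; LOQ = 10 * sigma_blank / slope
        def lod_loq(sigma_blank, slope):
            return 3.3 * sigma_blank / slope, 10 * sigma_blank / slope

        lod, loq = lod_loq(sigma_blank=0.8, slope=52.0)
        print(f"LOD = {lod:.3f}, LOQ = {loq:.3f} (concentration units of the calibrants)")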

  1. Reference standard space hippocampus labels according to the European Alzheimer's Disease Consortium-Alzheimer's Disease Neuroimaging Initiative harmonized protocol: Utility in automated volumetry.

    PubMed

    Wolf, Dominik; Bocchetta, Martina; Preboske, Gregory M; Boccardi, Marina; Grothe, Michel J

    2017-08-01

    A harmonized protocol (HarP) for manual hippocampal segmentation on magnetic resonance imaging (MRI) has recently been developed by an international European Alzheimer's Disease Consortium-Alzheimer's Disease Neuroimaging Initiative project. We aimed at providing consensual certified HarP hippocampal labels in Montreal Neurological Institute (MNI) standard space to serve as reference in automated image analyses. Manual HarP tracings on the high-resolution MNI152 standard space template of four expert certified HarP tracers were combined to obtain consensual bilateral hippocampus labels. Utility and validity of these reference labels is demonstrated in a simple atlas-based morphometry approach for automated calculation of HarP-compliant hippocampal volumes within SPM software. Individual tracings showed very high agreement among the four expert tracers (pairwise Jaccard indices 0.82-0.87). Automatically calculated hippocampal volumes were highly correlated (r = 0.89/0.91 for left/right) with gold standard volumes in the HarP benchmark data set (N = 135 MRIs), with a mean volume difference of 9% (standard deviation 7%). The consensual HarP hippocampus labels in the MNI152 template can serve as a reference standard for automated image analyses involving MNI standard space normalization. Copyright © 2017 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  2. Dynamics of the minority game for patients

    NASA Astrophysics Data System (ADS)

    Kim, Kyungsik; Yoon, Seong-Min; Kul Yum, Myung

    2004-12-01

    We analyze the minority game for patients, and the results known from the minority game are applied to the patient problem consulted at the department of pediatric cardiology. We find numerically the standard deviation and the global efficiency, which are discussed in analogy with the El Farol bar problem. After the score equation and the scaled utility are introduced, the dynamical behavior of our model is discussed for particular strategies. Our results are compared with recent numerical calculations.

  3. Numerical evaluation of moiré pattern in touch sensor module with electrode mesh structure in oblique view

    NASA Astrophysics Data System (ADS)

    Pournoury, M.; Zamiri, A.; Kim, T. Y.; Yurlov, V.; Oh, K.

    2016-03-01

    Capacitive touch sensor screens with metal materials have recently become qualified as substitutes for ITO; however, several obstacles still have to be solved. One of the most important issues is the moiré phenomenon. The visibility problem of the metal mesh in a touch sensor module (TSM) is numerically considered in this paper. Based on the human eye contrast sensitivity function (CSF), the moiré pattern of the TSM electrode mesh structure is simulated with MATLAB software for an 8-inch screen display in oblique view. The standard deviation of the moiré generated by the superposition of electrode mesh and screen image is calculated to find the optimal parameters which provide the minimum moiré visibility. To create the screen pixel array and mesh electrode, a rectangular function is used. The filtered image, in the frequency domain, is obtained by multiplication of the Fourier transform of the finite mesh pattern (the product of screen pixel and mesh electrode) with the calculated CSF function for three different observer distances (L = 200, 300 and 400 mm). It is observed that the discrepancy between analytical and numerical results is less than 0.6% for a 400 mm viewer distance. Moreover, in the case of oblique view, due to consideration of the thickness of the finite film between mesh electrodes and screen, different points of minimum standard deviation of the moiré pattern are predicted compared to normal view.

  4. Reference Values for Human Posture Measurements Based on Computerized Photogrammetry: A Systematic Review.

    PubMed

    Macedo Ribeiro, Ana Freire; Bergmann, Anke; Lemos, Thiago; Pacheco, Antônio Guilherme; Mello Russo, Maitê; Santos de Oliveira, Laura Alice; de Carvalho Rodrigues, Erika

    The main objective of this study was to review the literature to identify reference values for angles and distances of body segments related to upright posture in healthy adult women with the Postural Assessment Software (PAS/SAPO). Electronic databases (BVS, PubMed, SciELO and Scopus) were searched using the following descriptors: evaluation, posture, photogrammetry, physical therapy, postural alignment, postural assessment, and physiotherapy. Studies that performed postural evaluation in healthy adult women with PAS/SAPO and were published in English, Portuguese and Spanish between the years 2005 and 2014 were included. Four studies met the inclusion criteria. Data from the included studies were grouped to establish the statistical descriptors (mean, variance, and standard deviation) of the body angles and distances. A total of 29 variables were assessed (10 in the anterior views, 16 in the lateral right and left views, and 3 in the posterior views), and their respective means and standard deviations were calculated. Reference values for the anterior and posterior views showed no symmetry between the right and left sides of the body in the frontal plane. There were also small differences in the calculated reference values for the lateral view. The proposed reference values for quantitative evaluation of the upright posture in healthy adult women estimated in the present study using PAS/SAPO could guide future studies and help clinical practice. Copyright © 2017. Published by Elsevier Inc.

  5. The application of laser triangulation method on the blind guidance

    NASA Astrophysics Data System (ADS)

    Wu, Jih-Huah; Wang, Jinn-Der; Fang, Wei; Shan, Yi-Chia; Ma, Shih-Hsin; Kao, Hai-Ko; Jiang, Joe-Air; Lee, Yun-Parn

    2011-08-01

    A new apparatus for guiding the blind is proposed in this paper. The optical triangulation method was used to realize the system. The main components comprise a notebook computer, a camera and two laser modules. One laser module emits a light line beam on the vertical axis; the other emits a light line beam on a tilted horizontal axis. The track of the light line beam on the ground or on an object is captured by the camera, and the image is sent to the notebook computer for calculation. The system can calculate the object width and the distance between the object and the blind user from the light line positions on the image. Based on the experiment, the distance between the test object and the blind user can be measured with a standard deviation of less than 3% within the range of 60 to 150 cm. The test object width can be measured with a standard deviation of less than 1% within the range of 60 to 150 cm. To save power, the laser modules are switched on/off with a trigger pulse, and to reduce the computational load, the two laser modules are switched on alternately. Besides this, a band-pass filter is used to block all signals except the specific laser light, which increases the signal-to-noise ratio.
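
    A generic triangulation range equation illustrates the geometry: with baseline b between laser and camera and the laser stripe imaged at angle θ off the camera axis, the range is d = b / tan(θ). The parameters below are illustrative, not the paper's calibration.

        import math

        def range_from_pixel(b_cm, focal_px, pixel_offset_px):
            theta = math.atan2(pixel_offset_px, focal_px)  # angle subtended on the sensor
            return b_cm / math.tan(theta)                  # d = b / tan(theta)

        print(round(range_from_pixel(b_cm=10.0, focal_px=800.0, pixel_offset_px=85.0), 1))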

  6. Poster — Thur Eve — 13: Inter-Fraction Target Movement in Image-Guided Radiation Therapy of Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Congwu; Zeng, Grace G.; Department of Radiation Oncology, University of Toronto, Toronto, ON

    2014-08-15

    We investigated the setup variations over the treatment courses of 113 patients with intact prostate treated with 78Gy/39fx. Institutional standard bladder and bowel preparation and image guidance protocols were used in CT simulation and treatment. The RapidArc treatment plans were optimized in the Varian Eclipse treatment planning system and delivered on Varian 2100X Clinacs equipped with an On-Board Imager to localize the target before beam-on. The setup variations were calculated in terms of mean and standard deviation of couch shifts. No correlation was observed between the mean shift and standard deviation over the treatment course and patient age, initial prostate volume and rectum size. The mean shifts in the first and last 5 fractions are highly correlated (P < 10(-10)) while the correlation of the standard deviations cannot be determined. The Mann-Kendall tests indicate trends of the mean daily Ant-Post and Sup-Inf shifts of the group. The target is inferior by ∼1 mm to the planned position when the treatment starts and moves superiorly, approaching the planned position at the 10th fraction, and then gradually moves back inferiorly by ∼1 mm in the remaining fractions. In the Ant-Post direction, the prostate gradually moves posteriorly during the treatment course, from a mean shift of ∼2.5 mm in the first fraction to ∼1 mm in the last fraction. It may be related to a systematic rectum size change in the progress of treatment. The biased mean shifts in the Ant-Post and Sup-Inf directions of most patients suggest a systematically larger rectum and smaller bladder during treatment than at CT simulation.

  7. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    PubMed

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
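
    The recommended practice translates into two small formulas: a one-sided chi-square upper confidence limit for the population SD, fed into the usual two-group sample-size equation. A sketch under those standard formulas (the pilot values are invented):

        import math
        from scipy import stats

        def sd_upper_cl(s, n, level=0.80):
            # One-sided upper confidence limit for sigma from sample SD s (n obs).
            return s * math.sqrt((n - 1) / stats.chi2.ppf(1 - level, n - 1))

        def n_per_group(sd, delta, alpha=0.05, power=0.80):
            # Two-sample comparison of means, equal groups, normal approximation.
            za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
            return math.ceil(2 * ((za + zb) * sd / delta) ** 2)

        sd_plan = sd_upper_cl(40.0, 20, level=0.80)   # the "80% UCL of SD" rule above
        print(sd_plan, n_per_group(sd_plan, delta=22.0))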

  8. ENDF/B-VII.0 Data Testing Using 1,172 Critical Assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechaty, E F; Cullen, D E

    2007-10-01

    In order to test the ENDF/B-VII.0 neutron data library [1], 1,172 critical assemblies from [2] have been calculated using the Monte Carlo transport code TART [3]. TART's 'best' physics was used for all of these calculations; this included continuous energy cross sections, delayed neutrons with their spectrum, which is softer than that of prompt neutrons, unresolved resonance region self-shielding, and thermal scattering (free atom for all materials, plus thermal scattering law data S(α,β) when available). In this first pass through the assemblies the objective was to 'quickly' test the validity of the ENDF/B-VII.0 data [1], the assembly models as defined in [2] and coded for use with TART, and TART's physics treatment [3] of these assemblies. With TART we have the option of running criticality problems until K-eff has been calculated to an acceptable input accuracy. In order to 'quickly' calculate all of these assemblies, K-eff was calculated in each case to +/- 0.002. For these calculations the assemblies were divided into ten types based on fuel (mixed, Pu239, U233, U235) and median fission energy (Fast, Midi, Slow). A table is provided that shows a summary of these results. This is followed by details for every assembly, and statistical information about the distribution of K-eff for each type of assembly. After a review of these results to eliminate any obvious errors in ENDF/B data, assembly models, or TART physics, all assemblies will be run again to a higher precision. Only after this second run is finished will we have highly precise results. Until then the results presented here should only be interpreted as approximate values of K-eff with a standard deviation of +/- 0.002; for such a large number of assemblies we expected the results to be approximately normal, with a spread out to several times the standard deviation; see the calculated statistical distributions and their comparisons to a normal distribution.

  9. A Note on Standard Deviation and Standard Error

    ERIC Educational Resources Information Center

    Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth

    2010-01-01

    Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.

  10. Reduction of Averaging Time for Evaluation of Human Exposure to Radiofrequency Electromagnetic Fields from Cellular Base Stations

    NASA Astrophysics Data System (ADS)

    Kim, Byung Chan; Park, Seong-Ook

    In order to determine exposure compliance with the electromagnetic fields from a base station's antenna in the far-field region, we should calculate the spatially averaged field value in a defined space. This value is calculated based on the measured values obtained at several points within the restricted space. According to the ICNIRP guidelines, at each point in the space, the reference levels are averaged over any 6 min (from 100 kHz to 10 GHz) for the general public. Therefore, the more points we use, the longer the measurement time becomes. For practical application, it is very advantageous to spend less time on measurement. In this paper, we analyzed the difference in average values between 6 min and shorter periods and compared it with the standard uncertainty for measurement drift. Based on the standard deviation from the 6 min averaging value, the proposed minimum averaging time is 1 min.
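
    A hedged sketch of the comparison described, using synthetic field-strength samples and an assumed 1 Hz sampling rate (neither is from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1.0                                                 # 1 sample/s (assumed)
e = 1.0 + 0.05 * rng.standard_normal(int(6 * 60 * fs))   # field strength, V/m

full = e.mean()                                          # 6-min average
for minutes in (1, 2, 3):
    w = int(minutes * 60 * fs)
    windows = e[: (len(e) // w) * w].reshape(-1, w).mean(axis=1)
    dev = np.abs(windows - full).max()
    print(f"{minutes} min windows: max |deviation| from 6-min mean = {dev:.4f} V/m")
```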

  11. [ELEMENTS OF A SYSTEMATIC APPROACH TO HYGIENIC REGULATION OF XENOBIOTICS].

    PubMed

    Shtabskiy, B M; Gzhegotskiy, M R; Shafran, L M

    2016-01-01

    Hygienic standardization (HS) of chemicals remains one of the effective ways to ensure the chemical safety of the population. Hygienic standards (such as maximum allowable concentrations, MACs) are interrelated and aggregated into coherent systems. The task of the study was therefore to establish the logic of inter-standard relations between the existing standards and to actualize legitimate interrelations such as MACwz/MACatm (i.e., to systematize standards) and CL₅₀/MACwz (reflecting the ratio of reliability). In the suggested systemic approach, the benchmark indices of the proposed HS system are the values of the MACwz. Standards for other media, including atmospheric air, may be only compartments of MACwz. The performed studies and calculations allowed the systemic approach to be justified and implemented in the practice of HS in Ukraine. There is a need for a further search for additional solutions when the LC₅₀ cannot be reached in the experiment, for justification of standards for the population in the absence of MACwz, and for comparison with the data of normative databases of other countries. It is necessary to introduce the value of permissible deviation from the requirements of systemness, to embody conditions (1)-(7) into the general principle of the prohibition of greater deviation, and to harmonize acting and newly introduced standards within the framework of the modern ideology and methods of HS of harmful substances. This opens up broad prospects for a new phase of HS and a significant increase in the reliability of results obtained by various methods and in different laboratories.

  12. Analytical quality goals derived from the total deviation from patients' homeostatic set points, with a margin for analytical errors.

    PubMed

    Bolann, B J; Asberg, A

    2004-01-01

    The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) the stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
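
    A small check of the random-error part of these requirements (my sketch, not the paper's derivation):

```python
import math

s_intra = 1.0                  # intra-individual (biological) SD, normalized
s_analytical = 0.15 * s_intra  # maximum allowed by the derived quality goal

s_total = math.sqrt(s_intra**2 + s_analytical**2)
print(f"combined SD inflation: {100 * (s_total / s_intra - 1):.1f}%")  # ~1.1%
# The paper's ~12% margin also budgets for a small undetected systematic
# error, not just the random analytical component shown here.
```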

  13. Couch height–based patient setup for abdominal radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohira, Shingo; Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita; Ueda, Yoshihiro

    2016-04-01

    There are 2 methods commonly used for patient positioning in the anterior-posterior (A-P) direction: one is the skin mark patient setup method (SMPS) and the other is the couch height–based patient setup method (CHPS). This study compared the setup accuracy of these 2 methods for abdominal radiation therapy. The enrollment for this study comprised 23 patients with pancreatic cancer. For treatments (539 sessions), patients were set up by using isocenter skin marks, and thereafter the treatment couch was shifted so that the distance between the isocenter and the upper side of the treatment couch was equal to that indicated on the computed tomographic (CT) image. Setup deviation in the A-P direction for CHPS was measured by matching the spine of the digitally reconstructed radiograph (DRR) of a lateral beam at simulation with that of the corresponding time-integrated electronic portal image. For SMPS with no correction (SMPS/NC), setup deviation was calculated based on the couch-level difference between SMPS and CHPS. SMPS/NC was corrected using 2 off-line correction protocols: the no action level (SMPS/NAL) and extended NAL (SMPS/eNAL) protocols. Margins to compensate for deviations were calculated using the Stroom formula. A-P deviation > 5 mm was observed in 17% of SMPS/NC, 4% of SMPS/NAL, and 4% of SMPS/eNAL sessions, but in only one CHPS session. For SMPS/NC, 7 patients (30%) showed deviations at an increasing rate of > 0.1 mm/fraction, but for CHPS, no such trend was observed. The standard deviations (SDs) of systematic error (Σ) were 2.6, 1.4, 0.6, and 0.8 mm and the root mean squares of random error (σ) were 2.1, 2.6, 2.7, and 0.9 mm for SMPS/NC, SMPS/NAL, SMPS/eNAL, and CHPS, respectively. Margins to compensate for the deviations were wide for SMPS/NC (6.7 mm), smaller for SMPS/NAL (4.6 mm) and SMPS/eNAL (3.1 mm), and smallest for CHPS (2.2 mm). Achieving better setup with smaller margins, CHPS appears to be a reproducible method for abdominal patient setup.
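
    The reported margins are consistent with the Stroom margin recipe, margin = 2Σ + 0.7σ; a quick check against the numbers above:

```python
# Systematic SD (Sigma) and random RMS (sigma) per setup method, in mm,
# taken from the abstract above.
systematic = {"SMPS/NC": 2.6, "SMPS/NAL": 1.4, "SMPS/eNAL": 0.6, "CHPS": 0.8}
random_rms = {"SMPS/NC": 2.1, "SMPS/NAL": 2.6, "SMPS/eNAL": 2.7, "CHPS": 0.9}

for setup in systematic:
    margin = 2.0 * systematic[setup] + 0.7 * random_rms[setup]
    print(f"{setup}: margin = {margin:.1f} mm")
# Prints 6.7, 4.6, 3.1, and 2.2 mm -- matching the reported margins.
```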

  14. Evaluation of the Eclipse eMC algorithm for bolus electron conformal therapy using a standard verification dataset.

    PubMed

    Carver, Robert L; Sprunger, Conrad P; Hogstrom, Kenneth R; Popple, Richard A; Antolak, John A

    2016-05-08

    The purpose of this study was to evaluate the accuracy and calculation speed of electron dose distributions calculated by the Eclipse electron Monte Carlo (eMC) algorithm for use with bolus electron conformal therapy (ECT). The recent commercial availability of bolus ECT technology requires further validation of the eMC dose calculation algorithm. eMC-calculated electron dose distributions for bolus ECT have been compared to previously measured TLD-dose points throughout patient-based cylindrical phantoms (retromolar trigone and nose), whose axial cross sections were based on the mid-PTV (planning target volume) CT anatomy. The phantoms consisted of SR4 muscle substitute, SR4 bone substitute, and air. The treatment plans were imported into the Eclipse treatment planning system, and electron dose distributions were calculated using 1% and < 0.2% statistical uncertainties. The accuracy of the dose calculations with moderate smoothing and with no smoothing was evaluated. Dose differences (eMC-calculated dose minus measured dose) were evaluated in terms of absolute dose difference, where 100% equals the given dose, as well as distance to agreement (DTA). Dose calculations were also evaluated for calculation speed. Results from the eMC for the retromolar trigone phantom using 1% statistical uncertainty without smoothing showed calculated dose at 89% (41/46) of the measured TLD-dose points within 3% dose difference or 3 mm DTA of the measured value. The average dose difference was -0.21%, and the net standard deviation was 2.32%. Differences as large as 3.7% occurred immediately distal to the mandible bone. Results for the nose phantom, using 1% statistical uncertainty without smoothing, showed calculated dose at 93% (53/57) of the measured TLD-dose points within 3% dose difference or 3 mm DTA. The average dose difference was 1.08%, and the net standard deviation was 3.17%. Differences as large as 10% occurred lateral to the nasal air cavities. Including smoothing had insignificant effects on the accuracy of the retromolar trigone phantom calculations, but reduced the accuracy of the nose phantom calculations in the high-gradient dose areas. Dose calculation times with 1% statistical uncertainty for the retromolar trigone and nose treatment plans were 30 s and 24 s, respectively, using 16 processors (Intel Xeon E5-2690, 2.9 GHz) on a framework agent server (FAS). In comparison, the eMC was significantly more accurate than the pencil beam algorithm (PBA). The eMC has comparable accuracy to the pencil beam redefinition algorithm (PBRA) used for bolus ECT planning and has acceptably low dose calculation times. The eMC accuracy decreased when smoothing was used in high-gradient dose regions. The eMC accuracy was consistent with that previously reported for the eMC electron dose algorithm and shows that the algorithm is suitable for clinical implementation of bolus ECT.

  15. 14 CFR Appendix C to Part 91 - Operations in the North Atlantic (NAT) Minimum Navigation Performance Specifications (MNPS) Airspace

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... defined in section 1 of this appendix is as follows: (a) The standard deviation of lateral track errors shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean... standard deviation about the mean encompasses approximately 68 percent of the data and plus or minus 2...

  16. Repeatable source, site, and path effects on the standard deviation for empirical ground-motion prediction models

    USGS Publications Warehouse

    Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.

    2011-01-01

    In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.

  17. Monitor unit settings for intensity modulated beams delivered using a step-and-shoot approach.

    PubMed

    Sharpe, M B; Miller, B M; Yan, D; Wong, J W

    2000-12-01

    Two linear accelerators have been commissioned for delivering IMRT treatments using a step-and-shoot approach. To assess beam startup stability for 6 and 18 MV x-ray beams, dose delivered per monitor unit (MU), beam flatness, and beam symmetry were measured as a function of the total number of MU delivered at a clinical dose rate of 400 MU per minute. Relative to a 100 MU exposure, the dose delivered per MU by both linear accelerators was found to be within +/-2% for exposures larger than 4 MU. Beam flatness and symmetry also met accepted quality assurance standards for a minimum exposure of 4 MU. We have found that the performance of the two machines under study is well suited to the delivery of step-and-shoot IMRT. A system of dose calculation has also been commissioned for applying head scatter corrections to fields as small as 1x1 cm2. The accuracy and precision of the relative output calculations in water was validated for small fields and fields offset from the axis of collimator rotation. For both 6 and 18 MV x-ray beams, the dose per MU calculated in a water phantom agrees with measured data to within 1% on average, with a maximum deviation of 2.5%. The largest output factor discrepancies were seen when the actual radiation field size deviated from the set field size. The measured output in water can vary by as much as 16% for 1x1 cm2 fields when the measured field size deviates from the set field size by 2 mm. For a 1 mm deviation, this discrepancy was reduced to 8%. Steps should be taken to ensure collimator precision is tightly controlled when using such small fields. If this is not possible, very small fields should not contribute a significant portion of the treatment, or uncertainties in the collimator position may affect the accuracy of the dose delivered.

  18. Improving the quality of child anthropometry: Manual anthropometry in the Body Imaging for Nutritional Assessment Study (BINA)

    PubMed Central

    2017-01-01

    Anthropometric data collected in clinics and surveys are often inaccurate and unreliable due to measurement error. The Body Imaging for Nutritional Assessment Study (BINA) evaluated the ability of 3D imaging to correctly measure stature, head circumference (HC), and mid-upper arm circumference (MUAC) for children under five years of age. This paper describes the protocol for, and the quality of, manual anthropometric measurements in BINA, a study conducted in 2016-17 in Atlanta, USA. Quality was evaluated by examining digit preference, biological plausibility of z-scores, z-score standard deviations, and reliability. We calculated z-scores and analyzed plausibility based on the 2006 WHO Child Growth Standards (CGS). For reliability, we calculated the intra- and inter-observer Technical Error of Measurement (TEM) and the Intraclass Correlation Coefficient (ICC). We found low digit preference; 99.6% of z-scores were biologically plausible, with z-score standard deviations ranging from 0.92 to 1.07. Total TEM was 0.40 for stature, 0.28 for HC, and 0.25 for MUAC, in centimeters. ICC ranged from 0.99 to 1.00. The quality of manual measurements in BINA was high and similar to that of the anthropometric data used to develop the WHO CGS. We attributed the high quality to vigorous training, motivated and competent field staff, reduction of non-measurement error through the use of technology, and reduction of measurement error through adequate monitoring and supervision. Our anthropometry measurement protocol, which builds on and improves upon the protocol used for the WHO CGS, can be used to improve anthropometric data quality. The discussion illustrates the need to standardize anthropometric data quality assessment, and we conclude that BINA can provide a valuable evaluation of 3D imaging for child anthropometry because there is comparison to gold-standard manual measurements. PMID:29240796
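
    A common formula for intra-observer TEM from duplicate measurements is TEM = sqrt(Σd²/2n); the sketch below applies it to hypothetical duplicate stature readings (not the BINA data):

```python
import numpy as np

# Hypothetical duplicate stature measurements (cm) by one observer.
first  = np.array([87.2, 92.5, 101.3, 95.0, 88.8])
second = np.array([87.6, 92.1, 101.8, 94.6, 89.1])

d = first - second
tem = np.sqrt((d ** 2).sum() / (2 * len(d)))   # TEM = sqrt(sum(d^2) / 2n)
print(f"intra-observer TEM = {tem:.2f} cm")
```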

  19. Comment on ‘Monte Carlo calculated microdosimetric spread for cell nucleus-sized targets exposed to brachytherapy 125I and 192Ir sources and 60Co cell irradiation’

    NASA Astrophysics Data System (ADS)

    Lindborg, Lennart; Lillhök, Jan; Grindborg, Jan-Erik

    2015-11-01

    The relative standard deviation, σr,D, of calculated multi-event distributions of specific energy for 60Co γ rays was reported by the authors F Villegas, N Tilly and A Ahnesjö (Phys. Med. Biol. 58 6149-62). The calculations were made with an upgraded version of the Monte Carlo code PENELOPE. When the results were compared to results derived from experiments with the variance method and simulated tissue equivalent volumes in the micrometre range, a difference of about 50% was found. Villegas et al suggest wall-effects as the likely explanation for the difference. In this comment we review some publications on wall-effects and conclude that wall-effects are not a likely explanation.

  20. Comment on 'Monte Carlo calculated microdosimetric spread for cell nucleus-sized targets exposed to brachytherapy (125)I and (192)Ir sources and (60)Co cell irradiation'.

    PubMed

    Lindborg, Lennart; Lillhök, Jan; Grindborg, Jan-Erik

    2015-11-07

    The relative standard deviation, σr,D, of calculated multi-event distributions of specific energy for (60)Co γ rays was reported by the authors F Villegas, N Tilly and A Ahnesjö (Phys. Med. Biol. 58 6149-62). The calculations were made with an upgraded version of the Monte Carlo code PENELOPE. When the results were compared to results derived from experiments with the variance method and simulated tissue equivalent volumes in the micrometre range, a difference of about 50% was found. Villegas et al suggest wall-effects as the likely explanation for the difference. In this comment we review some publications on wall-effects and conclude that wall-effects are not a likely explanation.

  1. Cosmological power spectrum in a noncommutative spacetime

    NASA Astrophysics Data System (ADS)

    Kothari, Rahul; Rath, Pranati K.; Jain, Pankaj

    2016-09-01

    We propose a generalized star product that deviates from the standard one when the fields are considered at different spacetime points by introducing a form factor in the standard star product. We also introduce a recursive definition by which we calculate the explicit form of the generalized star product at any number of spacetime points. We show that our generalized star product is associative and cyclic at linear order. As a special case, we demonstrate that our recursive approach can be used to prove the associativity of standard star products for same or different spacetime points. The introduction of a form factor has no effect on the standard Lagrangian density in a noncommutative spacetime because it reduces to the standard star product when spacetime points become the same. We show that the generalized star product leads to physically consistent results and can fit the observed data on hemispherical anisotropy in the cosmic microwave background radiation.

  2. A better norm-referenced grading using the standard deviation criterion.

    PubMed

    Chan, Wing-shing

    2014-01-01

    The commonly used norm-referenced grading assigns grades to rank-ordered students in fixed percentiles. It has the disadvantage of ignoring the actual distance of scores among students. A simple norm-referenced grading via standard deviation is suggested for routine educational grading. The number of standard deviations of a student's score from the class mean was used as the common yardstick to measure achievement level. The cumulative probability of a normal distribution was referenced to help decide the number of students included within a grade. Results of the top 12 students from a medical examination were used to illustrate this grading method. Grading by standard deviation seemed to produce better cutoffs, allocating an appropriate grade to students more according to their differential achievements, and had less chance of creating arbitrary cutoffs between two similarly scored students than grading by fixed percentile. Grading by standard deviation has more advantages and is more flexible than grading by fixed percentile for norm-referenced grading.
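
    A minimal sketch of the idea (the scores and the grade bands at z = 1, 0, and -1 are hypothetical, not the author's cutoffs):

```python
import numpy as np

scores = np.array([92, 88, 85, 83, 81, 80, 78, 76, 74, 71, 68, 60])  # example
z = (scores - scores.mean()) / scores.std(ddof=1)

def grade(zscore):
    # Hypothetical bands; cumulative normal probabilities guide how many
    # students each band is expected to contain.
    if zscore >= 1.0:
        return "A"
    if zscore >= 0.0:
        return "B"
    if zscore >= -1.0:
        return "C"
    return "D"

for s, zi in zip(scores, z):
    print(f"score {s}: z = {zi:+.2f}, grade {grade(zi)}")
```

    Cutoffs then fall where scores genuinely separate, instead of splitting near-identical scores at an arbitrary percentile boundary.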

  3. Personal Background Preparation Survey for early identification of nursing students at risk for attrition.

    PubMed

    Johnson, Craig W; Johnson, Ronald; Kim, Mira; McKee, John C

    2009-11-01

    During 2004 and 2005 orientations, all 187 and 188 new matriculates, respectively, in two southwestern U.S. nursing schools completed Personal Background and Preparation Surveys (PBPS) in the first predictive validity study of a diagnostic and prescriptive instrument for averting adverse academic status events (AASE) among nursing or health science professional students. A one standard deviation increase in PBPS risks (p < 0.05) multiplied the odds of first-year or second-year AASE by approximately 150%, controlling for school affiliation and underrepresented minority student (URMS) status. AASE odds one standard deviation above the mean were 216% to 250% of those one standard deviation below the mean. Odds of first-year or second-year AASE for URMS one standard deviation above the 2004 PBPS mean were 587% of those for non-URMS one standard deviation below the mean. The PBPS consistently and significantly facilitated early identification of nursing students at risk for AASE, enabling proactive targeting of interventions for risk amelioration and AASE or attrition prevention. Copyright 2009, SLACK Incorporated.

  4. Demonstration of the Gore Module for Passive Ground Water Sampling

    DTIC Science & Technology

    2014-06-01

    Snippets from the report's front matter and results: ACRONYMS AND ABBREVIATIONS — % RSD, percent relative standard deviation; 12DCA, 1,2-dichloroethane; 112TCA, 1,1,2-trichloroethane; 1122TetCA, …; Analysis of Variance; ROD, Record of Decision; RSD, relative standard deviation; SBR, Southern Bush River; SVOC, semi-volatile organic compound. … replicate samples had a relative standard deviation (RSD) that was 20% or less. For the remaining analytes (PCE, cDCE, and chloroform), at least 70…

  5. Impact of baseline systolic blood pressure on visit-to-visit blood pressure variability: the Kailuan study.

    PubMed

    Wang, Anxin; Li, Zhifang; Yang, Yuling; Chen, Guojuan; Wang, Chunxue; Wu, Yuntao; Ruan, Chunyu; Liu, Yan; Wang, Yilong; Wu, Shouling

    2016-01-01

    To investigate the relationship between baseline systolic blood pressure (SBP) and visit-to-visit blood pressure variability in a general population. This is a prospective longitudinal cohort study on cardiovascular risk factors and cardiovascular or cerebrovascular events. Study participants attended a face-to-face interview every 2 years. Blood pressure variability was defined using the standard deviation and coefficient of variation of all SBP values at baseline and follow-up visits. The coefficient of variation is the ratio of the standard deviation to the mean SBP. We used multivariate linear regression models to test the relationships between SBP and standard deviation, and between SBP and coefficient of variation. In total, 43,360 participants (mean age: 48.2 ± 11.5 years) were selected. In multivariate analysis, after adjustment for potential confounders, baseline SBPs < 120 mmHg were inversely related to standard deviation (P < 0.001) and coefficient of variation (P < 0.001). In contrast, baseline SBPs ≥ 140 mmHg were significantly positively associated with standard deviation (P < 0.001) and coefficient of variation (P < 0.001). Baseline SBPs of 120-140 mmHg were associated with the lowest standard deviation and coefficient of variation. The associations between baseline SBP and standard deviation, and between SBP and coefficient of variation, during follow-up followed a U-shaped curve. Both lower and higher baseline SBPs were associated with increased blood pressure variability. To control blood pressure variability, a good target SBP range for a general population might be 120-139 mmHg.
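
    A minimal sketch of the two variability metrics used, on hypothetical visit SBP values:

```python
import numpy as np

sbp = np.array([138.0, 145.0, 132.0, 150.0, 141.0])  # hypothetical visits, mmHg

sd = sbp.std(ddof=1)
cv = sd / sbp.mean()            # coefficient of variation = SD / mean
print(f"SD = {sd:.1f} mmHg, CV = {100 * cv:.1f}%")
```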

  6. Flexner 3.0-Democratization of Medical Knowledge for the 21st Century: Teaching Medical Science Using K-12 General Pathology as a Gateway Course.

    PubMed

    Weinstein, Ronald S; Krupinski, Elizabeth A; Weinstein, John B; Graham, Anna R; Barker, Gail P; Erps, Kristine A; Holtrust, Angelette L; Holcomb, Michael J

    2016-01-01

    A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school (F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the students' expectations. One class voted K-12 general pathology their "elective course-of-the-year."

  7. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    PubMed

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
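
    A sketch of the scenario where the minimum, median, maximum, and sample size are reported; the estimators below follow the formulas proposed in this paper as commonly cited (mean ≈ (a + 2m + b)/4 and a range-based SD that incorporates n through the normal quantile):

```python
from scipy.stats import norm

def estimate_mean_sd(a, m, b, n):
    """Estimate sample mean and SD from min (a), median (m), max (b), and n."""
    mean = (a + 2.0 * m + b) / 4.0
    sd = (b - a) / (2.0 * norm.ppf((n - 0.375) / (n + 0.25)))
    return mean, sd

mean, sd = estimate_mean_sd(a=10.0, m=25.0, b=45.0, n=50)  # example numbers
print(f"estimated mean = {mean:.2f}, estimated SD = {sd:.2f}")
```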

  8. Flexner 3.0—Democratization of Medical Knowledge for the 21st Century

    PubMed Central

    Krupinski, Elizabeth A.; Weinstein, John B.; Graham, Anna R.; Barker, Gail P.; Erps, Kristine A.; Holtrust, Angelette L.; Holcomb, Michael J.

    2016-01-01

    A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school (F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the students' expectations. One class voted K-12 general pathology their "elective course-of-the-year." PMID:28725762

  9. Determination of absorption changes from moments of distributions of times of flight of photons: optimization of measurement conditions for a two-layered tissue model.

    PubMed

    Liebert, Adam; Wabnitz, Heidrun; Elster, Clemens

    2012-05-01

    Time-resolved near-infrared spectroscopy allows for depth-selective determination of absorption changes in the adult human head that facilitates separation between cerebral and extra-cerebral responses to brain activation. The aim of the present work is to analyze which combinations of moments of measured distributions of times of flight (DTOF) of photons and source-detector separations are optimal for the reconstruction of absorption changes in a two-layered tissue model corresponding to extra- and intra-cerebral compartments. To this end we calculated the standard deviations of the derived absorption changes in both layers by considering photon noise and a linear relation between the absorption changes and the DTOF moments. The results show that the standard deviation of the absorption change in the deeper (superficial) layer increases (decreases) with the thickness of the superficial layer. It is confirmed that for the deeper layer the use of higher moments, in particular the variance of the DTOF, leads to an improvement. For example, when measurements at four different source-detector separations between 8 and 35 mm are available and a realistic thickness of the upper layer of 12 mm is assumed, the inclusion of the change in mean time of flight, in addition to the change in attenuation, leads to a reduction of the standard deviation of the absorption change in the deeper tissue layer by a factor of 2.5. A reduction by another 4% can be achieved by additionally including the change in variance.

  10. Impact of exercise on diurnal and nocturnal markers of glycaemic variability and oxidative stress in obese individuals with type 2 diabetes or impaired glucose tolerance.

    PubMed

    Farabi, Sarah S; Carley, David W; Smith, Donald; Quinn, Lauretta

    2015-09-01

    We measured the effects of a single bout of exercise on diurnal and nocturnal oxidative stress and glycaemic variability in obese subjects with type 2 diabetes mellitus or impaired glucose tolerance versus obese healthy controls. Subjects (in random order) performed either a single 30-min bout of moderate-intensity exercise or remained sedentary for 30 min at two separate visits. To quantify glycaemic variability, the standard deviation of glucose (measured by a continuous glucose monitoring system) and the continuous overlapping net glycaemic action of 1-h intervals (CONGA-1) were calculated for three 12-h intervals during each visit. Oxidative stress was measured by 15-isoprostane F(2t) levels in urine collections for matching 12-h intervals. Exercise reduced daytime glycaemic variability (ΔCONGA-1 = -12.62 ± 5.31 mg/dL, p = 0.04) and urinary isoprostanes (Δ15-isoprostane F(2t) = -0.26 ± 0.12 ng/mg, p = 0.04) in the type 2 diabetes mellitus/impaired glucose tolerance group. The daytime exercise-induced change in urinary 15-isoprostane F(2t) was significantly correlated with both the daytime standard deviation (r = 0.68, p = 0.03) and the subsequent overnight standard deviation (r = 0.73, p = 0.027) in the type 2 diabetes mellitus/impaired glucose tolerance group. Exercise significantly impacts the relationship between diurnal oxidative stress and nocturnal glycaemic variability in individuals with type 2 diabetes mellitus/impaired glucose tolerance. © The Author(s) 2015.
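
    CONGA-1 is conventionally the standard deviation of differences between each glucose reading and the reading one hour earlier; a hedged sketch on synthetic CGM data (5-min sampling assumed, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 6.0 * np.pi, 144)                 # 12 h at 5-min sampling
glucose = 120.0 + 25.0 * np.sin(t) + rng.normal(0.0, 8.0, t.size)  # mg/dL

lag = 12                                               # 12 samples = 1 hour
diffs = glucose[lag:] - glucose[:-lag]
conga1 = diffs.std(ddof=1)
print(f"SD = {glucose.std(ddof=1):.1f} mg/dL, CONGA-1 = {conga1:.1f} mg/dL")
```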

  11. The biologic error in gestational length related to the use of the first day of last menstrual period as a proxy for the start of pregnancy.

    PubMed

    Nakling, Jakob; Buhaug, Harald; Backe, Bjorn

    2005-10-01

    In a large unselected population of normal spontaneous pregnancies, to estimate the biologic variation of the interval from the first day of the last menstrual period to the start of pregnancy, and the biologic variation of gestational length to delivery; and to estimate the random error of routine ultrasound assessment of gestational age in the mid-second trimester. Cohort study of 11,238 singleton pregnancies, with spontaneous onset of labour and a reliable last menstrual period. The day of delivery was predicted with two independent methods: according to the rule of Nägele, and based on ultrasound examination in gestational weeks 17-19. For both methods, the mean difference between the observed and predicted day of delivery was calculated. The variances of the differences were combined to estimate the variances of the two partitions of pregnancy. The biologic variation of the time from last menstrual period to pregnancy start was estimated at 7.0 days (standard deviation), and the standard deviation of the time to spontaneous delivery was estimated at 12.4 days. The estimate of the standard deviation of the random error of ultrasound-assessed foetal age was 5.2 days. Even when the last menstrual period is reliable, the biologic variation of the time from the last menstrual period to the real start of pregnancy is substantial and must be taken into account. Reliable information about the first day of the last menstrual period is not equivalent to reliable information about the start of pregnancy.

  12. Gait coordination in pregnancy: transverse pelvic and thoracic rotations and their relative phase.

    PubMed

    Wu, Wenhua; Meijer, Onno G; Lamoth, Claudine J C; Uegaki, Kimi; van Dieën, Jaap H; Wuisman, Paul I J M; de Vries, Johanna I P; Beek, Peter J

    2004-06-01

    To examine the effects of pregnancy on the coordination of transverse pelvic and thoracic rotations during gait. Gait of healthy pregnant women and nulligravidae was studied during treadmill walking at predetermined velocities ranging from 0.17 to 1.72 m/s. Pelvis-thorax coordination during walking is altered in women with postpartum pregnancy-related pelvic girdle pain, but this coordination has not been investigated in a healthy pregnant population. Comfortable walking velocity was established. Amplitudes of pelvic and thoracic rotations were calculated. Their coordination was characterized by the relative Fourier phase and its standard deviation. Comfortable walking velocity was significantly reduced. The amplitudes of pelvic and thoracic rotations were somewhat reduced, with significantly smaller intra-individual standard deviations. The pelvis-thorax relative Fourier phase was also somewhat smaller; its intra-individual standard deviation was negatively correlated with week of pregnancy, and significantly lower at velocities ≥ 1.06 m/s. The general pattern of gait kinematics in pregnant women is very similar to that of nulligravidae. Still, it appears that pregnant women experience difficulties in realizing the more anti-phase pelvis-thorax coordination that is required at higher walking velocities. The present study shows that gait in healthy pregnancy is remarkably normal, but some differences in pelvis-thorax coordination were detected. In healthy pregnancy, anti-phase pelvis-thorax coordination appears difficult, but less so than in pregnancy-related pelvic girdle pain. Better understanding of gait in healthy pregnancy may provide insight into the gait problems of women with pregnancy-related pelvic girdle pain. Copyright 2004 Elsevier Ltd.

  13. Evaluation of Two New Indices of Blood Pressure Variability Using Postural Change in Older Fallers.

    PubMed

    Goh, Choon-Hian; Ng, Siew-Cheok; Kamaruzzaman, Shahrul B; Chin, Ai-Vyrn; Poi, Philip J H; Chee, Kok Han; Imran, Z Abidin; Tan, Maw Pin

    2016-05-01

    To evaluate the utility of blood pressure variability (BPV) calculated using previously published and newly introduced indices, using the variables falls and age as comparators. While postural hypotension has long been considered a risk factor for falls, there is currently no documented evidence on the relationship between BPV and falls. A case-controlled study involving 25 fallers and 25 nonfallers was conducted. Systolic (SBPV) and diastolic blood pressure variability (DBPV) were assessed using 5 indices: standard deviation (SD), standard deviation of the most stable continuous 120 beats (staSD), average real variability (ARV), root mean square of real variability (RMSRV), and standard deviation of real variability (SDRV). Continuous beat-to-beat blood pressure was recorded during 10 minutes' supine rest and 3 minutes' standing. Standing SBPV was significantly higher than supine SBPV using 4 indices in both groups. The standing-to-supine BPV ratio (SSR) was then computed for each subject (staSD, ARV, RMSRV, and SDRV). The standing-to-supine ratio for SBPV was significantly higher among fallers compared to nonfallers using RMSRV and SDRV (P = 0.034 and P = 0.025). Using linear discriminant analysis (LDA), 3 indices (ARV, RMSRV, and SDRV) of SSR SBPV provided accuracies of 61.6%, 61.2%, and 60.0% for the prediction of falls, which is comparable with the timed up and go test (TUG), 64.4%. This study suggests that SSR SBPV using RMSRV and SDRV is a potential predictor of falls among older patients and deserves further evaluation in larger prospective studies.
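
    A sketch of the successive-difference indices named in the study; ARV has a standard definition, while the RMSRV and SDRV implementations below are my reading of the names, not the authors' code:

```python
import numpy as np

bp = np.array([122.0, 125.5, 121.8, 127.2, 124.1, 126.0])  # beat-to-beat SBP, mmHg

d = np.diff(bp)
arv = np.abs(d).mean()            # average real variability
rmsrv = np.sqrt((d ** 2).mean())  # root mean square of real variability
sdrv = d.std(ddof=1)              # standard deviation of real variability
print(f"ARV = {arv:.2f}, RMSRV = {rmsrv:.2f}, SDRV = {sdrv:.2f} mmHg")
```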

  14. Site-specific 13C content by quantitative isotopic 13C nuclear magnetic resonance spectrometry: a pilot inter-laboratory study.

    PubMed

    Chaintreau, Alain; Fieber, Wolfgang; Sommer, Horst; Gilbert, Alexis; Yamada, Keita; Yoshida, Naohiro; Pagelot, Alain; Moskau, Detlef; Moreno, Aitor; Schleucher, Jürgen; Reniero, Fabiano; Holland, Margaret; Guillou, Claude; Silvestre, Virginie; Akoka, Serge; Remaud, Gérald S

    2013-07-25

    Isotopic (13)C NMR spectrometry, which is able to measure intra-molecular (13)C composition, is in growing demand because of the new information provided by the (13)C site-specific content of a given molecule. A systematic evaluation of instrumental behaviour is important if isotopic (13)C NMR is to be envisaged as a routine tool. This paper describes the first collaborative study of intra-molecular (13)C composition by NMR. The main goals of the ring test were to establish the intra- and inter-variability of the spectrometer response. Eight instruments with different configurations were retained for the exercise on the basis of a qualification test. Reproducibility at natural abundance of isotopic (13)C NMR was then assessed on vanillin from three different origins associated with specific δ(13)Ci profiles. The standard deviation was, on average, between 0.9 and 1.2‰ for intra-variability. The highest standard deviation for inter-variability was 2.1‰. This is significantly higher than the internal precision but could be considered good with respect to a first ring test of a new analytical method. The standard deviation of δ(13)Ci in vanillin was not homogeneous over the eight carbons, with no trend for either the carbon position or the configuration of the spectrometer. However, since the repeatability for each instrument was satisfactory, correction factors for each carbon in vanillin could be calculated to harmonize the results. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klüter, Sebastian, E-mail: sebastian.klueter@med.uni-heidelberg.de; Schubert, Kai; Lissner, Steffen

    Purpose: The dosimetric verification of treatment plans in helical tomotherapy is usually carried out via verification measurements. In this study, a method for independent dose calculation of tomotherapy treatment plans is presented that uses a conventional treatment planning system with a pencil kernel dose calculation algorithm to generate verification dose distributions based on patient CT data. Methods: A pencil beam algorithm that directly uses measured beam data was configured for dose calculation for a tomotherapy machine. Tomotherapy treatment plans were converted into a format readable by an in-house treatment planning system by assigning each projection to one static treatment field and shifting the calculation isocenter for each field in order to account for the couch movement. The modulation of the fluence for each projection is read out of the delivery sinogram, and with the kernel-based dose calculation, this information can directly be used for dose calculation without the need for decomposition of the sinogram. The sinogram values are only corrected for leaf output and leaf latency. Using the converted treatment plans, dose was recalculated with the independent treatment planning system. Multiple treatment plans, ranging from simple static fields to real patient treatment plans, were calculated using the new approach and either compared to actual measurements or to the 3D dose distribution calculated by the tomotherapy treatment planning system. In addition, dose–volume histograms were calculated for the patient plans. Results: Except for minor deviations at the maximum field size, the pencil beam dose calculation for static beams agreed with measurements in a water tank within 2%/2 mm. A mean deviation from point dose measurements in the cheese phantom of 0.89% ± 0.81% was found for unmodulated helical plans. A mean voxel-based deviation of −0.67% ± 1.11% for all voxels in the respective high dose region (dose values >80%), and a mean local voxel-based deviation of −2.41% ± 0.75% for all voxels with dose values >20%, were found for 11 modulated plans in the cheese phantom. Averaged over nine patient plans, the deviations amounted to −0.14% ± 1.97% (voxels >80%) and −0.95% ± 2.27% (>20%, local deviations). For a lung case, mean voxel-based deviations of more than 4% were found, while for all other patient plans, all mean voxel-based deviations were within ±2.4%. Conclusions: The presented method is suitable for independent dose calculation for helical tomotherapy within the known limitations of the pencil beam algorithm. It can serve as verification of the primary dose calculation and thereby reduce the need for time-consuming measurements. By using the patient anatomy and generating full 3D dose data, and combined with measurements of additional machine parameters, it can substantially contribute to overall patient safety.

  16. Estimation of the neural drive to the muscle from surface electromyograms

    NASA Astrophysics Data System (ADS)

    Hofmann, David

    Muscle force is highly correlated with the standard deviation of the surface electromyogram (sEMG) produced by the active muscle. Correctly estimating this quantity for non-stationary sEMG and understanding its relation to neural drive and muscle force is of paramount importance. The single constituents of the sEMG are called motor unit action potentials, whose biphasic waveforms can interfere (termed amplitude cancellation), potentially affecting the standard deviation (Keenan et al. 2005). However, when certain conditions are met, the Campbell-Hardy theorem suggests that amplitude cancellation does not affect the standard deviation. By simulation of the sEMG, we verify the applicability of this theorem to myoelectric signals and investigate deviations from its conditions to obtain a more realistic setting. We find no difference in estimated standard deviation with and without interference, standing in stark contrast to previous results (Keenan et al. 2008, Farina et al. 2010). Furthermore, since the theorem provides us with the functional relationship between standard deviation and neural drive, we conclude that complex methods based on high density electrode arrays and blind source separation might not bear substantial advantages for neural drive estimation (Farina and Holobar 2016). Funded by NIH Grant Number 1 R01 EB022872 and NSF Grant Number 1208126.
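
    A toy simulation in the spirit of the Campbell-Hardy argument (not the study's simulator; pulse shape, rate, and durations are arbitrary assumptions): the SD of a shot-noise superposition of biphasic pulses matches the rate-and-pulse-energy prediction even though pulses overlap and cancel.

```python
import numpy as np

rng = np.random.default_rng(7)
fs, dur, rate = 10_000, 5.0, 200.0               # Hz, seconds, pulses per second
t = np.arange(0, 0.01, 1 / fs)
pulse = np.sin(2 * np.pi * 300 * t) * np.exp(-t / 0.003)  # crude biphasic MUAP

n = rng.poisson(rate * dur)
signal = np.zeros(int(fs * dur) + len(pulse))
for k in rng.integers(0, int(fs * dur), n):      # random firing times
    signal[k:k + len(pulse)] += pulse            # overlap/cancellation allowed

# Campbell's theorem: variance = rate * integral of the squared pulse shape.
campbell_sd = np.sqrt(rate * np.trapz(pulse ** 2, dx=1 / fs))
print(f"empirical SD = {signal.std():.4f}, Campbell prediction = {campbell_sd:.4f}")
```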

  17. THE DIFFUSION LENGTH OF THERMAL NEUTRONS IN PORTLAND CONCRETE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dugdale, R.A.; Healy, E.

    1957-10-01

    A measurement of the diffusion length of thermal neutrons in Portland concrete, originally made by Salmon two years previously, has been repeated. An apparent decrease from 7.04 cm to 6.61 cm has occurred. This change, which is only four times the standard deviation of the result, could be due to a small increase in water content. In assessing the amount required, a discrepancy between the calculated and measured diffusion length was found. Possible explanations of the discrepancy are discussed. (auth)

  18. Local fluctuations of ozone from 16 km to 45 km deduced from in situ vertical ozone profile

    NASA Technical Reports Server (NTRS)

    Moreau, G.; Robert, C.

    1994-01-01

    A vertical ozone profile obtained by an in situ ozone sonde from 16 km to 45 km has allowed observation of local ozone concentration variations. These variations can be observed thanks to a fast measurement system based on a UV absorption KrF excimer laser beam in a multipass cell. The standard deviation of ozone about the mean is derived as a function of altitude. Ozone variations or fluctuations are correlated with the different dynamic zones of the stratosphere.

  19. Trends in New U.S. Marine Corps Accessions During the Recent Conflicts in Iraq and Afghanistan

    DTIC Science & Technology

    2014-01-01

    …modest changes over the study period. Favorable trends included recent (2009-2010) improvements in body mass index and physical activity levels… height, body mass index (BMI) in kg/m^2 was calculated. Frequency of physical activity before service entry was assessed from self-report. Initial run… Test; BMI, body mass index; mph, miles per hour; SD, standard deviation. "Numbers (n) may not add up to 131,961 because of missing self-reported data for…

  20. Comparison of a novel fixation device with standard suturing methods for spinal cord stimulators.

    PubMed

    Bowman, Richard G; Caraway, David; Bentley, Ishmael

    2013-01-01

    Spinal cord stimulation is a well-established treatment for chronic neuropathic pain of the trunk or limbs. Currently, the standard method of fixation is to affix the leads of the neuromodulation device to soft tissue, fascia, or ligament by manually tying general sutures. A novel semiautomated device is proposed that may be advantageous over the current standard. Comparison testing in an excised caprine spine and a simulated benchtop model was performed. Three tests were performed: 1) perpendicular pull from fascia of the caprine spine; 2) axial pull from fascia of the caprine spine; and 3) axial pull from Mylar film. Six samples of each configuration were tested for each scenario. Standard 2-0 Ethibond was compared with a novel semiautomated device (Anulex fiXate). Upon completion of testing, statistical analysis was performed for each scenario. For perpendicular pull in the caprine spine, the failure load for standard suture was 8.95 lbs with a standard deviation of 1.39, whereas for fiXate the load was 15.93 lbs with a standard deviation of 2.09. For axial pull in the caprine spine, the failure load for standard suture was 6.79 lbs with a standard deviation of 1.55, whereas for fiXate the load was 12.31 lbs with a standard deviation of 4.26. For axial pull in Mylar film, the failure load for standard suture was 10.87 lbs with a standard deviation of 1.56, whereas for fiXate the load was 19.54 lbs with a standard deviation of 2.24. These data suggest a novel semiautomated device offers a method of fixation that may be utilized in lieu of standard suturing methods as a means of securing neuromodulation devices. The data suggest the novel semiautomated device may in fact provide more secure fixation than standard suturing methods. © 2012 International Neuromodulation Society.

  1. Deviation from intention to treat analysis in randomised trials and treatment effect estimates: meta-epidemiological study.

    PubMed

    Abraha, Iosief; Cherubini, Antonio; Cozzolino, Francesco; De Florio, Rita; Luchetta, Maria Laura; Rimland, Joseph M; Folletti, Ilenia; Marchesi, Mauro; Germani, Antonella; Orso, Massimiliano; Eusebi, Paolo; Montedori, Alessandro

    2015-05-27

    To examine whether deviation from the standard intention to treat analysis has an influence on treatment effect estimates of randomised trials. Meta-epidemiological study. Medline, via PubMed, searched between 2006 and 2010; 43 systematic reviews of interventions and 310 randomised trials were included. From each year searched, random selection of 5% of intervention reviews with a meta-analysis that included at least one trial that deviated from the standard intention to treat approach. Basic characteristics of the systematic reviews and randomised trials were extracted. Information on the reporting of intention to treat analysis, outcome data, risk of bias items, post-randomisation exclusions, and funding were extracted from each trial. Trials were classified as: ITT (reporting the standard intention to treat approach), mITT (reporting a deviation from the standard approach), and no ITT (reporting no approach). Within each meta-analysis, treatment effects were compared between mITT and ITT trials, and between mITT and no ITT trials. The ratio of odds ratios was calculated (value <1 indicated larger treatment effects in mITT trials than in other trial categories). 50 meta-analyses and 322 comparisons of randomised trials (from 84 ITT trials, 118 mITT trials, and 108 no ITT trials; 12 trials contributed twice to the analysis) were examined. Compared with ITT trials, mITT trials showed a larger intervention effect (pooled ratio of odds ratios 0.83 (95% confidence interval 0.71 to 0.96), P=0.01; between meta-analyses variance τ(2)=0.13). Adjustments for sample size, type of centre, funding, items of risk of bias, post-randomisation exclusions, and variance of log odds ratio yielded consistent results (0.80 (0.69 to 0.94), P=0.005; τ(2)=0.08). After exclusion of five influential studies, results remained consistent (0.85 (0.75 to 0.98); τ(2)=0.08). The comparison between mITT trials and no ITT trials showed no statistical difference between the two groups (adjusted ratio of odds ratios 0.92 (0.70 to 1.23); τ(2)=0.57). Trials that deviated from the intention to treat analysis showed larger intervention effects than trials that reported the standard approach. Where an intention to treat analysis is impossible to perform, authors should clearly report who is included in the analysis and attempt to perform multiple imputations. © Abraha et al 2015.

  2. Heart rate variability analysed by Poincaré plot in patients with metabolic syndrome.

    PubMed

    Kubičková, Alena; Kozumplík, Jiří; Nováková, Zuzana; Plachý, Martin; Jurák, Pavel; Lipoldová, Jolana

    2016-01-01

    The SD1 and SD2 indexes (standard deviations in two orthogonal directions of the Poincaré plot) carry similar information to the spectral power of the high and low frequency bands, but have the advantage of easier calculation and less dependence on stationarity. ECG signals from metabolic syndrome (MetS) and control group patients during a tilt table test under controlled breathing (20 breaths/minute) were obtained. SD1, SD2, SDRR (standard deviation of RR intervals), and RMSSD (root mean square of successive differences of RR intervals) were evaluated for 31 control group and 33 MetS subjects. Statistically significantly lower values were observed in MetS patients in the supine position (SD1: p=0.03, SD2: p=0.002, SDRR: p=0.006, RMSSD: p=0.01) and during tilt (SD2: p=0.004, SDRR: p=0.007). SD1 and SD2, combining the advantages of time- and frequency-domain methods, successfully distinguish between MetS and control subjects. Copyright © 2016 Elsevier Inc. All rights reserved.
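
    The standard relations for these Poincaré-plot indices, sketched on a short hypothetical RR series: SD1 reflects short-term (beat-to-beat) variability and SD2 long-term variability.

```python
import numpy as np

rr = np.array([812.0, 798.0, 825.0, 840.0, 810.0, 795.0, 830.0])  # RR intervals, ms

sdrr = rr.std(ddof=1)                    # SDRR
sdsd = np.diff(rr).std(ddof=1)           # SD of successive differences
sd1 = np.sqrt(0.5) * sdsd                # short-term variability
sd2 = np.sqrt(max(2 * sdrr**2 - 0.5 * sdsd**2, 0.0))  # long-term variability
rmssd = np.sqrt((np.diff(rr) ** 2).mean())
print(f"SD1 = {sd1:.1f}, SD2 = {sd2:.1f}, SDRR = {sdrr:.1f}, RMSSD = {rmssd:.1f} ms")
```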

  3. Blind guidance system based on laser triangulation

    NASA Astrophysics Data System (ADS)

    Wu, Jih-Huah; Wang, Jinner-Der; Fang, Wei; Lee, Yun-Parn; Shan, Yi-Chia; Kao, Hai-Ko; Ma, Shih-Hsin; Jiang, Joe-Air

    2012-05-01

    We propose a new guidance system for the blind. An optical triangulation method is used in the system. The main components of the proposed system comprise a notebook computer, a camera, and two laser modules. The track image of the light beam on the ground or on the object is captured by the camera, and the image is then sent to the notebook computer for further processing and analysis. Using a developed signal-processing algorithm, our system can determine the object width and the distance between the object and the blind person through calculation of the light line positions on the image. A series of feasibility tests of the developed blind guidance system was conducted. The experimental results show that the distance between the test object and the user can be measured with a standard deviation of less than 8.5% within the range of 40 to 130 cm, while the test object width can be measured with a standard deviation of less than 4.5% within the same range. The application potential of the designed system for blind guidance can be expected.

  4. BIG BANG NUCLEOSYNTHESIS WITH A NON-MAXWELLIAN DISTRIBUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertulani, C. A.; Fuqua, J.; Hussein, M. S.

    The abundances of light elements based on the big bang nucleosynthesis model are calculated using the Tsallis non-extensive statistics. The impact of the variation of the non-extensive parameter q from the unity value is compared to observations and to the abundance yields from the standard big bang model. We find large differences between the reaction rates and the abundance of light elements calculated with the extensive and the non-extensive statistics. We found that the observations are consistent with a non-extensive parameter q = 1 (+0.05/-0.12), indicating that a large deviation from the Boltzmann-Gibbs statistics (q = 1) is highly unlikely.

  5. Photographic positions for the first eight satellites of Saturn

    NASA Astrophysics Data System (ADS)

    Veiga, C. H.; Vieira Martins, R.

    1999-10-01

    Astrometric positions of the first eight Saturnian satellites obtained from 138 photographic plates taken on 30 nights in the years 1982 to 1988 are presented. All positions are compared with those calculated by the theory TASS1.7 (Vienne & Duriez 1998). The observed minus calculated residuals give rise to standard deviations smaller than 0.3". Based on observations made at Laboratório Nacional de Astrofísica/CNPq/MCT-Itajubá-Brazil. Table 2 is only available at the CDS via anonymous ftp to (130.79.128.5) cdsarc.u-strasbg.fr or via http://cdsweb.u-strasbg.fr/Abstract.html

  6. Computer Programs for the Semantic Differential: Further Modifications.

    ERIC Educational Resources Information Center

    Lawson, Edwin D.; And Others

    The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…

  7. Determining a one-tailed upper limit for future sample relative reproducibility standard deviations.

    PubMed

    McClure, Foster D; Lee, Jung K

    2006-01-01

    A formula was developed to determine a one-tailed 100p% upper limit for future sample percent relative reproducibility standard deviations (RSD_R,% = 100·s_R/y), where s_R is the sample reproducibility standard deviation, defined as the square root of the sum of the sample repeatability variance (s_r^2) and the sample laboratory-to-laboratory variance (s_L^2), i.e., s_R = sqrt(s_r^2 + s_L^2), and y is the sample mean. The future RSD_R,% is expected to arise from a population of potential RSD_R,% values whose true mean is zeta_R,% = 100·sigma_R/mu, where sigma_R and mu are the population reproducibility standard deviation and mean, respectively.
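
    A one-line numerical restatement of the definitions above (names illustrative):

    ```python
    import math

    def rsd_r_percent(s_r, s_L, y_mean):
        """Percent relative reproducibility standard deviation,
        RSD_R,% = 100 * s_R / y, with s_R = sqrt(s_r**2 + s_L**2)."""
        return 100.0 * math.sqrt(s_r ** 2 + s_L ** 2) / y_mean
    ```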

  8. Quantitative Evaluation for Differentiating Malignant and Benign Thyroid Nodules Using Histogram Analysis of Grayscale Sonograms.

    PubMed

    Nam, Se Jin; Yoo, Jaeheung; Lee, Hye Sun; Kim, Eun-Kyung; Moon, Hee Jung; Yoon, Jung Hyun; Kwak, Jin Young

    2016-04-01

    To evaluate the diagnostic value of histogram analysis using grayscale sonograms for differentiation of malignant and benign thyroid nodules. From July 2013 through October 2013, 579 nodules in 563 patients who had undergone ultrasound-guided fine-needle aspiration were included. For the grayscale histogram analysis, pixel echogenicity values in regions of interest were measured as 0 to 255 (0, black; 255, white) with in-house software. Five parameters (mean, skewness, kurtosis, standard deviation, and entropy) were obtained for each thyroid nodule. With principal component analysis, an index was derived. Diagnostic performance rates for the 5 histogram parameters and the principal component analysis index were calculated. A total of 563 patients were included in the study (mean age ± SD, 50.3 ± 12.3 years; range, 15-79 years). Of the 579 nodules, 431 were benign, and 148 were malignant. Among the 5 parameters and the principal component analysis index, the standard deviation (75.546 ± 14.153 versus 62.761 ± 16.01; P < .001), kurtosis (3.898 ± 2.652 versus 6.251 ± 9.102; P < .001), entropy (0.16 ± 0.135 versus 0.239 ± 0.185; P < .001), and principal component analysis index (-0.386 ± 0.774 versus 0.134 ± 0.889; P < .001) were significantly different between the malignant and benign nodules. With the calculated cutoff values, the areas under the curve were 0.681 (95% confidence interval, 0.643-0.721) for standard deviation, 0.661 (0.620-0.703) for principal component analysis index, 0.651 (0.607-0.691) for kurtosis, 0.638 (0.596-0.681) for entropy, and 0.606 (0.563-0.647) for skewness. The subjective analysis of grayscale sonograms by radiologists alone showed an area under the curve of 0.861 (0.833-0.888). Grayscale histogram analysis was feasible for differentiating malignant and benign thyroid nodules but did not show better diagnostic performance than subjective analysis performed by radiologists. Further technical advances will be needed to objectify interpretations of thyroid grayscale sonograms. © 2016 by the American Institute of Ultrasound in Medicine.
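
    The five first-order parameters named above can be reproduced with standard library calls; a sketch follows, assuming 8-bit pixel values (the in-house software and the PCA index construction are not reproduced, and entropy normalization conventions vary).

    ```python
    import numpy as np
    from scipy import stats

    def histogram_features(pixels):
        """First-order histogram features of grayscale pixel values (0-255)."""
        px = np.asarray(pixels, dtype=float).ravel()
        hist, _ = np.histogram(px, bins=256, range=(0, 256), density=True)
        hist = hist[hist > 0]
        return {
            "mean": px.mean(),
            "std": px.std(ddof=1),
            "skewness": stats.skew(px),
            "kurtosis": stats.kurtosis(px),            # excess kurtosis; add 3 for Pearson
            "entropy": -(hist * np.log2(hist)).sum(),  # Shannon entropy of the histogram
        }
    ```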

  9. Uncertainty Analysis of Decomposing Polyurethane Foam

    NASA Technical Reports Server (NTRS)

    Hobbs, Michael L.; Romero, Vicente J.

    2000-01-01

    Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
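
    The mean value (MV) method described above amounts to first-order uncertainty propagation with numerically estimated derivatives; a generic sketch (function and step-size choices illustrative) shows where the sensitivity to numerical noise enters.

    ```python
    import numpy as np

    def mean_value_uncertainty(f, x0, sigmas, rel_step=1e-3):
        """First-order (mean value) propagation:
        sigma_y^2 ~= sum_i (df/dx_i * sigma_i)^2.

        Derivatives are central finite differences; the step size is where
        numerical noise enters, which is the sensitivity the abstract
        describes. f stands in for the response (e.g., front velocity)."""
        x0 = np.asarray(x0, dtype=float)
        var = 0.0
        for i, s in enumerate(sigmas):
            h = rel_step * max(abs(x0[i]), 1.0)
            xp, xm = x0.copy(), x0.copy()
            xp[i] += h
            xm[i] -= h
            var += ((f(xp) - f(xm)) / (2.0 * h) * s) ** 2
        return float(np.sqrt(var))
    ```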

  10. Calculation of the standard partial molal thermodynamic properties and dissociation constants of aqueous HCl⁰ and HBr⁰ at temperatures to 1000 °C and pressures to 5 kbar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokrovskii, V.A.

    1999-04-01

    Dissociation constants of the aqueous ion pairs HCl⁰ and HBr⁰ derived in the literature from vapor pressure and supercritical conductance measurements (Quist and Marshall, 1968b; Frantz and Marshall, 1984) were used to calculate the standard partial molal thermodynamic properties of the species at 25 °C and 1 bar. Regression of the data with the aid of the revised Helgeson-Kirkham-Flowers equations of state (Helgeson et al., 1981; Tanger and Helgeson, 1988; Shock et al., 1989) resulted in a set of equations-of-state parameters that permits accurate calculation of the thermodynamic properties of the species at high temperatures and pressures. These properties and parameters reproduce generally within 0.1 log unit (with an observed maximum deviation of 0.23 log unit) the log K values for HBr⁰ and HCl⁰ given by Quist and Marshall (1968b) and Frantz and Marshall (1984), respectively, at temperatures to 800 °C and pressures to 5 kbar.

  11. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow alignment of retention times from different injections. Five interpolation methods were investigated: linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis (PARAFAC) to determine the relative area of each peak in each injection. A calibration curve was generated for the simulated data set, and the standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique gave the lowest standard error of prediction and average relative standard deviation for the simulated data. Upon applying the interpolation techniques to the experimental data, however, most of the methods produced relative peak areas that were not statistically different from one another, although all improved on the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
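
    Most of the compared interpolants are available directly in SciPy; a compact sketch on an idealized Gaussian peak (synthetic data, not the paper's chromatograms) illustrates three of the five methods.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline, PchipInterpolator
    from scipy.optimize import curve_fit

    t = np.linspace(0.0, 10.0, 11)             # coarse 1st-dimension sampling
    y = np.exp(-0.5 * ((t - 5.0) / 1.2) ** 2)  # idealized chromatographic peak
    t_fine = np.linspace(0.0, 10.0, 201)       # target grid for alignment

    y_spline = CubicSpline(t, y)(t_fine)       # cubic spline
    y_pchip = PchipInterpolator(t, y)(t_fine)  # piecewise cubic Hermite (PCHIP)

    gauss = lambda x, a, mu, s: a * np.exp(-0.5 * ((x - mu) / s) ** 2)
    popt, _ = curve_fit(gauss, t, y, p0=[1.0, 5.0, 1.0])
    y_gauss = gauss(t_fine, *popt)             # Gaussian-fitting interpolation
    ```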

  12. Reliability and Repetition Effect of the Center of Pressure and Kinematics Parameters That Characterize Trunk Postural Control During Unstable Sitting Test.

    PubMed

    Barbado, David; Moreside, Janice; Vera-Garcia, Francisco J

    2017-03-01

    Although unstable seat methodology has been used to assess trunk postural control, the reliability of the variables that characterize it remains unclear. To analyze the reliability and learning effect of center of pressure (COP) and kinematic parameters that characterize trunk postural control performance in unstable seating. The relationships between kinematic and COP parameters were also explored. Test-retest reliability design. Biomechanics laboratory setting. Twenty-three healthy male subjects. Participants volunteered to perform 3 sessions at 1-week intervals, each consisting of five 70-second balancing trials. A force platform and a motion capture system were used to measure COP and pelvis, thorax, and spine displacements. Reliability was assessed through the standard error of measurement (SEM) and intraclass correlation coefficients (ICC(2,1)) using 3 methods: (1) comparing the last trial score of each day; (2) comparing the best trial score of each day; and (3) calculating the average of the three last trial scores of each day. Standard deviation and mean velocity were calculated to assess balance performance. Although analyses of variance showed some differences in balance performance between days, these differences were not significant between days 2 and 3. The best result and average methods showed the greatest reliability. Mean velocity of the COP showed high reliability (0.71 < ICC < 0.86; 10.3 < SEM < 13.0), whereas standard deviation only showed low to moderate reliability (0.37 < ICC < 0.61; 14.5 < SEM < 23.0). Regarding the kinematic variables, only pelvis displacement mean velocity achieved high reliability using the average method (0.62 < ICC < 0.83; 18.8 < SEM < 23.1). Correlations between COP and kinematics were high only for mean velocity (0.45

  13. SU-E-J-32: Calypso(R) and Laser-Based Localization Systems Comparison for Left-Sided Breast Cancer Patients Using Deep Inspiration Breath Hold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, S; Kaurin, D; Sweeney, L

    2014-06-01

    Purpose: Our institution uses a manual laser-based system for primary localization and verification during radiation treatment of left-sided breast cancer patients using deep inspiration breath hold (DIBH). This primary system was compared with sternum-placed Calypso(R) beacons (Varian Medical Systems, CA). Only intact breast patients are considered for this analysis. Methods: During computed tomography (CT) simulation, patients have a BB and Calypso(R) surface beacons positioned sternally and marked for free-breathing and DIBH CTs. During dosimetry planning, the BB longitudinal displacement between the free-breathing and DIBH CTs determines the laser mark (BH mark) location. Calypso(R) beacon locations from the DIBH CT are entered at the Tracking Station. During Linac simulation and treatment, patients inhale until the cross-hair and/or lasers coincide with the BH mark, which can be seen using our high quality cameras (Pelco, CA). Daily Calypso(R) displacement values (difference from the DIBH-CT-based plan) are recorded. The displacement mean and standard deviation were calculated for each patient (77 patients, 1845 sessions). An aggregate mean and standard deviation was calculated, weighted by the number of patient fractions. Some patients were shifted based on MV ports; a second data set was calculated with the Calypso(R) values corrected by these shifts. Results: Mean displacement values indicate agreement within 1 ± 3 mm, with improvement for the shifted data (Table). Conclusion: Both the unshifted and shifted data sets show the Calypso(R) system coincides with the laser system within 1 ± 3 mm, demonstrating that either localization/verification system will result in similar clinical outcomes. Displacement value uncertainty is unilaterally reduced when shifts are taken into account.

  14. Validation of the Oncentra Brachy Advanced Collapsed cone Engine for a commercial (192)Ir source using heterogeneous geometries.

    PubMed

    Ma, Yunzhi; Lacroix, Fréderic; Lavallée, Marie-Claude; Beaulieu, Luc

    2015-01-01

    To validate the Advanced Collapsed cone Engine (ACE) dose calculation engine of Oncentra Brachy (OcB) treatment planning system using an (192)Ir source. Two levels of validation were performed, conformant to the model-based dose calculation algorithm commissioning guidelines of American Association of Physicists in Medicine TG-186 report. Level 1 uses all-water phantoms, and the validation is against TG-43 methodology. Level 2 uses real-patient cases, and the validation is against Monte Carlo (MC) simulations. For each case, the ACE and TG-43 calculations were performed in the OcB treatment planning system. ALGEBRA MC system was used to perform MC simulations. In Level 1, the ray effect depends on both accuracy mode and the number of dwell positions. The volume fraction with dose error ≥2% quickly reduces from 23% (13%) for a single dwell to 3% (2%) for eight dwell positions in the standard (high) accuracy mode. In Level 2, the 10% and higher isodose lines were observed overlapping between ACE (both standard and high-resolution modes) and MC. Major clinical indices (V100, V150, V200, D90, D50, and D2cc) were investigated and validated by MC. For example, among the Level 2 cases, the maximum deviation in V100 of ACE from MC is 2.75% but up to ~10% for TG-43. Similarly, the maximum deviation in D90 is 0.14 Gy between ACE and MC but up to 0.24 Gy for TG-43. ACE demonstrated good agreement with MC in most clinically relevant regions in the cases tested. Departure from MC is significant for specific situations but limited to low-dose (<10% isodose) regions. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  15. Residual standard deviation: Validation of a new measure of dual-task cost in below-knee prosthesis users.

    PubMed

    Howard, Charla L; Wallace, Chris; Abbas, James; Stokic, Dobrivoje S

    2017-01-01

    We developed and evaluated properties of a new measure of variability in stride length and cadence, termed residual standard deviation (RSD). To calculate RSD, stride length and cadence are regressed against velocity to derive the best-fit line, from which the variability (SD) of the distances between the actual and predicted data points is calculated. We examined the construct, concurrent, and discriminative validity of RSD using a dual-task paradigm in 14 below-knee prosthesis users and 13 age- and education-matched controls. Subjects first walked over an electronic walkway while separately performing a serial subtraction task and a backwards spelling task, and then walked at self-selected slow, normal, and fast speeds, which were used to derive the best-fit line for stride length and cadence against velocity. Construct validity was demonstrated by a significantly greater increase in RSD during dual-task gait in prosthesis users than controls (group-by-condition interaction, stride length p=0.0006, cadence p=0.009). Concurrent validity was established against the coefficient of variation (CV) by moderate-to-high correlations (r=0.50-0.87) between dual-task cost RSD and dual-task cost CV for both stride length and cadence in prosthesis users and controls. Discriminative validity was documented by the ability of dual-task cost calculated from RSD to effectively differentiate prosthesis users from controls (area under the receiver operating characteristic curve, stride length 0.863, p=0.001, cadence 0.808, p=0.007), which was better than the ability of dual-task cost CV (0.692, 0.648, respectively, not significant). These results validate RSD as a new measure of variability in below-knee prosthesis users. Future studies should include larger cohorts and other populations to ascertain its generalizability. Copyright © 2016 Elsevier B.V. All rights reserved.
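
    A minimal sketch of the RSD computation as described: regress the gait parameter on velocity across the slow/normal/fast trials, then take the standard deviation of the residuals (variable names illustrative).

    ```python
    import numpy as np

    def residual_sd(velocity, parameter):
        """Residual standard deviation (RSD) of a gait parameter.

        Fits the best-fit line of the parameter (e.g., stride length or
        cadence) against walking velocity and returns the SD of the
        distances between actual and predicted values."""
        v = np.asarray(velocity, dtype=float)
        p = np.asarray(parameter, dtype=float)
        slope, intercept = np.polyfit(v, p, 1)
        residuals = p - (slope * v + intercept)
        return residuals.std(ddof=1)
    ```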

  16. Odor measurements according to EN 13725: A statistical analysis of variance components

    NASA Astrophysics Data System (ADS)

    Klarenbeek, Johannes V.; Ogink, Nico W. M.; van der Voet, Hilko

    2014-04-01

    In Europe, dynamic olfactometry, as described by the European standard EN 13725, has become the preferred method for evaluating odor emissions emanating from industrial and agricultural sources. Key elements of this standard are the quality criteria for trueness and precision (repeatability). Both are linked to standard values of n-butanol in nitrogen. It is assumed in this standard that whenever a laboratory complies with the overall sensory quality criteria for n-butanol, the quality level is transferable to other, environmental, odors. Although olfactometry is well established, little has been done to investigate interlaboratory variance (reproducibility). Therefore, the objective of this study was to estimate the reproducibility of odor laboratories complying with EN 13725 as well as to investigate the transferability of the n-butanol quality criteria to other odorants. Based upon the statistical analysis of 412 odor measurements on 33 sources, distributed over 10 proficiency tests, it was established that laboratory, panel and panel session are components of variance that differ significantly between n-butanol and other odorants (α = 0.05). This finding does not support the transferability of the quality criteria, as determined on n-butanol, to other odorants and as such is cause for reconsideration of the present single reference odorant as laid down in EN 13725. For non-butanol odorants, the repeatability standard deviation (sr) and reproducibility standard deviation (sR) were calculated to be 0.108 and 0.282, respectively (log base-10). The latter implies that the difference between two consecutive single measurements, performed on the same testing material by two or more laboratories under reproducibility conditions, will not be larger than a factor of 6.3 in 95% of cases. As far as n-butanol odorants are concerned, the present repeatability standard deviation (sr = 0.108) compares favorably to that of EN 13725 (sr = 0.172). It is therefore suggested that the repeatability limit (r), as laid down in EN 13725, can be reduced from r ≤ 0.477 to r ≤ 0.31.
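
    The quoted factor of about 6.3 is consistent with the usual reproducibility-limit convention (R ≈ 2.8·sR, i.e., 1.96·√2·sR) applied on the log10 scale; a two-line check, with 2.8 taken as an assumed convention rather than a value stated in the abstract:

    ```python
    s_R = 0.282                 # reproducibility SD, log10 units (from the study)
    factor = 10 ** (2.8 * s_R)  # about 6.2, in line with the quoted factor of 6.3
    ```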

  17. Determining contrast medium dose and rate on basis of lean body weight: does this strategy improve patient-to-patient uniformity of hepatic enhancement during multi-detector row CT?

    PubMed

    Ho, Lisa M; Nelson, Rendon C; Delong, David M

    2007-05-01

    To prospectively evaluate the use of lean body weight (LBW) as the main determinant of the volume and rate of contrast material administration during multi-detector row computed tomography of the liver. This HIPAA-compliant study had institutional review board approval. All patients gave written informed consent. Four protocols were compared. Standard protocol involved 125 mL of iopamidol injected at 4 mL/sec. Total body weight (TBW) protocol involved 0.7 g iodine per kilogram of TBW. Calculated LBW and measured LBW protocols involved 0.86 g of iodine per kilogram and 0.92 g of iodine per kilogram calculated or measured LBW for men and women, respectively. Injection rate used for the three experimental protocols was determined proportionally on the basis of the calculated volume of contrast material. Postcontrast attenuation measurements during portal venous phase were obtained in liver, portal vein, and aorta for each group and were summed for each patient. Patient-to-patient enhancement variability in same group was measured with Levene test. Two-tailed t test was used to compare the three experimental protocols with the standard protocol. Data analysis was performed in 101 patients (25 or 26 patients per group), including 56 men and 45 women (mean age, 53 years). Average summed attenuation values for standard, TBW, calculated LBW, and measured LBW protocols were 419 HU +/- 50 (standard deviation), 443 HU +/- 51, 433 HU +/- 50, and 426 HU +/- 33, respectively (P = not significant for all). Levene test results for summed attenuation data for standard, TBW, calculated LBW, and measured LBW protocols were 40 +/- 29, 38 +/- 33 (P = .83), 35 +/- 35 (P = .56), and 26 +/- 19 (P = .05), respectively. By excluding highly variable but poorly perfused adipose tissue from calculation of contrast medium dose, the measured LBW protocol may lessen patient-to-patient enhancement variability while maintaining satisfactory hepatic and vascular enhancement.

  18. Dosimetric verification of lung cancer treatment using the CBCTs estimated from limited-angle on-board projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, You; Yin, Fang-Fang; Ren, Lei, E-mail: lei.ren@duke.edu

    2015-08-15

    Purpose: Lung cancer treatment is susceptible to treatment errors caused by interfractional anatomical and respirational variations of the patient. On-board treatment dose verification is especially critical for lung stereotactic body radiation therapy due to its high fractional dose. This study investigates the feasibility of using cone-beam CT (CBCT) images estimated by a motion modeling and free-form deformation (MM-FD) technique for on-board dose verification. Methods: Both digital and physical phantom studies were performed. Various interfractional variations featuring patient motion pattern change, tumor size change, and tumor average position change were simulated from planning CT to on-board images. The doses calculated on the planning CT (planned doses), on the on-board CBCT estimated by MM-FD (MM-FD doses), and on the on-board CBCT reconstructed by the conventional Feldkamp-Davis-Kress (FDK) algorithm (FDK doses) were compared to the on-board dose calculated on the "gold-standard" on-board images (gold-standard doses). The absolute deviations of minimum dose (ΔD_min), maximum dose (ΔD_max), and mean dose (ΔD_mean), and the absolute deviations of prescription dose coverage (ΔV_100%), were evaluated for the planning target volume (PTV). In addition, 4D on-board treatment dose accumulations were performed using 4D-CBCT images estimated by MM-FD in the physical phantom study. The accumulated doses were compared to those measured using optically stimulated luminescence (OSL) detectors and radiochromic films. Results: Compared with the planned doses and the FDK doses, the MM-FD doses matched much better with the gold-standard doses. For the digital phantom study, the average (± standard deviation) ΔD_min, ΔD_max, ΔD_mean, and ΔV_100% (values normalized by the prescription dose or the total PTV) between the planned and the gold-standard PTV doses were 32.9% (±28.6%), 3.0% (±2.9%), 3.8% (±4.0%), and 15.4% (±12.4%), respectively. The corresponding values of the FDK PTV doses were 1.6% (±1.9%), 1.2% (±0.6%), 2.2% (±0.8%), and 17.4% (±15.3%), respectively. In contrast, the corresponding values of the MM-FD PTV doses were 0.3% (±0.2%), 0.9% (±0.6%), 0.6% (±0.4%), and 1.0% (±0.8%), respectively. Similarly, for the physical phantom study, the average ΔD_min, ΔD_max, ΔD_mean, and ΔV_100% of the planned PTV doses were 38.1% (±30.8%), 3.5% (±5.1%), 3.0% (±2.6%), and 8.8% (±8.0%), respectively. The corresponding values of the FDK PTV doses were 5.8% (±4.5%), 1.6% (±1.6%), 2.0% (±0.9%), and 9.3% (±10.5%), respectively. In contrast, the corresponding values of the MM-FD PTV doses were 0.4% (±0.8%), 0.8% (±1.0%), 0.5% (±0.4%), and 0.8% (±0.8%), respectively. For the 4D dose accumulation study, the average (± standard deviation) absolute dose deviation (normalized by local doses) between the accumulated doses and the OSL-measured doses was 3.3% (±2.7%). The average gamma index (3%/3 mm) passing rate between the accumulated doses and the radiochromic film-measured doses was 94.5% (±2.5%). Conclusions: MM-FD-estimated 4D-CBCT enables accurate on-board dose calculation and accumulation for lung radiation therapy. It can potentially be valuable for treatment quality assessment and adaptive radiation therapy.

  19. Quantum entanglement with Freedman's inequality

    NASA Astrophysics Data System (ADS)

    Brody, Jed; Selton, Charlotte

    2018-06-01

    The assumption of local realism imposes constraints, such as Bell inequalities, on quantities obtained from measurements. In recent years, various tests of local realism have gained popularity in undergraduate laboratories, giving students the exciting opportunity to experimentally contradict this philosophical assumption. The standard test of the CHSH (Clauser-Horne-Shimony-Holt) Bell inequality requires 16 measurements, whereas a test of Freedman's inequality requires only three measurements. The calculations required to test Freedman's inequality are correspondingly simpler and the theory is less abstract. We suggest that students may benefit from testing Freedman's inequality before proceeding to the CHSH inequality and other more complicated experiments. Our measured data violated Freedman's inequality by more than six standard deviations.
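
    For context, Freedman's version of the Bell inequality is usually written in terms of coincidence rates R(φ) at polarizer offsets of 22.5° and 67.5°, normalized by the rate R0 with polarizers removed (stated here as the commonly quoted form, not verbatim from the paper):

    ```latex
    \delta \;=\; \left|\frac{R(22.5^\circ)}{R_0}-\frac{R(67.5^\circ)}{R_0}\right| - \frac{1}{4} \;\le\; 0 .
    ```

    Local realism requires δ ≤ 0, while ideal quantum mechanics predicts δ = (√2 − 1)/4 ≈ 0.104, which is why only three count rates are needed.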

  20. Patient-specific IMRT verification using independent fluence-based dose calculation software: experimental benchmarking and initial clinical experience.

    PubMed

    Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Jörgen; Nyholm, Tufve; Ahnesjö, Anders; Karlsson, Mikael

    2007-08-21

    Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm(3) ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, the tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 ± 1.2% and 0.5 ± 1.1% (1 S.D.). The dose deviations between MUV and TPS slightly depended on the distance from the isocentre position. For individual intensity-modulated beams (total 367), an average deviation of 1.1 ± 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable. The time needed for an independent calculation compares very favourably with the net time for an experimental approach. The physical effects modelled in the dose calculation software MUV allow accurate dose calculations in individual verification points. Independent calculations may be used to replace experimental dose verification once the IMRT programme is mature.

  1. Packing Fraction of a Two-dimensional Eden Model with Random-Sized Particles

    NASA Astrophysics Data System (ADS)

    Kobayashi, Naoki; Yamazaki, Hiroshi

    2018-01-01

    We have performed a numerical simulation of a two-dimensional Eden model with random-sized particles. In the present model, the particle radii are generated from a Gaussian distribution with mean μ and standard deviation σ. First, we examined the bulk packing fraction of the Eden cluster and investigated the effects of the standard deviation and the total number of particles NT. We show that the bulk packing fraction depends on the number of particles and on the standard deviation. In particular, for the dependence on the standard deviation, we determined the asymptotic value of the bulk packing fraction in the limit of the dimensionless standard deviation. This value is larger than the packing fraction obtained in a previous study of the Eden model with uniform-size particles. Second, we investigated the packing fraction of the entire Eden cluster, including the effect of the interface fluctuation. We find that the entire packing fraction depends on the number of particles but is independent of the standard deviation, in contrast to the bulk packing fraction. In a similar way to the bulk packing fraction, we obtained the asymptotic value of the entire packing fraction in the limit NT → ∞. The obtained value of the entire packing fraction is smaller than the bulk value. This fact suggests that the interface fluctuation of the Eden cluster influences the packing fraction.

  2. Complexities of follicle deviation during selection of a dominant follicle in Bos taurus heifers.

    PubMed

    Ginther, O J; Baldrighi, J M; Siddiqui, M A R; Araujo, E R

    2016-11-01

    Follicle deviation during a follicular wave is a continuation in growth rate of the dominant follicle (F1) and decreased growth rate of the largest subordinate follicle (F2). The reliability of using an F1 of 8.5 mm to represent the beginning of expected deviation for experimental purposes during waves 1 and 2 (n = 26 per wave) was studied daily in heifers. Each wave was subgrouped as follows: standard subgroup (F1 larger than F2 for 2 days preceding deviation and F2 > 7.0 mm on the day of deviation), undersized subgroup (F2 did not attain 7.0 mm by the day of deviation), and switched subgroup (F2 larger than F1 at least once on the 2 days before or on the day of deviation). For each wave, mean differences in diameter between F1 and F2 changed abruptly at expected deviation in the standard subgroup but began 1 day before expected deviation in the undersized and switched subgroups. Concentrations of FSH in the wave-stimulating FSH surge and an increase in LH centered on expected deviation did not differ among subgroups. Results for each wave indicated that (1) expected deviation (F1, 8.5 mm) was a reliable representation of actual deviation in the standard subgroup but not in the undersized and switched subgroups; (2) concentrations of the gonadotropins normalized to expected deviation were similar among the three subgroups, indicating that the day of deviation was related to diameter of F1 and not F2; and (3) defining an expected day of deviation for experimental use should consider both diameter of F1 and the characteristics of deviation. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. An analysis of the readability of patient information and consent forms used in research studies in anaesthesia in Australia and New Zealand.

    PubMed

    Taylor, H E; Bramley, D E P

    2012-11-01

    The provision of written information is a component of the informed consent process for research participants. We conducted a readability analysis to test the hypothesis that the language used in patient information and consent forms in anaesthesia research in Australia and New Zealand does not meet the readability standards or expectations of the Good Clinical Practice Guidelines, the National Health and Medical Research Council in Australia and the Health Research Council of New Zealand. We calculated readability scores for 40 patient information and consent forms using the Simple Measure of Gobbledygook and Flesch-Kincaid formulas. The mean grade level of patient information and consent forms when using the Simple Measure of Gobbledygook and Flesch-Kincaid readability formulas was 12.9 (standard deviation of 0.8, 95% confidence interval 12.6 to 13.1) and 11.9 (standard deviation 1.1, 95% confidence interval 11.6 to 12.3), respectively. This exceeds the average literacy and comprehension of the general population in Australia and New Zealand. Complex language decreases readability and negatively impacts on the informed consent process. Care should be exercised when providing written information to research participants to ensure language and readability is appropriate for the audience.
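
    Both grade-level scores are simple closed-form functions of text counts; the standard formulas are sketched below (counting syllables and polysyllabic words is assumed to be done upstream).

    ```python
    def flesch_kincaid_grade(words, sentences, syllables):
        """Flesch-Kincaid grade level from raw text counts."""
        return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

    def smog_grade(polysyllables, sentences):
        """SMOG grade (McLaughlin's formula); polysyllables counts words
        with three or more syllables."""
        return 1.0430 * (30.0 * polysyllables / sentences) ** 0.5 + 3.1291
    ```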

  4. Determination of cyflumetofen residue in water, soil, and fruits by modified quick, easy, cheap, effective, rugged, and safe method coupled to gas chromatography/tandem mass spectrometry.

    PubMed

    Li, Minmin; Liu, Xingang; Dong, Fengshou; Xu, Jun; Qin, Dongmei; Zheng, Yongquan

    2012-10-01

    A new, highly sensitive, and selective method was developed for the determination of cyflumetofen residue in water, soil, and fruits using gas chromatography-quadrupole mass spectrometry. The target compound was extracted using acetonitrile and then cleaned up using dispersive solid-phase extraction with primary and secondary amine and graphitized carbon black, and optionally by a freezing-out cleanup step. The matrix-matched standards gave satisfactory recoveries and relative standard deviation values in different matrices at three fortification levels (0.05, 0.5, and 1.0 mg kg(-1)). The overall average recoveries of this method in water, soil, and all fruit matrices at the three fortification levels ranged from 76.3 to 101.5%, with relative standard deviations in the range of 1.2-11.8% (n = 5). The calculated limits of detection and quantification were typically below 0.005 and 0.015 μg kg(-1), respectively, much lower than the maximum residue levels established by the Japanese Positive List. This study provides a theoretical basis for China to draw up maximum residue levels and an analytical method for the cyflumetofen acaricide in different fruits. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Precision and Deviation Comparison Between Icesat and Envisat in Typical Ice Gaining and Losing Regions of Antarctica

    NASA Astrophysics Data System (ADS)

    Du, W.; Chen, L.; Xie, H.; Hai, G.; Zhang, S.; Tong, X.

    2017-09-01

    This paper analyzes the precision and deviation of elevations acquired from Envisat and the Ice, Cloud and land Elevation Satellite (ICESat) over typical ice gaining and losing regions, i.e., the Lambert-Amery System (LAS) in East Antarctica and the Amundsen Sea Sector (ASS) in West Antarctica, during the same period from 2003 to 2008. We used the GLA12 dataset of ICESat and Level 2 data of Envisat. Data preprocessing included data filtering, projection transformation and track classification. The slope correction was applied to the Envisat data and the saturation correction to the ICESat data. Crossover analysis was then used to obtain the crossing points of the ICESat tracks, the Envisat tracks and the ICESat-Envisat tracks separately. The two tracks chosen for crossover analysis had to be in the same campaign for ICESat (within 33 days) or the same cycle for Envisat (within 35 days). The standard deviation of a set of elevation residuals at time-coincident crossovers is calculated as the precision of each satellite, while the mean value is calculated as the ICESat-Envisat deviation. Generally, the ICESat laser altimeter achieves better precision than the Envisat radar altimeter. For the Amundsen Sea Sector, the ICESat precision is found to vary from 8.9 cm to 17 cm and the Envisat precision from 0.81 m to 1.57 m. For the LAS area, the ICESat precision is found to vary from 6.7 cm to 14.3 cm and the Envisat precision from 0.46 m to 0.81 m. Comparison between the Envisat and ICESat elevations shows a mean difference of 0.43 ± 7.14 m for the Amundsen Sea Sector and 0.53 ± 1.23 m over the LAS.

  6. 40 CFR 90.708 - Cumulative Sum (CumSum) procedure.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... is 5.0×σ, and is a function of the standard deviation, σ. σ=is the sample standard deviation and is... individual engine. FEL=Family Emission Limit (the standard if no FEL). F=.25×σ. (2) After each test pursuant...

  7. Acoustic Correlates of Compensatory Adjustments to the Glottic and Supraglottic Structures in Patients with Unilateral Vocal Fold Paralysis

    PubMed Central

    2015-01-01

    The goal of this study was to analyse perceptually and acoustically the voices of patients with Unilateral Vocal Fold Paralysis (UVFP) and compare them to the voices of normal subjects. The voices were analysed perceptually with the GRBAS scale and acoustically using the following parameters: mean fundamental frequency (F0), standard deviation of F0, jitter (ppq5), shimmer (apq11), mean harmonics-to-noise ratio (HNR), mean first (F1) and second (F2) formant frequencies, and standard deviations of the F1 and F2 frequencies. Statistically significant differences were found in all of the perceptual parameters. The jitter, shimmer, HNR, standard deviation of F0, and standard deviation of the F2 frequency also differed significantly between groups, for both genders. In the male data, differences were also found in the F1 and F2 frequency values and in the standard deviation of the F1 frequency. This study allowed the documentation of the alterations resulting from UVFP and addressed the exploration of parameters with limited information for this pathology. PMID:26557690

  8. Dynamics of the standard deviations of three wind velocity components from the data of acoustic sounding

    NASA Astrophysics Data System (ADS)

    Krasnenko, N. P.; Kapegesheva, O. F.; Shamanaeva, L. G.

    2017-11-01

    The spatiotemporal dynamics of the standard deviations of three wind velocity components measured with a mini-sodar in the atmospheric boundary layer are analyzed. During the day on September 16 and at night on September 12, the standard deviations of the x- and y-components varied from 0.5 to 4 m/s, and that of the z-component from 0.2 to 1.2 m/s. An analysis of the vertical profiles of the standard deviations of the three wind velocity components over a 6-day measurement period has shown that the increase of σx and σy with altitude is well described by a power-law dependence with an exponent changing from 0.22 to 1.3 depending on the time of day, while σz depends linearly on altitude. The approximation constants have been found and their errors estimated. The established physical regularities and the approximation constants describe the spatiotemporal dynamics of the standard deviations of the three wind velocity components in the atmospheric boundary layer and can be recommended for application in ABL models.
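
    A power-law altitude dependence of the kind reported above is typically obtained from a least-squares fit in log-log coordinates; a generic sketch (names illustrative):

    ```python
    import numpy as np

    def fit_power_law(z, sigma):
        """Fit sigma(z) = a * z**b by linear least squares on log-log
        axes; b is the exponent discussed above."""
        b, log_a = np.polyfit(np.log(z), np.log(sigma), 1)
        return np.exp(log_a), b
    ```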

  9. Vibrational investigation on FT-IR and FT-Raman spectra, IR intensity, Raman activity, peak resemblance, ideal estimation, standard deviation of computed frequencies analyses and electronic structure on 3-methyl-1,2-butadiene using HF and DFT (LSDA/B3LYP/B3PW91) calculations.

    PubMed

    Ramalingam, S; Jayaprakash, A; Mohan, S; Karabacak, M

    2011-11-01

    FT-IR and FT-Raman (4000-100 cm(-1)) spectral measurements of 3-methyl-1,2-butadiene (3M12B) have been attempted in the present work. Ab initio HF and DFT (LSDA/B3LYP/B3PW91) calculations have been performed, giving energies, optimized structures, harmonic vibrational frequencies, IR intensities and Raman activities. Complete vibrational assignments of the observed spectra are made with vibrational frequencies obtained by HF and DFT (LSDA/B3LYP/B3PW91) at the 6-31G(d,p) and 6-311G(d,p) basis sets. The results of the calculations have been used to simulate IR and Raman spectra for the molecule, which showed good agreement with the observed spectra. The potential energy distribution (PED) corresponding to each of the observed frequencies is calculated, which confirms the reliability and precision of the assignment and analysis of the vibrational fundamental modes. The oscillation of the vibrational frequencies of butadiene due to the coupling of the methyl group is also discussed. A study of the electronic properties, such as the HOMO and LUMO energies, was performed by the time-dependent DFT (TD-DFT) approach. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. The thermodynamic properties of the title compound at different temperatures reveal the correlations between standard heat capacities (C), standard entropies (S), and standard enthalpy changes (H). Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.

  10. Random errors in interferometry with the least-squares method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Qi

    2011-01-20

    This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noise is present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one for estimating the standard deviation when only intensity noise is present, and the other for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and the standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships between the random error and the wavelength of the light source, and between the random error and the amplitude of the interference fringe, have also been discussed.

  11. Strategies to Prevent MRSA Transmission in Community-Based Nursing Homes: A Cost Analysis.

    PubMed

    Roghmann, Mary-Claire; Lydecker, Alison; Mody, Lona; Mullins, C Daniel; Onukwugha, Eberechukwu

    2016-08-01

    OBJECTIVE To estimate the costs of 3 MRSA transmission prevention scenarios compared with standard precautions in community-based nursing homes. DESIGN Cost analysis of data collected from a prospective, observational study. SETTING AND PARTICIPANTS Care activity data from 401 residents from 13 nursing homes in 2 states. METHODS Cost components included the quantities of gowns and gloves, time to don and doff gown and gloves, and unit costs. Unit costs were combined with information regarding the type and frequency of care provided over a 28-day observation period. For each scenario, the estimated costs associated with each type of care were summed across all residents to calculate an average cost and standard deviation for the full sample and for subgroups. RESULTS The average cost for standard precautions was $100 (standard deviation [SD], $77) per resident over a 28-day period. If gown and glove use for high-risk care was restricted to those with MRSA colonization or chronic skin breakdown, average costs increased to $137 (SD, $120) and $125 (SD, $109), respectively. If gowns and gloves were used for high-risk care for all residents in addition to standard precautions, the average cost per resident increased substantially to $223 (SD, $127). CONCLUSIONS The use of gowns and gloves for high-risk activities with all residents increased the estimated cost by 123% compared with standard precautions. This increase was ameliorated if specific subsets (eg, those with MRSA colonization or chronic skin breakdown) were targeted for gown and glove use for high-risk activities. Infect Control Hosp Epidemiol 2016;37:962-966.

  12. Constituent quarks and systematic errors in mid-rapidity charged multiplicity (dNch/dη) distributions

    NASA Astrophysics Data System (ADS)

    Tannenbaum, Michael

    2017-01-01

    Although it was demonstrated more than 13 years ago that the increase in midrapidity dNch/dη with increasing centrality of Au+Au collisions at RHIC was linearly proportional to the number of constituent quark participants (or "wounded quarks", QW) in the collision, it was only in the last few years that generating the spatial positions of the three quarks in a nucleon according to the Fourier transform of the measured electric charge form factor of the proton could be used to connect dNch/dη/QW as a function of centrality in p(d)+A and A+A collisions with the same value of dNch/dη/QW determined in p+p collisions. One calculation, which only compared its calculated dNch/dη/QW in p+p at √s_NN = 200 GeV to the least central of 12 centrality-bin measurements in Au+Au by PHENIX, claimed that the p+p value was higher by "about 30%" than the band of measurements versus centrality. However, the clearly quoted systematic errors were ignored: a one-standard-deviation systematic shift would move all 12 Au+Au data points to within 1.3 standard deviations of the p+p value, or, if the statistical and systematic errors are added in quadrature, to a difference of 35 ± 21%. Research supported by U.S. Department of Energy, Contract No. DE-SC0012704.

  13. Intra-individual reaction time variability and all-cause mortality over 17 years: a community-based cohort study.

    PubMed

    Batterham, Philip J; Bunce, David; Mackinnon, Andrew J; Christensen, Helen

    2014-01-01

    Very few studies have examined the association between intra-individual reaction time variability and subsequent mortality, and the ability of simple measures of variability to predict mortality has not been compared with that of more complex measures. In a prospective cohort study, 896 community-based Australian adults aged 70+ were interviewed up to four times from 1990 to 2002, with vital status assessed until June 2007. From this cohort, 770-790 participants were included in Cox proportional hazards regression models of survival. Vital status and time in study were used to conduct survival analyses. The mean reaction time and three measures of intra-individual reaction time variability were calculated separately across 20 trials of simple and choice reaction time tasks. Models were adjusted for a range of demographic, physical health and mental health measures. Greater intra-individual simple reaction time variability, as assessed by the raw standard deviation (raw SD), coefficient of variation (CV) or intra-individual standard deviation (ISD), was strongly associated with an increased hazard of all-cause mortality in adjusted Cox regression models. The mean reaction time had no significant association with mortality. Intra-individual variability in simple reaction time appears to have a robust association with mortality over 17 years. Health professionals such as neuropsychologists may benefit in their detection of neuropathology by supplementing neuropsychiatric testing with the straightforward process of testing simple reaction time and calculating the raw SD or CV.
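
    Of the measures compared, the raw SD and CV are directly computable from one block of trials; a minimal sketch (the ISD additionally requires residualizing group-level trial and practice effects, which is not reproduced here):

    ```python
    import numpy as np

    def rt_variability(rt_ms):
        """Raw SD and coefficient of variation of reaction times (ms)
        across the trials of one task."""
        rt = np.asarray(rt_ms, dtype=float)
        raw_sd = rt.std(ddof=1)
        cv = raw_sd / rt.mean()   # scales out the person's overall speed
        return raw_sd, cv
    ```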

  14. Methodology for the development of normative data for Spanish-speaking pediatric populations.

    PubMed

    Rivera, D; Arango-Lasprilla, J C

    2017-01-01

    To describe the methodology used to calculate reliability and to generate norms for 10 neuropsychological tests for children in Spanish-speaking countries. The study sample consisted of over 4,373 healthy children from nine countries in Latin America (Chile, Cuba, Ecuador, Guatemala, Honduras, Mexico, Paraguay, Peru, and Puerto Rico) and Spain. Inclusion criteria for all countries were age between 6 and 17 years, an Intelligence Quotient of ≥80 on the Test of Non-Verbal Intelligence (TONI-2), and a score of <19 on the Children's Depression Inventory. Participants completed 10 neuropsychological tests. Reliability and norms were calculated for all tests. Test-retest analysis showed excellent or good reliability on all tests (r's > 0.55; p's < 0.001) except M-WCST perseverative errors, whose coefficient magnitude was fair. All scores were normed using multiple linear regressions and the standard deviations of residual values. Age, age squared, sex, and mean level of parental education (MLPE) were included as predictors in the models by country. Non-significant variables (p > 0.05) were removed and the analyses were run again. This is the largest normative study of Spanish-speaking children and adolescents in the world. For the generation of normative data, a method based on linear regression models and the standard deviation of residual values was used. This method allows determination of the specific variables that predict test scores, helps identify and control for collinearity of predictive variables, and generates continuous and more reliable norms than those of traditional methods.
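
    The regression-based norming method described above converts an observed score to a standardized deviation from its demographically predicted value; a sketch with illustrative coefficient names:

    ```python
    def z_score_from_norms(observed, age, sex, mlpe, coefs, residual_sd):
        """Regression-based norming: the expected score is predicted from
        demographics, and the deviation is scaled by the SD of the model
        residuals. Coefficient names are illustrative, not the study's."""
        predicted = (coefs["intercept"]
                     + coefs["age"] * age
                     + coefs["age2"] * age ** 2
                     + coefs["sex"] * sex
                     + coefs["mlpe"] * mlpe)
        return (observed - predicted) / residual_sd
    ```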

  15. Prediction of peak response values of structures with and without TMD subjected to random pedestrian flows

    NASA Astrophysics Data System (ADS)

    Lievens, Klaus; Van Nimmen, Katrien; Lombaert, Geert; De Roeck, Guido; Van den Broeck, Peter

    2016-09-01

    In civil engineering and architecture, the availability of high-strength materials and advanced calculation techniques enables the construction of slender footbridges, which are generally highly sensitive to human-induced excitation. Due to the inherently random character of human-induced walking loads, variability in pedestrian characteristics must be considered in the response simulation. To assess the vibration serviceability of the footbridge, the statistics of the stochastic dynamic response are evaluated by considering the instantaneous peak responses in a time range; a large number of time windows is therefore needed to calculate the mean value and standard deviation of the instantaneous peak values. An alternative method evaluates these statistics from the standard deviation of the response and a characteristic frequency, as proposed in wind engineering applications. In this paper, the accuracy of this method is evaluated for human-induced vibrations. The methods are first compared for a group of pedestrians crossing a lightly damped footbridge; the method using second-order statistics showed only small differences in the instantaneous peak value. Afterwards, a TMD tuned to reduce the peak acceleration to a comfort value was added to the structure, the comparison between the two methods was repeated, and the accuracy was verified. The TMD parameters are found to be adequately tuned, and good agreement between the two methods is found for the estimation of the instantaneous peak response of a strongly damped structure.
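
    The wind-engineering estimate referred to above is typically Davenport's peak-factor formula: the expected instantaneous peak of a Gaussian response is the mean plus g times the standard deviation, with g depending only on the characteristic frequency ν and the window length T. A sketch of the generic formula, not the authors' exact implementation:

    ```python
    import math

    def davenport_peak_factor(nu_hz, duration_s):
        """Expected peak factor g, so that peak ~ mean + g * std for a
        Gaussian process with characteristic (zero-crossing) frequency
        nu over a window of length T."""
        x = math.sqrt(2.0 * math.log(nu_hz * duration_s))
        return x + 0.5772 / x   # 0.5772 is the Euler-Mascheroni constant
    ```

    For example, with ν = 2 Hz and T = 600 s, g ≈ 3.9, so the expected peak sits about 3.9 standard deviations above the mean response.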

  16. SU-F-R-20: Image Texture Features Correlate with Time to Local Failure in Lung SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, M; Abazeed, M; Woody, N

    Purpose: To explore possible correlations between CT image-based texture and histogram features and time-to-local-failure in early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods and Materials: From an IRB-approved lung SBRT registry for patients treated between 2009 and 2013, we selected 48 (20 male, 28 female) patients with local failure. Median patient age was 72.3 ± 10.3 years. Mean time to local failure was 15 ± 7.1 months. Physician-contoured gross tumor volumes (GTV) on the planning CT images were processed, and 3D gray-level co-occurrence matrix (GLCM) based texture and histogram features were calculated in Matlab. Data were exported to R, and a multiple linear regression model was used to examine the relationship between texture features and time-to-local-failure. Results: Multiple linear regression revealed that entropy (p=0.0233, multiple R2=0.60) from the GLCM-based texture analysis and the standard deviation (p=0.0194, multiple R2=0.60) from the histogram-based features were statistically significantly correlated with time-to-local-failure. Conclusion: Image-based texture analysis can be used to predict certain aspects of treatment outcomes of NSCLC patients treated with SBRT. We found that entropy and standard deviation calculated for the GTV on the CT images displayed a statistically significant correlation with time-to-local-failure in lung SBRT patients.
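
    GLCM entropy of the kind used here can be computed in a few lines of NumPy; the sketch below is a 2D, single-offset version (the study computed 3D GLCM features in Matlab), with the quantization level chosen arbitrarily.

    ```python
    import numpy as np

    def glcm_entropy(img, levels=32, offset=(0, 1)):
        """Entropy of a gray-level co-occurrence matrix (2D, one offset)."""
        img = np.asarray(img, dtype=float)
        q = np.floor((img - img.min()) / (np.ptp(img) + 1e-12)
                     * (levels - 1)).astype(int)
        dr, dc = offset                             # assumed non-negative
        a = q[:q.shape[0] - dr, :q.shape[1] - dc]   # reference pixels
        b = q[dr:, dc:]                             # neighbour pixels
        glcm = np.zeros((levels, levels))
        np.add.at(glcm, (a.ravel(), b.ravel()), 1)  # count co-occurring pairs
        p = glcm / glcm.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())
    ```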

  17. Biodiversity mapping in a tropical West African forest with airborne hyperspectral data.

    PubMed

    Vaglio Laurin, Gaia; Cheung-Wai Chan, Jonathan; Chen, Qi; Lindsell, Jeremy A; Coomes, David A; Guerriero, Leila; Del Frate, Fabio; Miglietta, Franco; Valentini, Riccardo

    2014-01-01

    Tropical forests are major repositories of biodiversity, but are fast disappearing as land is converted to agriculture. Decision-makers need to know which of the remaining forests to prioritize for conservation, but the only spatial information on forest biodiversity has, until recently, come from a sparse network of ground-based plots. Here we explore whether airborne hyperspectral imagery can be used to predict the alpha diversity of upper canopy trees in a West African forest. The abundance of tree species were collected from 64 plots (each 1250 m(2) in size) within a Sierra Leonean national park, and Shannon-Wiener biodiversity indices were calculated. An airborne spectrometer measured reflectances of 186 bands in the visible and near-infrared spectral range at 1 m(2) resolution. The standard deviations of these reflectance values and their first-order derivatives were calculated for each plot from the c. 1250 pixels of hyperspectral information within them. Shannon-Wiener indices were then predicted from these plot-based reflectance statistics using a machine-learning algorithm (Random Forest). The regression model fitted the data well (pseudo-R(2) = 84.9%), and we show that standard deviations of green-band reflectances and infra-red region derivatives had the strongest explanatory powers. Our work shows that airborne hyperspectral sensing can be very effective at mapping canopy tree diversity, because its high spatial resolution allows within-plot heterogeneity in reflectance to be characterized, making it an effective tool for monitoring forest biodiversity over large geographic scales.
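
    A skeleton of the modelling step described above, with placeholder arrays standing in for the real per-plot reflectance statistics (64 plots; 186 band SDs plus 186 derivative SDs):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_predict

    # Placeholder features and targets; replace with the real per-plot
    # SDs of band reflectances/derivatives and Shannon-Wiener indices.
    rng = np.random.default_rng(0)
    X = rng.random((64, 372))
    y = rng.random(64)

    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    y_pred = cross_val_predict(rf, X, y, cv=8)    # out-of-sample predictions
    print(np.corrcoef(y, y_pred)[0, 1] ** 2)      # crude R^2-style check
    ```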

  18. The recursive combination filter approach of pre-processing for the estimation of standard deviation of RR series.

    PubMed

    Mishra, Alok; Swati, D

    2015-09-01

    Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of the cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals called ectopic beats, artefacts or noise, which mask the true behaviour of heart rate variability. In this paper, we propose a combination filter of a recursive impulse rejection filter and a recursive 20% filter, applied recursively and preferring replacement over removal of abnormal beats, to improve the pre-processing of the inter-beat intervals. We tested this novel recursive combinational method with median replacement to estimate the standard deviation of normal-to-normal (SDNN) beat intervals of congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses in detail the improvement in pre-processing over single use of the impulse rejection filter and over removal of abnormal beats, for the estimation of SDNN and the Poincaré plot descriptors (SD1, SD2, and SD1/SD2). We found the 22 ms value of SDNN and the 36 ms value of the SD2 Poincaré descriptor to be clinical indicators discriminating normal cases from CHF cases. The pre-processing is also useful in the calculation of the Lyapunov exponent, a nonlinear index, as Lyapunov exponents calculated after the proposed pre-processing change in a way that begins to follow the expected less complex behaviour of diseased states.
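
    One plausible reading of the recursive 20% rule, sketched for illustration (the authors' exact combination with the impulse rejection filter, and their median-replacement details, are not reproduced):

    ```python
    import numpy as np

    def recursive_20pct_filter(rr, max_passes=10):
        """An interval differing from the previous accepted interval by
        more than 20% is replaced (here by the median of its immediate
        neighbours), and the pass is repeated until no beat is flagged."""
        rr = np.asarray(rr, dtype=float).copy()
        for _ in range(max_passes):
            flagged = False
            for i in range(1, len(rr) - 1):
                if abs(rr[i] - rr[i - 1]) > 0.2 * rr[i - 1]:
                    rr[i] = np.median(rr[i - 1:i + 2])  # replace, not remove
                    flagged = True
            if not flagged:
                break
        return rr
    ```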

  19. Evaluation of Free Breathing Versus Breath Hold Diffusion Weighted Imaging in Terms of Apparent Diffusion Coefficient (ADC) and Signal-to-Noise Ratio (SNR) Values for Solid Abdominal Organs.

    PubMed

    Herek, Duygu; Karabulut, Nevzat; Kocyıgıt, Ali; Yagcı, Ahmet Baki

    2016-01-01

    Our aim was to compare the apparent diffusion coefficient (ADC) values of normal abdominal parenchymal organs and signal-to-noise ratio (SNR) measurements in the same patients with breath hold (BH) and free breathing (FB) diffusion weighted imaging (DWI). Forty-eight patients underwent both BH and FB DWI. A spherical region of interest (ROI) was placed on the right hepatic lobe, spleen, pancreas, and renal cortices. ADC values were calculated for each organ on each sequence using automated software. Image noise, defined as the standard deviation (SD) of the signal intensities in the most artifact-free area of the image background, was measured by placing the largest possible ROI on either the left or the right side of the body outside the object in the recorded field of view. SNR was calculated using the formula SNR = SI(organ)/SD(noise), where SI is the signal intensity. There were no statistically significant differences in ADC values of the abdominal organs between BH and FB DWI sequences (p > 0.05). There were statistically significant differences between SNR values of organs on BH and FB DWIs: SNRs were found to be better on FB DWI than BH DWI (p < 0.001). The free breathing DWI technique reduces image noise and increases SNR for abdominal examinations. Free breathing is therefore preferable to BH DWI in the evaluation of abdominal organs by DWI.
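
    As a minimal illustration of the SNR formula above (the ROI values are invented):

```python
import numpy as np

organ_roi = np.array([412.0, 405.5, 398.2, 420.1, 409.8])  # SI samples, organ ROI
noise_roi = np.array([3.1, 2.8, 3.5, 2.9, 3.3, 3.0])       # background ROI

snr = organ_roi.mean() / noise_roi.std(ddof=1)             # SNR = SI(organ)/SD(noise)
print(f"SNR = {snr:.1f}")
```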

  20. Biodiversity Mapping in a Tropical West African Forest with Airborne Hyperspectral Data

    PubMed Central

    Vaglio Laurin, Gaia; Chan, Jonathan Cheung-Wai; Chen, Qi; Lindsell, Jeremy A.; Coomes, David A.; Guerriero, Leila; Frate, Fabio Del; Miglietta, Franco; Valentini, Riccardo

    2014-01-01

    Tropical forests are major repositories of biodiversity, but are fast disappearing as land is converted to agriculture. Decision-makers need to know which of the remaining forests to prioritize for conservation, but the only spatial information on forest biodiversity has, until recently, come from a sparse network of ground-based plots. Here we explore whether airborne hyperspectral imagery can be used to predict the alpha diversity of upper canopy trees in a West African forest. The abundances of tree species were recorded in 64 plots (each 1250 m² in size) within a Sierra Leonean national park, and Shannon-Wiener biodiversity indices were calculated. An airborne spectrometer measured reflectances of 186 bands in the visible and near-infrared spectral range at 1 m² resolution. The standard deviations of these reflectance values and their first-order derivatives were calculated for each plot from the c. 1250 pixels of hyperspectral information within them. Shannon-Wiener indices were then predicted from these plot-based reflectance statistics using a machine-learning algorithm (Random Forest). The regression model fitted the data well (pseudo-R² = 84.9%), and we show that standard deviations of green-band reflectances and infra-red region derivatives had the strongest explanatory powers. Our work shows that airborne hyperspectral sensing can be very effective at mapping canopy tree diversity, because its high spatial resolution allows within-plot heterogeneity in reflectance to be characterized, making it an effective tool for monitoring forest biodiversity over large geographic scales. PMID:24937407

  1. Mean and Fluctuating Force Distribution in a Random Array of Spheres

    NASA Astrophysics Data System (ADS)

    Akiki, Georges; Jackson, Thomas; Balachandar, Sivaramakrishnan

    2015-11-01

    This work presents a numerical study of the force distribution within a cluster of mono-disperse spherical particles. A direct forcing immersed boundary method is used to calculate the forces on individual particles for a volume fraction range of [0.1, 0.4] and a Reynolds number range of [10, 625]. The overall drag is compared to several drag laws found in the literature. The fluctuation of the hydrodynamic streamwise force among individual particles is shown to have a normal distribution with a standard deviation that varies with the volume fraction only; the standard deviation remains approximately 25% of the mean streamwise force on a single sphere. The force distribution shows a good correlation between the location of the two to three nearest upstream and downstream neighbors and the magnitude of the forces. A detailed analysis of the pressure and shear force contributions calculated on a ghost sphere in the vicinity of a single particle in a uniform flow yields a mapping of those contributions. The combination of this mapping and the number of nearest neighbors leads to a first-order correction of the force distribution within a cluster, which can be used in Lagrangian-Eulerian techniques. We also explore the possibility of a binary force model that systematically accounts for the effect of the nearest neighbors. This work was supported by the National Science Foundation (NSF OISE-0968313) under Partnership for International Research and Education (PIRE) in Multiphase Flows at the University of Florida.

  2. Benchmarking the Bethe–Salpeter Formalism on a Standard Organic Molecular Set

    PubMed Central

    2015-01-01

    We perform benchmark calculations of the Bethe–Salpeter vertical excitation energies for the set of 28 molecules constituting the well-known Thiel’s set, complemented by a series of small molecules representative of the dye chemistry field. We show that Bethe–Salpeter calculations based on a molecular orbital energy spectrum obtained with non-self-consistent G0W0 calculations starting from semilocal DFT functionals dramatically underestimate the transition energies. Starting from the popular PBE0 hybrid functional significantly improves the results even though this leads to an average −0.59 eV redshift compared to reference calculations for Thiel’s set. It is shown, however, that a simple self-consistent scheme at the GW level, with an update of the quasiparticle energies, not only leads to a much better agreement with reference values, but also significantly reduces the impact of the starting DFT functional. On average, the Bethe–Salpeter scheme based on self-consistent GW calculations comes close to the best time-dependent DFT calculations with the PBE0 functional with a 0.98 correlation coefficient and a 0.18 (0.25) eV mean absolute deviation compared to TD-PBE0 (theoretical best estimates) with a tendency to be red-shifted. We also observe that TD-DFT and the standard adiabatic Bethe–Salpeter implementation may differ significantly for states implying a large multiple excitation character. PMID:26207104

  3. N2/O2/H2 Dual-Pump Cars: Validation Experiments

    NASA Technical Reports Server (NTRS)

    OByrne, S.; Danehy, P. M.; Cutler, A. D.

    2003-01-01

    The dual-pump coherent anti-Stokes Raman spectroscopy (CARS) method is used to measure temperature and the relative species densities of N2, O2 and H2 in two experiments. Average values and root-mean-square (RMS) deviations are determined. Mean temperature measurements in a furnace containing air between 300 and 1800 K agreed with thermocouple measurements within 26 K on average, while mean mole fractions agree to within 1.6 % of the expected value. The temperature measurement standard deviation averaged 64 K while the standard deviation of the species mole fractions averaged 7.8% for O2 and 3.8% for N2, based on 200 single-shot measurements. Preliminary measurements have also been performed in a flat-flame burner for fuel-lean and fuel-rich flames. Temperature standard deviations of 77 K were measured, and the ratios of H2 to N2 and O2 to N2 respectively had standard deviations from the mean value of 12.3% and 10% of the measured ratio.

  4. An efficient method to determine double Gaussian fluence parameters in the eclipse™ proton pencil beam model.

    PubMed

    Shen, Jiajian; Liu, Wei; Stoker, Joshua; Ding, Xiaoning; Anand, Aman; Hu, Yanle; Herman, Michael G; Bues, Martin

    2016-12-01

    To find an efficient method to configure the proton fluence for a commercial proton pencil beam scanning (PBS) treatment planning system (TPS). An in-water dose kernel was developed to mimic the dose kernel of the pencil beam convolution superposition algorithm, which is part of the commercial proton beam therapy planning software, eclipse™ (Varian Medical Systems, Palo Alto, CA). The field size factor (FSF) was calculated based on the spot profile reconstructed by the in-house dose kernel. The workflow of using FSFs to find the desirable proton fluence is presented. The in-house derived spot profile and FSF were validated by a direct comparison with those calculated by the eclipse TPS. The validation included 420 comparisons of the FSFs from 14 proton energies, various field sizes from 2 to 20 cm and various depths from 20% to 80% of proton range. The relative in-water lateral profiles between the in-house calculation and the eclipse TPS agree very well, even at the level of 10⁻⁴. The FSFs between the in-house calculation and the eclipse TPS also agree well: the maximum deviation is within 0.5%, and the standard deviation is less than 0.1%. The authors' method significantly reduced the time needed to find the desirable proton fluences of the clinical energies. The method is extensively validated and can be applied at any proton center using PBS and the eclipse TPS.

  5. Temperature dependence of current-and capacitance-voltage characteristics of an Au/4H-SiC Schottky diode

    NASA Astrophysics Data System (ADS)

    Gülnahar, Murat

    2014-12-01

    In this study, the current-voltage (I-V) and capacitance-voltage (C-V) characteristics of an Au/4H-SiC Schottky diode are measured as a function of temperature in the 50-300 K range. Experimental parameters such as the ideality factor and the apparent barrier height prove to be strongly temperature dependent: the ideality factor increases and the apparent barrier height decreases with decreasing temperature, whereas the barrier height values from the C-V data increase with temperature. Likewise, the Richardson plot deviates at low temperatures. These anomalous behaviors observed for Au/4H-SiC are attributed to Schottky barrier inhomogeneities. The barrier anomaly, which relates to the Au/4H-SiC interface, is also confirmed by frequency-dependent C-V measurements at 300 K and is interpreted by both Tung's lateral inhomogeneity model and a multi-Gaussian distribution approach. The values of the weighting coefficients, standard deviations and mean barrier heights are calculated for each distribution region of Au/4H-SiC using the multi-Gaussian distribution approach. In addition, the total effective area of the patches NAe is obtained at separate temperatures; as a result, the low-barrier regions are found to contribute meaningfully to the current transport at the junction. The homogeneous barrier height is calculated from the correlation between the ideality factor and the barrier height, and the values of the standard deviation from the ideality factor versus q/3kT curve are in close agreement with the values obtained from the barrier height versus q/2kT variation. It can be concluded that the temperature-dependent electrical characteristics of Au/4H-SiC can be successfully explained on the basis of thermionic emission theory combined with both models.

  6. Quantitative comparison between a multiecho sequence and a single-echo sequence for susceptibility-weighted phase imaging.

    PubMed

    Gilbert, Guillaume; Savard, Geneviève; Bard, Céline; Beaudoin, Gilles

    2012-06-01

    The aim of this study was to investigate the benefits arising from the use of a multiecho sequence for susceptibility-weighted phase imaging using a quantitative comparison with a standard single-echo acquisition. Four healthy adult volunteers were imaged on a clinical 3-T system using a protocol comprising two different three-dimensional susceptibility-weighted gradient-echo sequences: a standard single-echo sequence and a multiecho sequence. Both sequences were repeated twice in order to evaluate the local noise contribution by a subtraction of the two acquisitions. For the multiecho sequence, the phase information from each echo was independently unwrapped, and the background field contribution was removed using either homodyne filtering or the projection onto dipole fields method. The phase information from all echoes was then combined using a weighted linear regression. R2 maps were also calculated from the multiecho acquisitions. The noise standard deviation in the reconstructed phase images was evaluated for six manually segmented regions of interest (frontal white matter, posterior white matter, globus pallidus, putamen, caudate nucleus and lateral ventricle). The use of the multiecho sequence for susceptibility-weighted phase imaging led to a reduction of the noise standard deviation for all subjects and all regions of interest investigated in comparison to the reference single-echo acquisition. On average, the noise reduction ranged from 18.4% for the globus pallidus to 47.9% for the lateral ventricle. In addition, the amount of noise reduction was found to be strongly inversely correlated to the estimated R2 value (R=-0.92). In conclusion, the use of a multiecho sequence is an effective way to decrease the noise contribution in susceptibility-weighted phase images, while preserving both contrast and acquisition time. The proposed approach additionally permits the calculation of R2 maps. Copyright © 2012 Elsevier Inc. All rights reserved.
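
    The echo-combination step described above can be sketched as a per-voxel weighted least-squares fit of unwrapped phase against echo time; the echo times, phases, magnitudes and the choice of squared-magnitude weights are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

te = np.array([5.0, 10.0, 15.0, 20.0]) * 1e-3   # echo times (s), assumed
phase = np.array([0.12, 0.25, 0.36, 0.50])      # unwrapped phase (rad), invented
mag = np.array([900.0, 700.0, 520.0, 380.0])    # echo magnitudes, invented

w = mag**2                                       # one plausible weighting choice
A = np.column_stack([te, np.ones_like(te)])      # design matrix: slope, intercept
W = np.diag(w)
# Weighted least squares: solve (A^T W A) x = A^T W y
slope, intercept = np.linalg.solve(A.T @ W @ A, A.T @ W @ phase)
print(f"phase slope: {slope:.1f} rad/s (proportional to the local field offset)")
```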

  7. Multielement trace determination in SiC powders: assessment of interlaboratory comparisons aimed at the validation and standardization of analytical procedures with direct solid sampling based on ETV ICP OES and DC arc OES.

    PubMed

    Matschat, Ralf; Hassler, Jürgen; Traub, Heike; Dette, Angelika

    2005-12-01

    The members of the committee NMP 264 "Chemical analysis of non-oxidic raw and basic materials" of the German Standards Institute (DIN) have organized two interlaboratory comparisons for multielement determination of trace elements in silicon carbide (SiC) powders via direct solid sampling methods. One of the interlaboratory comparisons was based on the application of inductively coupled plasma optical emission spectrometry with electrothermal vaporization (ETV ICP OES), and the other on the application of optical emission spectrometry with direct current arc (DC arc OES). The interlaboratory comparisons were organized and performed in the framework of the development of two standards related to "the determination of mass fractions of metallic impurities in powders and grain sizes of ceramic raw and basic materials" by both methods. SiC powders were used as typical examples of this category of material. The aim of the interlaboratory comparisons was to determine the repeatability and reproducibility of both analytical methods to be standardized. This was an important contribution to the practical applicability of both draft standards. Eight laboratories participated in the interlaboratory comparison with ETV ICP OES and nine in the interlaboratory comparison with DC arc OES. Ten analytes were investigated by ETV ICP OES and eleven by DC arc OES. Six different SiC powders were used for the calibration. The mass fractions of their relevant trace elements were determined after wet chemical digestion. All participants followed the analytical requirements described in the draft standards. In the calculation process, three of the calibration materials were used successively as analytical samples. This was managed in the following manner: the material that had just been used as the analytical sample was excluded from the calibration, so the five other materials were used to establish the calibration plot. The results from the interlaboratory comparisons were summarized and used to determine the repeatability and the reproducibility (expressed as standard deviations) of both methods. The calculation was carried out according to the related standard. The results are specified and discussed in this paper, as are the optimized analytical conditions determined and used by the authors. For both methods, the repeatability relative standard deviations were <25%, usually ~10%, and the reproducibility relative standard deviations were <35%, usually ~15%. These results were regarded as satisfactory for both methods intended for rapid analysis of materials for which decomposition is difficult and time-consuming. Also described are some results from an interlaboratory comparison used to certify one of the materials that had been previously used for validation in both interlaboratory comparisons. Thirty laboratories (from eight countries) participated in this interlaboratory comparison for certification. As examples, accepted results are shown from laboratories that used ETV ICP OES or DC arc OES and had performed calibrations by using solutions or oxides, respectively. The certified mass fractions of the certified reference materials were also compared with the mass fractions determined in the interlaboratory comparisons performed within the framework of method standardization. Good agreement was found for most of the analytes.
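
    Repeatability and reproducibility standard deviations of the kind reported above are conventionally obtained from a one-way ANOVA over a balanced interlaboratory design (as in ISO 5725); a minimal sketch with hypothetical replicate data from three labs:

```python
import numpy as np

# Replicate mass fractions from p = 3 labs, n = 3 replicates each (invented).
labs = np.array([
    [10.2, 10.5,  9.9],
    [11.0, 10.8, 11.3],
    [ 9.8, 10.1, 10.0],
])
p, n = labs.shape
msw = labs.var(axis=1, ddof=1).mean()        # within-lab mean square
msb = n * labs.mean(axis=1).var(ddof=1)      # between-lab mean square

s_r = np.sqrt(msw)                           # repeatability SD
s_L2 = max((msb - msw) / n, 0.0)             # between-lab variance component
s_R = np.sqrt(s_r**2 + s_L2)                 # reproducibility SD
print(f"s_r = {s_r:.3f}, s_R = {s_R:.3f}")
```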

  8. Evaluating deviations in prostatectomy patients treated with IMRT.

    PubMed

    Sá, Ana Cravo; Peres, Ana; Pereira, Mónica; Coelho, Carina Marques; Monsanto, Fátima; Macedo, Ana; Lamas, Adrian

    2016-01-01

    To evaluate the deviations in prostatectomy patients treated with IMRT in order to calculate appropriate margins for creating the PTV. Defining inappropriate margins can lead to underdosing of target volumes and overdosing of healthy tissues, increasing morbidity. 223 CBCT images used for alignment with the planning CT scan based on bony anatomy were analyzed in 12 patients treated with IMRT following prostatectomy. Shifts of the CBCT images were recorded in three directions to calculate the margin required to create the PTV. The mean and standard deviation (SD) values in millimetres were -0.05 ± 1.35 in the LR direction, -0.03 ± 0.65 in the SI direction and -0.02 ± 2.05 in the AP direction. The systematic errors measured in the LR, SI and AP directions were 1.35 mm, 0.65 mm, and 2.05 mm, with random errors of 2.07 mm, 1.45 mm and 3.16 mm, resulting in PTV margins of 4.82 mm, 2.64 mm, and 7.33 mm, respectively. With IGRT we suggest margins of 5 mm, 3 mm and 8 mm in the LR, SI and AP directions, respectively, for PTV1 and PTV2. This study therefore supports an anisotropic margin expansion of the PTV, with the largest expansion in the AP direction and the smallest in SI.
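
    The reported margins are consistent with the widely used van Herk recipe, margin = 2.5Σ + 0.7σ (Σ systematic error, σ random error); a quick check against the values quoted above:

```python
systematic = {"LR": 1.35, "SI": 0.65, "AP": 2.05}   # Sigma (mm), from the abstract
random_err = {"LR": 2.07, "SI": 1.45, "AP": 3.16}   # sigma (mm), from the abstract

for axis in ("LR", "SI", "AP"):
    margin = 2.5 * systematic[axis] + 0.7 * random_err[axis]
    print(f"{axis}: {margin:.2f} mm")
# Prints 4.82, 2.64 and 7.34 mm: the 4.82/2.64/7.33 mm above, up to rounding.
```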

  9. Role of dispersion corrected hybrid GGA class in accurately calculating the bond dissociation energy of carbon halogen bond: A benchmark study

    NASA Astrophysics Data System (ADS)

    Kosar, Naveen; Mahmood, Tariq; Ayub, Khurshid

    2017-12-01

    A benchmark study has been carried out to find a cost-effective and accurate method for the bond dissociation energy (BDE) of the carbon-halogen (C-X) bond. The BDE of the C-X bond plays a vital role in chemical reactions, particularly for kinetic barriers and thermochemistry. The compounds (1-16, Fig. 1) with C-X bonds used for the current benchmark study are important reactants in organic, inorganic and bioorganic chemistry. Experimental C-X bond dissociation energies are compared with theoretical results. Statistical analysis tools such as the root mean square deviation (RMSD), standard deviation (SD), Pearson's correlation (R) and mean absolute error (MAE) are used for comparison. Overall, thirty-one density functionals from eight different classes of density functional theory (DFT), along with Pople and Dunning basis sets, are evaluated. Among the different classes of DFT, the dispersion-corrected range-separated hybrid GGA class, along with the 6-31G(d), 6-311G(d), aug-cc-pVDZ and aug-cc-pVTZ basis sets, performed best for the bond dissociation energy calculation of the C-X bond. ωB97XD shows the best performance, with small deviations (RMSD, SD), a low mean absolute error (MAE) and a significant Pearson's correlation (R) when compared to experimental data. ωB97XD with the Pople basis set 6-311G(d) has RMSD, SD, R and MAE of 3.14 kcal mol⁻¹, 3.05 kcal mol⁻¹, 0.97 and -1.07 kcal mol⁻¹, respectively.
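
    The four statistics used above are straightforward to reproduce; a minimal sketch with hypothetical calculated vs. experimental BDEs (the paper's negative "MAE" suggests a signed mean error, so both variants are shown):

```python
import numpy as np

calc = np.array([71.2, 68.9, 83.4, 57.1, 65.0])   # calculated BDEs (kcal/mol), invented
expt = np.array([70.0, 70.5, 81.0, 58.0, 66.2])   # experimental BDEs (kcal/mol), invented

err = calc - expt
rmsd = np.sqrt(np.mean(err**2))                   # root mean square deviation
sd = err.std(ddof=1)                              # SD of the errors
r = np.corrcoef(calc, expt)[0, 1]                 # Pearson's correlation
mae = np.abs(err).mean()                          # mean absolute error
me = err.mean()                                   # signed mean error
print(f"RMSD={rmsd:.2f}, SD={sd:.2f}, R={r:.2f}, MAE={mae:.2f}, ME={me:.2f}")
```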

  10. Articular Cartilage: Evaluation with Fluid-suppressed 7.0-T Sodium MR Imaging in Subjects with and Subjects without Osteoarthritis

    PubMed Central

    Babb, James; Xia, Ding; Chang, Gregory; Krasnokutsky, Svetlana; Abramson, Steven B.; Jerschow, Alexej; Regatte, Ravinder R.

    2013-01-01

    Purpose: To assess the potential use of sodium magnetic resonance (MR) imaging of cartilage, with and without fluid suppression by using an adiabatic pulse, for classifying subjects with versus subjects without osteoarthritis at 7.0 T. Materials and Methods: The study was approved by the institutional review board and was compliant with HIPAA. The knee cartilage of 19 asymptomatic (control subjects) and 28 symptomatic (osteoarthritis patients) subjects underwent 7.0-T sodium MR imaging with use of two different sequences: one without fluid suppression (radial three-dimensional sequence) and one with fluid suppression (inversion recovery [IR] wideband uniform rate and smooth truncation [WURST]). Fluid suppression was obtained by using IR with an adiabatic inversion pulse (WURST pulse). Mean sodium concentrations and their standard deviations were measured in the patellar, femorotibial medial, and lateral cartilage regions over four consecutive sections for each subject. The minimum, maximum, median, and average means and standard deviations were calculated over all measurements for each subject. The utility of these measures in the detection of osteoarthritis was evaluated by using logistic regression and the area under the receiver operating characteristic curve (AUC). Bonferroni correction was applied to the P values obtained with logistic regression. Results: Measurements from IR WURST were found to be significant predictors of all osteoarthritis (Kellgren-Lawrence score of 1–4) and early osteoarthritis (Kellgren-Lawrence score of 1 or 2). The minimum standard deviation provided the highest AUC (0.83) with the highest accuracy (>78%), sensitivity (>82%), and specificity (>74%) for both all osteoarthritis and early osteoarthritis groups. Conclusion: Quantitative sodium MR imaging at 7.0 T with fluid suppression by using adiabatic IR is a potential biomarker for osteoarthritis. © RSNA, 2013 PMID:23468572
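
    The classification step can be sketched with scikit-learn: one summary measure (e.g., the per-subject minimum standard deviation of sodium concentration) as a logistic-regression predictor of disease status, scored by AUC. The group sizes match the abstract, but all values are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Hypothetical per-subject minimum SDs of sodium concentration (mM).
x_controls = rng.normal(45.0, 8.0, size=19)      # 19 controls, as in the abstract
x_patients = rng.normal(60.0, 10.0, size=28)     # 28 patients, as in the abstract
X = np.concatenate([x_controls, x_patients]).reshape(-1, 1)
y = np.concatenate([np.zeros(19), np.ones(28)])

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"AUC = {auc:.2f}")
```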

  11. Estimating Mixed Broadleaves Forest Stand Volume Using DSM Extracted from Digital Aerial Images

    NASA Astrophysics Data System (ADS)

    Sohrabi, H.

    2012-07-01

    In the mixed old-growth broadleaved stands of the Hyrcanian forests, it is difficult to estimate stand volume at plot level from remotely sensed data when LiDAR data are absent. In this paper, a new approach is proposed and tested for estimating forest stand volume. The approach is based on the idea that forest volume can be estimated from the variation of tree heights within plots; in other words, the greater the height variation in a plot, the greater the expected stand volume. To test this idea, 120 circular 0.1 ha sample plots were collected in a systematic random design in the Tonekabon forest, located in the Hyrcanian zone. A digital surface model (DSM) records the height of the first surface on the ground, including terrain features, trees, buildings etc., and thus provides a topographic model of the earth's surface. The DSMs were extracted automatically from aerial UltraCamD images, with ground pixel sizes varying from 1 to 10 m in 1 m steps. DSMs were checked manually for probable errors. For the pixels corresponding to each ground sample, the standard deviation and range of DSM heights were calculated. For modelling, nonlinear regression was used. The results showed that the standard deviation of plot pixels at 5 m resolution was the most appropriate predictor for modelling. The relative bias and RMSE of the estimates were 5.8 and 49.8 percent, respectively. Compared to other approaches for estimating stand volume from passive remote sensing data in mixed broadleaved forests, these results are encouraging. One notable problem with this method occurs when the tree canopy cover is completely closed: in this situation, the standard deviation of height is low while stand volume is high. Future studies could examine the use of forest stratification.
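
    The modelling step can be sketched as a nonlinear least-squares fit of stand volume against the plot-level standard deviation of DSM heights; the power-law form and all numbers here are illustrative assumptions, not the study's fitted model.

```python
import numpy as np
from scipy.optimize import curve_fit

sd_height = np.array([2.1, 3.5, 4.2, 5.0, 6.3, 7.1])      # SD of DSM heights (m), invented
volume = np.array([110., 190., 240., 300., 410., 470.])   # stand volume (m3/ha), invented

def power_law(x, a, b):
    return a * np.power(x, b)

(a, b), _ = curve_fit(power_law, sd_height, volume, p0=(50.0, 1.0))
pred = power_law(sd_height, a, b)
rel_rmse = 100.0 * np.sqrt(np.mean((pred - volume) ** 2)) / volume.mean()
print(f"V = {a:.1f} * SD^{b:.2f}, relative RMSE = {rel_rmse:.1f}%")
```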

  12. Matrix Summaries Improve Research Reports: Secondary Analyses Using Published Literature

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Thompson, Bruce

    2009-01-01

    Correlation matrices and standard deviations are the building blocks of many of the commonly conducted analyses in published research, and AERA and APA reporting standards recommend their inclusion when reporting research results. The authors argue that the inclusion of correlation/covariance matrices, standard deviations, and means can enhance…

  13. 30 CFR 74.8 - Measurement, accuracy, and reliability requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... concentration, as defined by the relative standard deviation of the distribution of measurements. The relative standard deviation shall be less than 0.1275 without bias for both full-shift measurements of 8 hours or... Standards, Regulations, and Variances, 1100 Wilson Boulevard, Room 2350, Arlington, Virginia 22209-3939...

  14. Analytical ab initio potential-energy surfaces for the ground and the first singlet excited states of HeH₂

    NASA Astrophysics Data System (ADS)

    Farantos, Stavros C.; Murrell, J. N.; Carter, S.

    1984-07-01

    Analytical potential-energy surfaces have been constructed for the ground and the first excited states of HeH₂. The functions fit ab initio MRD CI calculations with standard deviations of 0.05 and 0.13 eV for the ground and the excited surface, respectively. Classical trajectory calculations for collisions of ⁴He with HD(B ¹Σu⁺, v = 3, J = 2) at the temperature T = 297 K yield the electronic quenching cross section σQ = 6.5 Å² and the vibrational cross section σ3→2 = 3.8 Å². The results are in qualitative agreement with the experimental values of Fink, Akins and Moore.

  15. Nuclear mass formula with the shell energies obtained by a new method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koura, H.; Tachibana, T.; Yamada, M.

    1998-12-21

    Nuclear shapes and masses are estimated by a new method. The main feature of this method lies in estimating shell energies of deformed nuclei from spherical shell energies by mixing them with appropriate weights. The spherical shell energies are calculated from single-particle potentials and, to date, two mass formulas have been constructed from two different sets of potential parameters. The standard deviation of the calculated masses from all the experimental masses of the 1995 Mass Evaluation is about 760 keV. Contrary to the mass formula by Tachibana, Uno, Yamada and Yamada in the 1987-1988 Atomic Mass Predictions, the present formulas can give nuclear shapes and make predictions for super-heavy elements.

  16. CCD observations of Phoebe

    NASA Astrophysics Data System (ADS)

    Veiga, C. H.; Vieira Martins, R.; Andrei, A. H.

    2000-02-01

    Astrometric CCD positions of the Saturnian satellite Phoebe, obtained from 60 frames taken on 10 nights, are presented. The observations were distributed over 5 missions in the years 1995 to 1997. For the astrometric calibration, the USNO-A2.0 Catalogue is used. All positions are compared with those calculated by Jacobson (1998a) and Bec-Borsenberger & Rocher (1982). The residuals have means and standard deviations smaller than 0.5 arcsec in the x and y directions. The distribution of residuals suggests the need for an improvement of the orbit calculations. Based on observations made at Laboratório Nacional de Astrofísica/CNPq/MCT-Itajubá-Brazil. Table 1 is only available at http://www.edpsciences.org

  17. The effects of auditory stimulation with music on heart rate variability in healthy women.

    PubMed

    Roque, Adriano L; Valenti, Vitor E; Guida, Heraldo L; Campos, Mônica F; Knap, André; Vanderlei, Luiz Carlos M; Ferreira, Lucas L; Ferreira, Celso; Abreu, Luiz Carlos de

    2013-07-01

    There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque musical and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first music exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women were exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of instantaneous beat-by-beat variability, standard deviation of the long-term RR interval, standard deviation of instantaneous beat-by-beat variability and standard deviation of the long-term RR interval ratio), low frequency, high frequency, low frequency/high frequency ratio, standard deviation of all the normal RR intervals, root-mean square of differences between the adjacent normal RR intervals and the percentage of adjacent RR intervals with a difference of duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes. The triangular index and the standard deviation of the long-term RR interval indices were reduced during exposure to both music styles in the first group and tended to decrease in the second group whereas the white noise exposure decreased the high frequency index. We observed no changes regarding the triangular interpolation of RR intervals, standard deviation of instantaneous beat-by-beat variability and standard deviation of instantaneous beat-by-beat variability/standard deviation in the long-term RR interval ratio. We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level.

  18. The effects of auditory stimulation with music on heart rate variability in healthy women

    PubMed Central

    Roque, Adriano L.; Valenti, Vitor E.; Guida, Heraldo L.; Campos, Mônica F.; Knap, André; Vanderlei, Luiz Carlos M.; Ferreira, Lucas L.; Ferreira, Celso; de Abreu, Luiz Carlos

    2013-01-01

    OBJECTIVES: There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. METHODS: We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque musical and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first music exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women were exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of instantaneous beat-by-beat variability, standard deviation of the long-term RR interval, standard deviation of instantaneous beat-by-beat variability and standard deviation of the long-term RR interval ratio), low frequency, high frequency, low frequency/high frequency ratio, standard deviation of all the normal RR intervals, root-mean square of differences between the adjacent normal RR intervals and the percentage of adjacent RR intervals with a difference of duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes. RESULTS: The triangular index and the standard deviation of the long-term RR interval indices were reduced during exposure to both music styles in the first group and tended to decrease in the second group whereas the white noise exposure decreased the high frequency index. We observed no changes regarding the triangular interpolation of RR intervals, standard deviation of instantaneous beat-by-beat variability and standard deviation of instantaneous beat-by-beat variability/standard deviation in the long-term RR interval ratio. CONCLUSION: We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level. PMID:23917660

  19. Comparing biomarker measurements to a normal range: when ...

    EPA Pesticide Factsheets

    This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results. The National Exposure Research Laboratory’s (NERL’s) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA’s mission to protect human health and the environment. HEASD’s research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of EPA’s strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.
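
    The distinction at issue can be stated in a few lines of code: the SD describes the spread of individual measurements, while the SEM = SD/√n describes the uncertainty of the mean, so intervals built from each answer different questions (the values below are invented).

```python
import numpy as np

x = np.array([4.1, 3.8, 4.5, 4.0, 4.3, 3.9, 4.2, 4.4])   # biomarker values, invented
n = x.size
sd = x.std(ddof=1)
sem = sd / np.sqrt(n)           # standard error of the mean

print(f"mean = {x.mean():.2f}")
print(f"SD band  (~95% of individuals):  {x.mean() - 2*sd:.2f} .. {x.mean() + 2*sd:.2f}")
print(f"SEM band (~95% CI for the mean): {x.mean() - 2*sem:.2f} .. {x.mean() + 2*sem:.2f}")
```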

  20. The mechanical properties of high speed GTAW weld and factors of nonlinear multiple regression model under external transverse magnetic field

    NASA Astrophysics Data System (ADS)

    Lu, Lin; Chang, Yunlong; Li, Yingmin; He, Youyou

    2013-05-01

    A transverse magnetic field was introduced into the arc plasma in the process of welding stainless steel tubes by high-speed tungsten inert gas (TIG) arc welding without filler wire. The influence of the external magnetic field on welding quality was investigated. Nine sets of parameters were designed by means of an orthogonal experiment. The tensile strength of the welded joint and the form factor of the weld were regarded as the main criteria of welding quality. A binary quadratic nonlinear regression equation was established in terms of the magnetic induction and the Ar gas flow rate. The residual standard deviation was calculated to assess the accuracy of the regression model. The results showed that the regression model was correct and effective in calculating the tensile strength and aspect ratio of the weld. Two 3D regression models were designed, and the influence of magnetic induction on welding quality was then investigated.

  1. Relativistic MR–MP Energy Levels for L-shell Ions of Silicon

    DOE PAGES

    Santana, Juan A.; Lopez-Dauphin, Nahyr A.; Beiersdorfer, Peter

    2018-01-15

    Level energies are reported for Si v, Si vi, Si vii, Si viii, Si ix, Si x, Si xi, and Si xii. The energies have been calculated with the relativistic Multi-Reference Møller–Plesset Perturbation Theory method and include valence and K-vacancy states with nl up to 5f. The accuracy of the calculated level energies is established by comparison with the recommended data listed in the National Institute of Standards and Technology (NIST) online database. The average deviation of valence level energies ranges from 0.20 eV in Si v to 0.04 eV in Si xii. For K-vacancy states, the available values recommended in the NIST database are limited to Si xii and Si xiii. The average energy deviation is below 0.3 eV for K-vacancy states. The extensive and accurate data set presented here greatly augments the amount of available reference level energies. Here, we expect our data to ease the line identification of L-shell ions of Si in celestial sources and laboratory-generated plasmas, and to serve as energy references in the absence of more accurate laboratory measurements.

  2. Surface characterization of graphene based materials

    NASA Astrophysics Data System (ADS)

    Pisarek, M.; Holdynski, M.; Krawczyk, M.; Nowakowski, R.; Roguska, A.; Malolepszy, A.; Stobinski, L.; Jablonski, A.

    2016-12-01

    In the present study, two kinds of samples were used: (i) a monolayer graphene film with a thickness of 0.345 nm deposited by the CVD method on Cu foil, and (ii) graphene flakes obtained by a modified Hummers method followed by reduction of graphene oxide. The inelastic mean free path (IMFP) characterizing electron transport in the graphene/Cu sample and the reduced graphene oxide material, which determines the sampling depth of XPS and AES, was evaluated from relative Elastic Peak Electron Spectroscopy (EPES) measurements with an Au standard in the energy range 0.5-2 keV. The measured IMFPs were compared with IMFPs derived from experimental optical data published in the literature for graphite. The EPES IMFP values at 0.5 and 1.5 keV were practically identical to those calculated from optical data for graphite (less than 4% deviation). For energies of 1 and 2 keV, the EPES IMFPs for rGO deviated by up to 14% from the IMFPs calculated using the optical data of Tanuma et al. [1]. Before the EPES measurements, all samples were characterized by various techniques (FE-SEM, AFM, XPS, AES and REELS) to visualize the surface morphology/topography and identify the chemical composition.

  3. Accuracy of Digital Impressions and Fitness of Single Crowns Based on Digital Impressions

    PubMed Central

    Yang, Xin; Lv, Pin; Liu, Yihong; Si, Wenjie; Feng, Hailan

    2015-01-01

    In this study, the accuracy (precision and trueness) of digital impressions and the fitness of single crowns manufactured based on digital impressions were evaluated. #14-17 epoxy resin dentitions were made, with full-crown preparations of extracted natural teeth embedded at #16. (1) To assess precision, deviations among repeated scan models made by the intraoral scanners TRIOS and MHT and the model scanners D700 and inEos were calculated through a best-fit algorithm and three-dimensional (3D) comparison; root mean square (RMS) values and color-coded difference images are provided. (2) To assess trueness, micro computed tomography (micro-CT) was used to obtain a reference model (REF); deviations between REF and the repeated scan models from (1) were calculated. (3) To assess fitness, single crowns were manufactured based on the TRIOS, MHT, D700 and inEos scan models, and the adhesive gaps were evaluated under a stereomicroscope after cross-sectioning. Digital impressions showed lower precision but better trueness; except for MHT, the mean RMS values for precision were lower than 10 μm. Digital impressions showed better internal fitness, and the fitness of single crowns based on digital impressions was up to clinical standard. Digital impressions could be an alternative method for single-crown manufacturing. PMID:28793417

  4. Relativistic MR–MP Energy Levels for L-shell Ions of Silicon

    NASA Astrophysics Data System (ADS)

    Santana, Juan A.; Lopez-Dauphin, Nahyr A.; Beiersdorfer, Peter

    2018-01-01

    Level energies are reported for Si V, Si VI, Si VII, Si VIII, Si IX, Si X, Si XI, and Si XII. The energies have been calculated with the relativistic Multi-Reference Møller–Plesset Perturbation Theory method and include valence and K-vacancy states with nl up to 5f. The accuracy of the calculated level energies is established by comparison with the recommended data listed in the National Institute of Standards and Technology (NIST) online database. The average deviation of valence level energies ranges from 0.20 eV in Si V to 0.04 eV in Si XII. For K-vacancy states, the available values recommended in the NIST database are limited to Si XII and Si XIII. The average energy deviation is below 0.3 eV for K-vacancy states. The extensive and accurate data set presented here greatly augments the amount of available reference level energies. We expect our data to ease the line identification of L-shell ions of Si in celestial sources and laboratory-generated plasmas, and to serve as energy references in the absence of more accurate laboratory measurements.

  5. VizieR Online Data Catalog: Relativistic MR-MP energy levels for Si (Santana+, 2018)

    NASA Astrophysics Data System (ADS)

    Santana, J. A.; Lopez-Dauphin, N. A.; Beiersdorfer, P.

    2018-03-01

    Level energies are reported for Si V, Si VI, Si VII, Si VIII, Si IX, Si X, Si XI, and Si XII. The energies have been calculated with the relativistic Multi-Reference Møller-Plesset Perturbation Theory method and include valence and K-vacancy states with nl up to 5f. The accuracy of the calculated level energies is established by comparison with the recommended data listed in the National Institute of Standards and Technology (NIST) online database. The average deviation of valence level energies ranges from 0.20 eV in Si V to 0.04 eV in Si XII. For K-vacancy states, the available values recommended in the NIST database are limited to Si XII and Si XIII. The average energy deviation is below 0.3 eV for K-vacancy states. The extensive and accurate data set presented here greatly augments the amount of available reference level energies. We expect our data to ease the line identification of L-shell ions of Si in celestial sources and laboratory-generated plasmas, and to serve as energy references in the absence of more accurate laboratory measurements. (1 data file).

  6. Relativistic MR–MP Energy Levels for L-shell Ions of Silicon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santana, Juan A.; Lopez-Dauphin, Nahyr A.; Beiersdorfer, Peter

    Level energies are reported for Si v, Si vi, Si vii, Si viii, Si ix, Si x, Si xi, and Si xii. The energies have been calculated with the relativistic Multi-Reference Møller–Plesset Perturbation Theory method and include valence and K-vacancy states with nl up to 5f. The accuracy of the calculated level energies is established by comparison with the recommended data listed in the National Institute of Standards and Technology (NIST) online database. The average deviation of valence level energies ranges from 0.20 eV in Si v to 0.04 eV in Si xii. For K-vacancy states, the available values recommended in the NIST database are limited to Si xii and Si xiii. The average energy deviation is below 0.3 eV for K-vacancy states. The extensive and accurate data set presented here greatly augments the amount of available reference level energies. Here, we expect our data to ease the line identification of L-shell ions of Si in celestial sources and laboratory-generated plasmas, and to serve as energy references in the absence of more accurate laboratory measurements.

  7. USL/DBMS NASA/PC R and D project C programming standards

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Moreau, Dennis R.

    1984-01-01

    A set of programming standards intended to promote reliability, readability, and portability of C programs written for PC research and development projects is established. These standards must be adhered to except where reasons for deviation are clearly identified and approved by the PC team. Any approved deviation from these standards must also be clearly documented in the pertinent source code.

  8. Standard deviation index for stimulated Brillouin scattering suppression with different homogeneities.

    PubMed

    Ran, Yang; Su, Rongtao; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Si, Lei

    2016-05-10

    We present a new quantitative index, based on the standard deviation, to measure the homogeneity of spectral lines in a fiber amplifier system, so as to find the relation between the stimulated Brillouin scattering (SBS) threshold and the homogeneity of the corresponding spectral lines. A theoretical model is built and a simulation framework established to estimate the SBS threshold for input spectra with different homogeneities. In our experiment, by setting the phase modulation voltage to a constant value and the modulation frequency to different values, spectral lines with different homogeneities can be obtained. The experimental results show that the SBS threshold is negatively correlated with the standard deviation of the modulated spectrum, in good agreement with the theoretical results. When the phase modulation voltage is confined to 10 V and the modulation frequency is set to 80 MHz, the standard deviation of the modulated spectrum equals 0.0051, the lowest value in our experiment; at this setting, the highest SBS threshold is achieved. This standard deviation can serve as a quantitative index for evaluating the power-scaling potential of a fiber amplifier system, and as a design guideline for better suppressing SBS.
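
    A minimal sketch of such an index, assuming it is the standard deviation of the normalized modulated spectrum (a lower SD meaning more homogeneous lines); the line powers are invented:

```python
import numpy as np

lines = np.array([0.9, 1.1, 1.0, 0.95, 1.05])       # relative line powers, invented
p = lines / lines.sum()                              # normalize to unit total power
print(f"standard deviation index = {p.std():.4f}")   # lower = more homogeneous
```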

  9. Design of an optical PPM communication link in the presence of component tolerances

    NASA Technical Reports Server (NTRS)

    Chen, C.-C.

    1988-01-01

    A systematic approach is described for estimating the performance of an optical direct detection pulse position modulation (PPM) communication link in the presence of parameter tolerances. This approach was incorporated into the JPL optical link analysis program to provide a useful tool for optical link design. Given a set of system parameters and their tolerance specifications, the program will calculate the nominal performance margin and its standard deviation. Through use of these values, the optical link can be designed to perform adequately even under adverse operating conditions.

  10. Distributed activation energy model parameters of some Turkish coals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gunes, M.; Gunes, S.K.

    2008-07-01

    A multi-reaction model based on distributed activation energy has been applied to some Turkish coals. The kinetic parameters of distributed activation energy model were calculated via computer program developed for this purpose. It was observed that the values of mean of activation energy distribution vary between 218 and 248 kJ/mol, and the values of standard deviation of activation energy distribution vary between 32 and 70 kJ/mol. The correlations between kinetic parameters of the distributed activation energy model and certain properties of coal have been investigated.
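
    A minimal sketch of a Gaussian-distributed activation energy model: the unreacted fraction under isothermal conditions is a Gaussian-weighted integral over single-reaction conversions. The mean and SD sit in the ranges reported above, while the pre-exponential factor, temperature and time are assumptions for illustration.

```python
import numpy as np

R = 8.314e-3                    # gas constant, kJ/(mol K)
E0, sigma = 230.0, 50.0         # mean and SD of the E distribution (kJ/mol)
k0, T, t = 1e13, 900.0, 60.0    # pre-exponential (1/s), temperature (K), time (s); assumed

E = np.linspace(E0 - 4 * sigma, E0 + 4 * sigma, 2001)          # energy grid
f = np.exp(-0.5 * ((E - E0) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
single = np.exp(-k0 * t * np.exp(-E / (R * T)))                # per-energy survival
unreacted = np.sum(f * single) * (E[1] - E[0])                 # Gaussian-weighted
print(f"unreacted fraction after {t:.0f} s at {T:.0f} K: {unreacted:.2f}")
```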

  11. SNPP VIIRS RSB Earth View Reflectance Uncertainty

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Twedt, Kevin; McIntire, Jeff; Xiong, Xiaoxiong

    2017-01-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP) satellite uses its 14 reflective solar bands to passively collect solar radiant energy reflected off the Earth. The Level 1 product is the geolocated and radiometrically calibrated top-of-the-atmosphere solar reflectance. The absolute radiometric uncertainty associated with this product includes contributions from the noise associated with measured detector digital counts and the radiometric calibration bias. Here, we provide a detailed algorithm for calculating the estimated standard deviation of the retrieved top-of-the-atmosphere spectral solar radiation reflectance.

  12. Computerized measurement and analysis of scoliosis: a more accurate representation of the shape of the curve.

    PubMed

    Jeffries, B F; Tarlton, M; De Smet, A A; Dwyer, S J; Brower, A C

    1980-02-01

    A computer program was created to identify and accept spatial data regarding the location of the thoracic and lumbar vertebral bodies on scoliosis films. With this information, the spine can be mathematically reconstructed and a scoliotic angle calculated. There was a 0.968 positive correlation between the computer and manual methods of measuring scoliosis. The computer method was more reproducible with a standard deviation of only 1.3 degrees. Computerized measurement of scoliosis also provides better evaluation of the true shape of the curve.

  13. Refractive index and birefringence of 2H silicon carbide.

    NASA Technical Reports Server (NTRS)

    Powell, J. A.

    1972-01-01

    Measurement of the refractive indices of 2H SiC over the wavelength range from 435.8 to 650.9 nm by the method of minimum deviation. A curve fit of the experimental data to the Cauchy dispersion equation yielded, for the ordinary index, n_o = 2.5513 + 2.585×10⁴/λ² + 8.928×10⁸/λ⁴ and, for the extraordinary index, n_e = 2.6161 + 2.823×10⁴/λ² + 11.490×10⁸/λ⁴, where λ is expressed in nm. The estimated error (standard deviation) in these values is ±0.0006 for n_o and ±0.0009 for n_e. The birefringence calculated from these expressions is about 20% less than previously published values.
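
    The fitted expressions are easy to evaluate; a short check of the ordinary and extraordinary indices and the resulting birefringence at a few wavelengths (546.1 nm is an added example point, not from the abstract):

```python
def n_ordinary(lam_nm):
    return 2.5513 + 2.585e4 / lam_nm**2 + 8.928e8 / lam_nm**4

def n_extraordinary(lam_nm):
    return 2.6161 + 2.823e4 / lam_nm**2 + 11.490e8 / lam_nm**4

for lam in (435.8, 546.1, 650.9):
    no, ne = n_ordinary(lam), n_extraordinary(lam)
    print(f"{lam:6.1f} nm: n_o = {no:.4f}, n_e = {ne:.4f}, n_e - n_o = {ne - no:.4f}")
```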

  14. figure1.nc

    EPA Pesticide Factsheets

    NetCDF file of the SREF standard deviation of wind speed and direction that was used to inject variability in the FDDA input. Variable U_NDG_OLD contains the standard deviation of wind speed (m/s); variable V_NDG_OLD contains the standard deviation of wind direction (deg). This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. Journal of Geophysical Research-Atmospheres, American Geophysical Union, Washington, DC, USA, 120(23): 12,259-12,280, (2015).

  15. Toddle temporal-spatial deviation index: Assessment of pediatric gait.

    PubMed

    Cahill-Rowley, Katelyn; Rose, Jessica

    2016-09-01

    This research aims to develop a gait index, for use in the pediatric clinic as well as research, that quantifies gait deviation in 18-22-month-old children: the Toddle Temporal-spatial Deviation Index (Toddle TDI). 81 preterm children (≤32 weeks) with very low birth weights (≤1500 g) and 42 full-term typically developing (TD) children aged 18-22 months, adjusted for prematurity, walked on a pressure-sensitive mat. Preterm children were administered the Bayley Scales of Infant Development-3rd Edition (BSID-III). Principal component analysis of the TD children's temporal-spatial gait parameters quantified raw gait deviation from typical, normalized to an average (standard deviation) Toddle TDI score of 100 (10), and scores were calculated for all participants. The Toddle TDI was significantly lower for preterm versus TD children (86 vs. 100, p=0.003), and lower in preterm children with <85 vs. ≥85 BSID-III motor composite scores (66 vs. 89, p=0.004). The Toddle TDI, which by design plateaus at the typical average (BSID-III gross motor 8-12), correlated with BSID-III gross motor scores (r=0.60, p<0.001) and not fine motor scores (r=0.08, p=0.65) in preterm children with gross motor scores ≤8, suggesting sensitivity to gross motor development. The Toddle TDI demonstrated sensitivity and specificity to gross motor function in very-low-birth-weight preterm children aged 18-22 months, and has potential as an easily administered, revealing clinical gait metric. Copyright © 2016 Elsevier B.V. All rights reserved.
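
    The construction can be sketched as PCA on the TD group followed by a linear rescaling of a raw deviation score so that TD children average 100 with SD 10; the data, the distance-in-PC-space deviation measure and the number of retained components are illustrative assumptions, not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
td = rng.normal(size=(42, 6))        # 42 TD children x 6 gait parameters (z-scored)

# PCA of the TD group via SVD; keep the first k components.
mean = td.mean(axis=0)
_, _, vt = np.linalg.svd(td - mean, full_matrices=False)
k = 3

def raw_deviation(x):
    """Distance from the TD mean in retained-PC space (one possible measure)."""
    return np.linalg.norm((x - mean) @ vt[:k].T, axis=1)

d_td = raw_deviation(td)

def tdi(x):
    """Rescale so TD children score mean 100, SD 10; larger deviation scores lower."""
    return 100.0 - 10.0 * (raw_deviation(x) - d_td.mean()) / d_td.std(ddof=1)

preterm = rng.normal(0.5, 1.2, size=(81, 6))          # synthetic preterm group
print(f"TD mean = {tdi(td).mean():.0f}, preterm mean = {tdi(preterm).mean():.0f}")
```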

  16. Physical, chemical, and biological data for selected streams in Chester County, Pennsylvania, 1969-80

    USGS Publications Warehouse

    Moore, C.R.

    1989-01-01

    This report presents physical, chemical, and biological data collected at 50 sampling sites on selected streams in Chester County, Pennsylvania from 1969 to 1980. The physical data consist of air and water temperature, stream discharge, suspended sediment, pH, specific conductance, and dissolved oxygen. The chemical data consist of laboratory determinations of total nutrients, major ions, and trace metals. The biological data consist of total coliform, fecal coliform, and fecal streptococcus bacteriological analyses, and benthic macroinvertebrate population analyses. Brillouin's diversity index, maximum diversity, minimum diversity, and evenness were calculated for each sample, and the median and mean Brillouin's diversity index, standard deviation, and standard error of the mean were calculated for the benthic macroinvertebrate data at each site.
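
    Brillouin's diversity index used above is H = (ln N! - sum over i of ln n_i!)/N, where n_i is the count of taxon i and N the total count; a minimal implementation with hypothetical taxon counts, using log-gamma for numerical stability:

```python
from math import lgamma

def brillouin(counts):
    """Brillouin's H = (ln N! - sum ln n_i!) / N, via lgamma(n + 1) = ln n!."""
    n_total = sum(counts)
    return (lgamma(n_total + 1) - sum(lgamma(n + 1) for n in counts)) / n_total

sample = [12, 7, 30, 2, 9]          # individuals per benthic taxon, invented
print(f"Brillouin H = {brillouin(sample):.3f}")
```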

  17. PROSPECTIVE EVALUATION OF VISUAL ACUITY AGREEMENT BETWEEN STANDARD EARLY TREATMENT DIABETIC RETINOPATHY STUDY CHART AND A HANDHELD EQUIVALENT IN EYES WITH RETINAL PATHOLOGY.

    PubMed

    Rahimy, Ehsan; Reddy, Sahitya; DeCroos, Francis Char; Khan, M Ali; Boyer, David S; Gupta, Omesh P; Regillo, Carl D; Haller, Julia A

    2015-08-01

    To evaluate the visual acuity agreement between a standard back-illuminated Early Treatment Diabetic Retinopathy Study (ETDRS) chart and a handheld internally illuminated ETDRS chart. Two-center prospective study. Seventy patients (134 eyes) with retinal pathology were enrolled between October 2012 and August 2013. Visual acuity was measured using both the ETDRS chart and the handheld device by masked independent examiners after best protocol refraction. Examination was performed in the same room under identical illumination and testing conditions. The mean number of letters seen was 63.0 (standard deviation: 19.8 letters) and 61.2 letters (standard deviation: 19.1 letters) for the ETDRS chart and handheld device, respectively. Mean difference per eye between the ETDRS and handheld device was 1.8 letters. A correlation coefficient (r) of 0.95 demonstrated a positive linear correlation between ETDRS chart and handheld device measured acuities. Intraclass correlation coefficient was performed to assess the reproducibility of the measurements made by different observers measuring the same quantity and was calculated to be 0.95 (95% confidence interval: 0.93-0.96). Agreement was independent of retinal disease. The strong correlation between measured visual acuity using the ETDRS and handheld equivalent suggests that they may be used interchangeably, with accurate measurements. Potential benefits of this device include convenience and portability, as well as the ability to assess ETDRS visual acuity without a dedicated testing lane.

  18. The performance of single and multi-collector ICP-MS instruments for fast and reliable ³⁴S/³²S isotope ratio measurements

    PubMed Central

    Pröfrock, Daniel; Irrgeher, Johanna; Prohaska, Thomas

    2016-01-01

    The performance and validation characteristics of different single collector inductively coupled plasma mass spectrometers based on different technical principles (ICP-SFMS, ICP-QMS in reaction and collision modes, and ICP-MS/MS) were evaluated in comparison to the performance of MC ICP-MS for fast and reliable S isotope ratio measurements. The validation included the determination of LOD, BEC, measurement repeatability, within-lab reproducibility and deviation from certified values as well as a study on instrumental isotopic fractionation (IIF) and the calculation of the combined standard measurement uncertainty. Different approaches of correction for IIF applying external intra-elemental IIF correction (aka standard-sample bracketing) using certified S reference materials and internal inter-elemental IIF correction (aka internal standardization) using Si isotope ratios in MC ICP-MS are explained and compared. The resulting combined standard uncertainties of the examined ICP-QMS systems were not better than 0.3-0.5% (u_c,rel), which is in general insufficient to differentiate natural S isotope variations. Although the performance of the single collector ICP-SFMS is better (single measurement u_c,rel = 0.08%), the measurement reproducibility (>0.2%) is the major limit of this system and leaves room for improvement. MC ICP-MS operated in the edge mass resolution mode, applying bracketing for correction of IIF, provided isotope ratio values with the highest quality (relative combined measurement uncertainty: 0.02%; deviation from the certified value: <0.002%). PMID:27812369

  19. The performance of single and multi-collector ICP-MS instruments for fast and reliable 34S/32S isotope ratio measurements.

    PubMed

    Hanousek, Ondrej; Brunner, Marion; Pröfrock, Daniel; Irrgeher, Johanna; Prohaska, Thomas

    2016-11-14

    The performance and validation characteristics of different single collector inductively coupled plasma mass spectrometers based on different technical principles (ICP-SFMS, ICP-QMS in reaction and collision modes, and ICP-MS/MS) were evaluated in comparison to the performance of MC ICP-MS for fast and reliable S isotope ratio measurements. The validation included the determination of LOD, BEC, measurement repeatability, within-lab reproducibility and deviation from certified values, as well as a study on instrumental isotopic fractionation (IIF) and the calculation of the combined standard measurement uncertainty. Different approaches of correction for IIF applying external intra-elemental IIF correction (aka standard-sample bracketing) using certified S reference materials and internal inter-elemental IIF correction (aka internal standardization) using Si isotope ratios in MC ICP-MS are explained and compared. The resulting combined standard uncertainties of the examined ICP-QMS systems were not better than 0.3-0.5% (u_c,rel), which is in general insufficient to differentiate natural S isotope variations. Although the performance of the single collector ICP-SFMS is better (single measurement u_c,rel = 0.08%), the measurement reproducibility (>0.2%) is the major limit of this system and leaves room for improvement. MC ICP-MS operated in the edge mass resolution mode, applying bracketing for correction of IIF, provided isotope ratio values with the highest quality (relative combined measurement uncertainty: 0.02%; deviation from the certified value: <0.002%).

  20. 75 FR 67093 - Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-01

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2010-P-0517] Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... from the requirements of the standards of identity issued under section 401 of the Federal Food, Drug...

  1. 78 FR 2273 - Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-10

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2012-P-1189] Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... interstate shipment of experimental packs of food varying from the requirements of standards of identity...

  2. Upgraded FAA Airfield Capacity Model. Volume 2. Technical Description of Revisions

    DTIC Science & Technology

    1981-02-01

    Fragments from the report glossary (figure residue removed; Figure 3-1: time axis diagram of single runway operations): t_k, the time at which departure k is released; ... standard deviation of the interarrival time; SIGMAR, the standard deviation of the arrival runway occupancy time; SINGLE, program subroutine for ...

  3. Use of the Budyko Framework to Estimate the Virtual Water Content in Shijiazhuang Plain, North China

    NASA Astrophysics Data System (ADS)

    Zhang, E.; Yin, X.

    2017-12-01

    One of the most challenging steps in analyzing the virtual water content (VWC) of agricultural crops is properly assessing the volume of consumptive water use (CWU) for crop production. In practice, CWU is considered equivalent to crop evapotranspiration (ETc). Following the crop coefficient method, ETc can be calculated under standard or non-standard conditions by multiplying the reference evapotranspiration (ET0) by one or a few coefficients. However, when crop growing conditions deviate from standard conditions, accurately determining the coefficients remains a complicated process that requires extensive field experimental data. Based on the regional surface water-energy balance, this research integrates the Budyko framework into the traditional crop coefficient approach to simplify the determination of the coefficients. This new method enables the volume of agricultural VWC to be assessed at the regional scale from hydrometeorological data and agricultural statistics alone. To demonstrate the new method, we apply it to the Shijiazhuang Plain, an agricultural irrigation area in the North China Plain. The VWC of winter wheat and summer maize is calculated and further subdivided into blue and green water components. Compared with previous studies of this area, the VWC calculated by the Budyko-based crop coefficient approach uses less data and agrees well with several of those results. This new method may therefore serve as a more convenient tool for assessing VWC.
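
    As a minimal sketch of the crop-coefficient step the abstract builds on, the snippet below computes ETc = Kc * ET0 and converts the resulting consumptive water use into a virtual water content. The function names, the placeholder Budyko correction factor, and all numbers are illustrative assumptions, not values or code from the study.

        def crop_evapotranspiration(et0_mm, kc, budyko_correction=1.0):
            """Crop ET (mm): reference ET times a crop coefficient.
            The Budyko-based adjustment described in the abstract is
            represented here only by a placeholder factor."""
            return et0_mm * kc * budyko_correction

        def virtual_water_content(cwu_m3_per_ha, yield_t_per_ha):
            """VWC in cubic metres of water per tonne of crop."""
            return cwu_m3_per_ha / yield_t_per_ha

        # Hypothetical season: ET0 = 450 mm, seasonal Kc = 0.85, yield 6 t/ha.
        etc_mm = crop_evapotranspiration(450.0, 0.85)   # ~382 mm
        cwu = etc_mm * 10.0                             # 1 mm over 1 ha = 10 m^3
        print(virtual_water_content(cwu, 6.0))          # ~637 m^3 per tonne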

  4. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R.; Wiegand, C. L.; Richardson, A. J.; Johnson, M. P. (Principal Investigator)

    1982-01-01

    Subvisible cirrus clouds (SCi) were easily distinguished in mid-infrared (MIR) TIROS-N daytime data from south Texas and northeast Mexico. The MIR (3.55-3.93 micrometer) pixel digital count means of the SCi affected areas were more than 3.5 standard deviations on the cold side of the scene means. (These standard deviations were made free of the effects of unusual instrument error by factoring out the Ch 3 MIR noise on the basis of detailed examination of noisy and noise-free pixels). SCi affected areas in the IR Ch 4 (10.5-11.5 micrometer) appeared cooler than the general scene, but were not as prominent as in Ch 3, being less than 2 standard deviations from the scene mean. Ch 3 and 4 standard deviations and coefficients of variation are not reliable indicators, by themselves, of the presence of SCi because land features can have similar statistical properties.
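
    The screening rule described, flagging pixels whose digital counts fall more than 3.5 standard deviations on the cold side of the scene mean, can be sketched in a few lines of Python. The threshold, the sign convention (lower counts taken as colder), and the synthetic data are assumptions for illustration only.

        import numpy as np

        def flag_sci_pixels(mir_counts, threshold_sd=3.5):
            """Flag pixels more than threshold_sd standard deviations on
            the cold side of the scene mean (colder assumed to mean lower
            digital counts; flip the comparison if the convention differs)."""
            mean = mir_counts.mean()
            sd = mir_counts.std(ddof=1)
            return mir_counts < mean - threshold_sd * sd

        scene = np.random.default_rng(0).normal(180.0, 5.0, size=(64, 64))
        scene[10:14, 20:30] -= 25.0           # synthetic cold (SCi-like) patch
        print(flag_sci_pixels(scene).sum())   # number of flagged pixels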

  5. Two-dimensional and Doppler echocardiographic findings in healthy non-sedated red-eared slider terrapins (Trachemys scripta elegans).

    PubMed

    Poser, H; Russello, G; Zanella, A; Bellini, L; Gelli, D

    2011-12-01

    Echocardiographic evaluation was performed in six healthy young adult non-sedated terrapins (Trachemys scripta elegans). The best imaging quality was obtained through the right cervical window. Base-apex inflow and outflow views were recorded; ventricular size, ventricular wall thickness and the ventricular outflow tract were measured, and fractional shortening was calculated. Pulsed-wave Doppler interrogation enabled the diastolic biphasic atrio-ventricular flow and the systolic ventricular outflow patterns to be recorded. The following Doppler-derived functional parameters were calculated: early diastolic (E) and late diastolic (A) wave peak velocities, the E/A ratio, ventricular outflow systolic peak and mean velocities and gradients, the velocity-time integral, acceleration and deceleration times, and ejection time. For each parameter the mean, standard deviation and 95% confidence interval were calculated. Echocardiography proved to be a useful and easy-to-perform diagnostic tool in this poorly known species, which presents difficulties during clinical evaluation.

  6. Ewald Electrostatics for Mixtures of Point and Continuous Line Charges.

    PubMed

    Antila, Hanne S; Tassel, Paul R Van; Sammalkorpi, Maria

    2015-10-15

    Many charged macro- or supramolecular systems, such as DNA, are approximately rod-shaped and, to the lowest order, may be treated as continuous line charges. However, the standard method used to calculate electrostatics in molecular simulation, the Ewald summation, is designed to treat systems of point charges. We extend the Ewald concept to a hybrid system containing both point charges and continuous line charges. We find the calculated force between a point charge and (i) a continuous line charge and (ii) a discrete line charge consisting of uniformly spaced point charges to be numerically equivalent when the separation greatly exceeds the discretization length. At shorter separations, discretization induces deviations in the force and energy, and point charge-point charge correlation effects. Because significant computational savings are also possible, the continuous line charge Ewald method presented here offers the possibility of accurate and efficient electrostatic calculations.

  7. A simple method for the fast calculation of charge redistribution of solutes in an implicit solvent model

    NASA Astrophysics Data System (ADS)

    Dias, L. G.; Shimizu, K.; Farah, J. P. S.; Chaimovich, H.

    2002-09-01

    We propose and demonstrate the usefulness of a method, the generalized Born electronegativity equalization method (GBEEM), for estimating solvent-induced charge redistribution. The charges obtained by GBEEM for a representative series of small organic molecules were compared to PM3-CM1 charges in vacuum and in water. Linear regressions between the GBEEM and PM3-CM1 methods gave appropriate correlation coefficients and standard deviations (R = 0.94, SD = 0.15, F = 234, N = 32 in vacuum; R = 0.94, SD = 0.16, F = 218, N = 29 in water). To test the GBEEM response when intermolecular interactions are involved, we calculated a water dimer in dielectric water using both GBEEM and PM3-CM1, and the results were similar. Hence, the method developed here is comparable to established calculation methods.

  8. The golden ratio of nasal width to nasal bone length.

    PubMed

    Goynumer, G; Yayla, M; Durukan, B; Wetherilt, L

    2011-01-01

    To calculate the ratio of fetal nasal width over nasal bone length at 14-39 weeks' gestation in Caucasian women. Fetal nasal bone length and nasal width at 14-39 weeks' gestation were measured in 532 normal fetuses. The means and standard deviations of fetal nasal bone length, nasal width and their ratio to one another were calculated in normal fetuses according to gestational age to establish normal values. A positive, linear correlation was detected between the nasal bone length and the gestational week, as well as between the nasal width and the gestational week. No linear growth pattern was found between the gestational week and the ratio of nasal width to nasal bone length, which remained nearly equal to phi throughout gestation. The ratio of nasal width to nasal bone length, approximately equal to phi, can be calculated at 14-38 weeks' gestation. This might be useful in evaluating fetal abnormalities.

  9. Application of simple all-sky imagers for the estimation of aerosol optical depth

    NASA Astrophysics Data System (ADS)

    Kazantzidis, Andreas; Tzoumanikas, Panagiotis; Nikitidou, Efterpi; Salamalikis, Vasileios; Wilbert, Stefan; Prahl, Christoph

    2017-06-01

    Aerosol optical depth is a key atmospheric constituent for direct normal irradiance calculations at concentrating solar power plants. However, aerosol optical depth is typically not measured at the solar plants for financial reasons. With the recent introduction of all-sky imagers for the nowcasting of direct normal irradiance at the plants, a new instrument is available that can be used to determine aerosol optical depth at different wavelengths. In this study, we rely on Red, Green and Blue intensities/radiances and on calculations of the saturated area around the Sun, both derived from all-sky images taken with a low-cost surveillance camera at the Plataforma Solar de Almeria, Spain. The aerosol optical depth at 440, 500 and 675 nm is calculated. The results are compared with collocated aerosol optical depth measurements; the mean/median difference and the standard deviation are less than 0.01 and 0.03, respectively, at all wavelengths.

  10. A better way to teach knot tying: a randomized controlled trial comparing the kinesthetic and traditional methods.

    PubMed

    Huang, Emily; Chern, Hueylan; O'Sullivan, Patricia; Cook, Brian; McDonald, Erik; Palmer, Barnard; Liu, Terrence; Kim, Edward

    2014-10-01

    Knot tying is a fundamental and crucial surgical skill. We developed a kinesthetic pedagogical approach that increases precision and economy of motion by explicitly teaching suture-handling maneuvers and studied its effects on novice performance. Seventy-four first-year medical students were randomized to learn knot tying via either the traditional or the novel "kinesthetic" method. After 1 week of independent practice, students were videotaped performing 4 tying tasks. Three raters scored deidentified videos using a validated visual analog scale. The groups were compared using analysis of covariance with practice knots as a covariate and visual analog scale score (range, 0 to 100) as the dependent variable. Partial eta-square was calculated to indicate effect size. Overall rater reliability was .92. The kinesthetic group scored significantly higher than the traditional group for individual tasks and overall, controlling for practice (all P < .004). The kinesthetic overall mean was 64.15 (standard deviation = 16.72) vs traditional 46.31 (standard deviation = 16.20; P < .001; effect size = .28). For novices, emphasizing kinesthetic suture handling substantively improved performance on knot tying. We believe this effect can be extrapolated to more complex surgical skills. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Spectroscopy of H3+ based on a new high-accuracy global potential energy surface.

    PubMed

    Polyansky, Oleg L; Alijah, Alexander; Zobov, Nikolai F; Mizus, Irina I; Ovsyannikov, Roman I; Tennyson, Jonathan; Lodi, Lorenzo; Szidarovszky, Tamás; Császár, Attila G

    2012-11-13

    The molecular ion H(3)(+) is the simplest polyatomic and poly-electronic molecular system, and its spectrum constitutes an important benchmark for which precise answers can be obtained ab initio from the equations of quantum mechanics. Significant progress in the computation of the ro-vibrational spectrum of H(3)(+) is discussed. A new, global potential energy surface (PES) based on ab initio points computed with an average accuracy of 0.01 cm(-1) relative to the non-relativistic limit has recently been constructed. An analytical representation of these points is provided, exhibiting a standard deviation of 0.097 cm(-1). Problems with earlier fits are discussed. The new PES is used for the computation of transition frequencies. Recently measured lines at visible wavelengths combined with previously determined infrared ro-vibrational data show that an accuracy of the order of 0.1 cm(-1) is achieved by these computations. In order to achieve this degree of accuracy, relativistic, adiabatic and non-adiabatic effects must be properly accounted for. The accuracy of these calculations facilitates the reassignment of some measured lines, further reducing the standard deviation between experiment and theory.

  12. Measurements of n-p correlations in the reaction of relativistic neon with uranium

    NASA Technical Reports Server (NTRS)

    Frankel, K.; Schimmerling, W.; Rasmussen, J. O.; Crowe, K. M.; Bistirlich, J.; Bowman, H.; Hashimoto, O.; Murphy, D. L.; Ridout, J.; Sullivan, J. P.; et al.

    1986-01-01

    We report a preliminary measurement of coincident neutron-proton pairs emitted at 45 degrees in the interaction of 400, 530, and 650 MeV/A neon beams incident on uranium. Charged particles were identified by time of flight and momentum, as determined in a magnetic spectrometer. Neutral particles were detected using a thick plastic scintillator, and their time of flight was measured between an entrance scintillator, triggered by a charged particle, and the neutron detector. The scatter plots and contour plots of neutron momentum vs. proton momentum appear to show a slight correlation ridge above an uncorrelated background. The projections of this plane on the n-p momentum difference axis are essentially flat, showing a one-standard-deviation enhancement for each of the three beam energies. At each beam energy, the calculated momentum correlation function for the neutron-proton pairs is enhanced near zero neutron-proton momentum difference by approximately one standard deviation over the expected value for no correlation. This enhancement is expected to occur as a consequence of the attractive final-state interaction between the neutron and proton (i.e., virtual or "singlet" deuterons). The implications of these measurements are discussed.

  13. Sampling for mercury at subnanogram per litre concentrations for load estimation in rivers

    USGS Publications Warehouse

    Colman, J.A.; Breault, R.F.

    2000-01-01

    Estimation of constituent loads in streams requires collection of stream samples that are representative of constituent concentrations, that is, composites of isokinetic multiple verticals collected along a stream transect. An all-Teflon isokinetic sampler (DH-81) cleaned in 75 °C, 4 N HCl was tested using blank, split, and replicate samples to assess systematic and random sample contamination by mercury species. Mean mercury concentrations in field-equipment blanks were low: 0.135 ng/L for total mercury (ΣHg) and 0.0086 ng/L for monomethyl mercury (MeHg). Mean square errors (MSE) for ΣHg and MeHg duplicate samples collected at eight sampling stations were not statistically different from the MSE of samples split in the laboratory, which represents the analytical and splitting error. Low field-blank concentrations and statistically equal duplicate- and split-sample MSE values indicate that no measurable contamination was occurring during sampling. Standard deviations associated with example mercury load estimations were four to five times larger, on a relative basis, than standard deviations calculated from duplicate samples, indicating that the error of the load determination was primarily a function of the loading model used, not of the sampling or analytical methods.

  14. Generation of random microstructures and prediction of sound velocity and absorption for open foams with spherical pores.

    PubMed

    Zieliński, Tomasz G

    2015-04-01

    This paper proposes and discusses an approach for the design and quality inspection of morphology intended for sound absorbing foams, using a relatively simple technique for the random generation of periodic microstructures representative of open-cell foams with spherical pores. The design is controlled by a few parameters, namely the total open porosity, the average pore size, and the standard deviation of pore size. These design parameters are set exactly and independently; however, setting the standard deviation of pore sizes requires some number of pores in the representative volume element (RVE), and this number is a procedure parameter. Another pore structure parameter, the average size of the windows linking the pores, may be indirectly affected: it is only weakly controlled by the maximal pore-penetration factor, and it also depends on the porosity and pore size. The proposed methodology for testing microstructure designs of sound absorbing porous media applies multi-scale modeling in which important transport parameters responsible for sound propagation in a porous medium are calculated from the microstructure using the generated RVE, in order to estimate the sound velocity and absorption of the designed material.

  15. Uncertainty propagation for SPECT/CT-based renal dosimetry in 177Lu peptide receptor radionuclide therapy

    NASA Astrophysics Data System (ADS)

    Gustafsson, Johan; Brolin, Gustav; Cox, Maurice; Ljungberg, Michael; Johansson, Lena; Sjögreen Gleisner, Katarina

    2015-11-01

    A computer model of a patient-specific clinical 177Lu-DOTATATE therapy dosimetry system is constructed and used for investigating the variability of renal absorbed dose and biologically effective dose (BED) estimates. As patient models, three anthropomorphic computer phantoms coupled to a pharmacokinetic model of 177Lu-DOTATATE are used. Aspects included in the dosimetry-process model are the gamma-camera calibration via measurement of the system sensitivity, selection of imaging time points, generation of mass-density maps from CT, SPECT imaging, volume-of-interest delineation, calculation of the absorbed-dose rate via a combination of local energy deposition for electrons and Monte Carlo simulations of photons, curve fitting, and integration to absorbed dose and BED. By introducing variabilities in these steps, the combined uncertainty in the output quantity is determined. The importance of different sources of uncertainty is assessed by observing the decrease in standard deviation when removing a particular source. The obtained absorbed dose and BED standard deviations are approximately 6%, and slightly higher if the root mean square error is considered. The most important sources of variability are the compensation for partial volume effects via a recovery coefficient and the gamma-camera calibration via the system sensitivity.

  16. Prospective surveillance of semen quality in the workplace

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schenker, M.B.; Samuels, S.J.; Perkins, C.

    We performed a prospective surveillance of semen quality among workers in the plant where 1,2-dibromo-3-chloropropane was first recognized as an occupational cause of impaired semen quality and of infertility. All male employees of the Agricultural Chemical Division were required to participate. Ninety-seven workers (92% participation) provided 258 semen samples over the 4 years of the program. Most samples were analyzed at the plant with a mini-laboratory designed for the study. Motility and shape measures were made objectively. Sixty-six subjects (68%) were non-azoospermic. Generalized multiple regression showed no significant predictors for any response, with the exception of the motility measures, which were reduced with longer times between ejaculation and assay. Between- and within-person standard deviations and correlations were calculated. Comparison of this population with fertile artificial insemination donors (16 men, 498 ejaculates) revealed generally higher ejaculate-to-ejaculate standard deviations in the worker samples. This is probably due to less well controlled conditions of sperm collection in the workplace setting. For cross-sectional studies, one ejaculate per worker is recommended as sufficient; for estimating an individual worker's mean, even three ejaculates may not provide enough precision.

  17. Analysis of the landscape complexity and heterogeneity of the Pantanal wetland.

    PubMed

    Miranda, C S; Gamarra, R M; Mioto, C L; Silva, N M; Conceição Filho, A P; Pott, A

    2018-05-01

    This is the first report on analysis of the habitat complexity and heterogeneity of the Pantanal wetland. The Pantanal encompasses a peculiar mosaic of environments, and it is important to evaluate and monitor this area with respect to the conservation of biodiversity. Our objective was to indirectly measure the habitat complexity and heterogeneity of the mosaic forming the sub-regions of the Pantanal by means of remote sensing. We obtained free images of the Normalized Difference Vegetation Index (NDVI) from the MODIS sensor and calculated the mean value (complexity) and standard deviation (heterogeneity) for each sub-region in the years 2000, 2008 and 2015. The sub-regions of Poconé, Canoeira, Paraguai and Aquidauana presented the highest values of complexity (mean NDVI), between 0.69 and 0.64, in the evaluated years. The highest horizontal heterogeneity (NDVI standard deviation) was observed in the sub-region of Tuiuiú, with values of 0.19 in the years 2000 and 2015, and 0.21 in 2008. We conclude that the use of NDVI to estimate landscape parameters is an efficient tool for assessment and monitoring of the complexity and heterogeneity of the Pantanal habitats, applicable to other regions.
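
    The two landscape metrics used here, mean NDVI for complexity and NDVI standard deviation for heterogeneity, reduce to simple array statistics once a sub-region mask is available. The sketch below assumes a NumPy array standing in for a MODIS NDVI tile; the mask and values are hypothetical.

        import numpy as np

        def landscape_metrics(ndvi, region_mask):
            """Complexity (mean NDVI) and heterogeneity (NDVI standard
            deviation) over one sub-region."""
            values = ndvi[region_mask]
            return values.mean(), values.std(ddof=1)

        rng = np.random.default_rng(1)
        ndvi = rng.uniform(0.2, 0.9, size=(100, 100))  # stand-in for an NDVI tile
        mask = np.zeros_like(ndvi, dtype=bool)
        mask[:50, :50] = True                          # hypothetical sub-region
        c, h = landscape_metrics(ndvi, mask)
        print(f"complexity={c:.2f}, heterogeneity={h:.2f}")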

  18. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    PubMed

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
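
    The essence of the ABC approach can be illustrated with a toy rejection sampler: draw candidate (mean, SD) pairs, simulate a sample of the reported size, and keep the candidates whose simulated summary statistics land closest to the reported ones. This sketch assumes a normal data model and median/min/max summaries; the published method is more general and more careful about priors and distance measures.

        import numpy as np

        def abc_mean_sd(median, minimum, maximum, n,
                        n_draws=50_000, keep=500, seed=0):
            """Toy ABC estimate of (mean, SD) from the median/min/max of a
            sample of size n, assuming normally distributed data."""
            rng = np.random.default_rng(seed)
            mus = rng.uniform(minimum, maximum, n_draws)
            sigmas = rng.uniform(1e-6, maximum - minimum, n_draws)
            target = np.array([median, minimum, maximum])
            dist = np.empty(n_draws)
            for i in range(n_draws):
                s = rng.normal(mus[i], sigmas[i], n)
                dist[i] = np.linalg.norm(
                    np.array([np.median(s), s.min(), s.max()]) - target)
            best = np.argsort(dist)[:keep]      # simple rejection step
            return mus[best].mean(), sigmas[best].mean()

        print(abc_mean_sd(median=50.0, minimum=30.0, maximum=75.0, n=40))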

  19. Transport Coefficients from Large Deviation Functions

    NASA Astrophysics Data System (ADS)

    Gao, Chloe; Limmer, David

    2017-10-01

    We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying only on equilibrium fluctuations, and is statistically efficient, employing trajectory-based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energy. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.

  20. Electrochemistry of moexipril: experimental and computational approach and voltammetric determination.

    PubMed

    Taşdemir, Hüdai I; Kiliç, E

    2014-09-01

    The electrochemistry of moexipril (MOE) was studied by electrochemical methods, with theoretical calculations performed at B3LYP/6-31+G(d)//AM1. Cyclic voltammetric studies were carried out based on a reversible, adsorption-controlled reduction peak at -1.35 V on a hanging mercury drop electrode (HMDE). An irreversible, diffusion-controlled oxidation peak at 1.15 V on a glassy carbon electrode (GCE) was also employed. Potential values are given against Ag/AgCl (3.0 M KCl), and measurements were performed in Britton-Robinson buffer of pH 5.5. Tentative electrode mechanisms were proposed according to the experimental results and ab initio calculations. Square-wave adsorptive stripping voltammetric methods were developed and validated for quantification of MOE in pharmaceutical preparations. The linear working range was established as 0.03-1.35 microM for the HMDE and 0.2-20.0 microM for the GCE. The limit of quantification (LOQ) was calculated to be 0.032 and 0.47 microM for the HMDE and GCE, respectively. The methods were successfully applied to assay the drug in tablets by calibration and standard addition methods, with good recoveries between 97.1% and 106.2% and relative standard deviations less than 10%.

  1. On Teaching about the Coefficient of Variation in Introductory Statistics Courses

    ERIC Educational Resources Information Center

    Trafimow, David

    2014-01-01

    The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
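
    The relationship the article exploits is simply s = CV * mean, so the standard deviation can be discussed, or recovered, through the mean. A two-line illustration with made-up data:

        import numpy as np

        x = np.array([9.8, 10.1, 10.4, 9.9, 10.3])
        cv = x.std(ddof=1) / x.mean()   # coefficient of variation
        print(cv, cv * x.mean())        # the SD is recovered as CV * mean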

  2. Monte Carlo based toy model for fission process

    NASA Astrophysics Data System (ADS)

    Kurniadi, R.; Waris, A.; Viridi, S.

    2014-09-01

    There are many models and calculation techniques for obtaining a visible image of the fission yield process. In particular, fission yield can be calculated using two approaches, namely a macroscopic approach and a microscopic approach. This work proposes another approach, in which the nucleus is treated as a toy model; hence, the fission process does not completely represent the real fission process in nature. The toy model is formed by a Gaussian distribution of random numbers that randomizes distances, such as the distance between a particle and the central point. The scission process is started by smashing the compound nucleus central point into two parts, a left central point and a right central point. These three points have different Gaussian distribution parameters, namely the means (μCN, μL, μR) and standard deviations (σCN, σL, σR). By overlaying the three distributions, the number of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is then repeated by changing σL and σR randomly.
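
    One plausible reading of the overlay step is sketched below: particle positions are drawn from the compound-nucleus Gaussian, and each particle is assigned to whichever fragment Gaussian has the larger density at its position. The assignment rule and all parameter values are assumptions for illustration; the paper's actual iteration scheme may differ.

        import numpy as np
        from scipy.stats import norm

        def toy_fission(n_particles=10_000, mu_l=-1.0, mu_r=1.0,
                        sigma_cn=1.0, sigma_l=0.7, sigma_r=0.7, seed=0):
            """Count particles 'trapped' by the left and right central
            points after one smashing step (illustrative rule only)."""
            rng = np.random.default_rng(seed)
            x = rng.normal(0.0, sigma_cn, n_particles)   # compound nucleus
            p_left = norm.pdf(x, mu_l, sigma_l)
            p_right = norm.pdf(x, mu_r, sigma_r)
            n_left = int((p_left > p_right).sum())
            return n_left, n_particles - n_left

        print(toy_fission())   # (N_L, N_R)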

  3. Calculation of Weibull strength parameters and Batdorf flaw-density constants for volume- and surface-flaw-induced fracture in ceramics

    NASA Technical Reports Server (NTRS)

    Shantaram, S. Pai; Gyekenyesi, John P.

    1989-01-01

    The calculation of the shape and scale parameters of the two-parameter Weibull distribution is described using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics, with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of the shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It is also shown how to calculate the Batdorf flaw-density constants from the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.
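
    For the maximum likelihood step, SciPy's two-parameter Weibull fit (location fixed at zero) gives the shape and scale parameters, from which the Weibull mean and standard deviation follow. This is a generic sketch with made-up strengths, not the SCARE program's algorithm.

        import numpy as np
        from scipy.stats import weibull_min

        # Hypothetical fracture strengths (MPa) for one specimen set.
        strengths = np.array([312., 334., 289., 350., 301.,
                              327., 298., 341., 317., 306.])

        # Two-parameter Weibull: shape = Weibull modulus m,
        # scale = characteristic strength.
        shape, loc, scale = weibull_min.fit(strengths, floc=0)

        mean, var = weibull_min.stats(shape, loc=0, scale=scale, moments="mv")
        print(shape, scale, mean, np.sqrt(var))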

  4. The development and investigation of a prototype three-dimensional compensator for whole brain radiation therapy

    NASA Astrophysics Data System (ADS)

    Keall, Paul; Arief, Isti; Shamas, Sofia; Weiss, Elisabeth; Castle, Steven

    2008-05-01

    Whole brain radiation therapy (WBRT) is the standard treatment for patients with brain metastases, and is often used in conjunction with stereotactic radiotherapy for patients with a limited number of brain metastases, as well as for prophylactic cranial irradiation. The use of open fields (conventionally used for WBRT) leads to higher doses at the brain periphery if dose is prescribed to the brain center at the largest lateral radius. These dose variations potentially compromise treatment efficacy and translate to increased side effects. The goal of this research was to design and construct a 3D 'brain wedge' to compensate dose heterogeneities in WBRT. Radiation transport theory was invoked to calculate the desired shape of a wedge to achieve a uniform dose distribution at the sagittal plane for an ellipsoidal irradiated medium. The calculations yielded a smooth 3D wedge design to account for the missing tissue at the peripheral areas of the brain. A wedge was machined based on the calculation results. Three ellipsoid phantoms, spanning the mean and ± two standard deviations of the mean cranial dimensions, were constructed, representing 95% of the adult population. Film was placed at the sagittal plane of each of the three phantoms and irradiated with 6 MV photons, with the wedge in place. Sagittal plane isodose plots for the three phantoms demonstrated the feasibility of this wedge to create a homogeneous distribution, with similar results observed for the three phantom sizes, indicating that a single wedge may be sufficient to cover 95% of the adult population. The sagittal dose is a reasonable estimate of the off-axis dose for whole brain radiation therapy. Comparing the dose with and without the wedge, the average minimum dose was higher (90% versus 86%), the maximum dose was lower (107% versus 113%), and the dose variation was lower (one standard deviation: 2.7% versus 4.6%). In summary, a simple and effective 3D wedge for whole brain radiotherapy has been developed. The wedge gives a more uniform dose distribution than commonly used techniques. Further development and shape optimization may be necessary prior to clinical implementation.

  5. Excellent reliability of the Hamilton Depression Rating Scale (HDRS-21) in Indonesia after training.

    PubMed

    Istriana, Erita; Kurnia, Ade; Weijers, Annelies; Hidayat, Teddy; Pinxten, Lucas; de Jong, Cor; Schellekens, Arnt

    2013-09-01

    The Hamilton Depression Rating Scale (HDRS) is the most widely used depression rating scale worldwide. Reliability of HDRS has been reported mainly from Western countries. The current study tested the reliability of HDRS ratings among psychiatric residents in Indonesia, before and after HDRS training. The hypotheses were that: (i) prior to the training reliability of HDRS ratings is poor; and (ii) HDRS training can improve reliability of HDRS ratings to excellent levels. Furthermore, we explored cultural validity at item level. Videotaped HDRS interviews were rated by 30 psychiatric residents before and after 1 day of HDRS training. Based on a gold standard rating, percentage correct ratings and deviation from the standard were calculated. Correct ratings increased from 83% to 99% at item level and from 70% to 100% for the total rating. The average deviation from the gold standard rating improved from 0.07 to 0.02 at item level and from 2.97 to 0.46 for the total rating. HDRS assessment by psychiatric trainees in Indonesia without prior training is unreliable. A short, evidence-based HDRS training improves reliability to near perfect levels. The outlined training program could serve as a template for HDRS trainings. HDRS items that may be less valid for assessment of depression severity in Indonesia are discussed. Copyright © 2013 Wiley Publishing Asia Pty Ltd.

  6. Informative Bayesian Type A uncertainty evaluation, especially applicable to a small number of observations

    NASA Astrophysics Data System (ADS)

    Cox, M.; Shirono, K.

    2017-10-01

    A criticism levelled at the Guide to the Expression of Uncertainty in Measurement (GUM) is that it is based on a mixture of frequentist and Bayesian thinking. In particular, the GUM's Type A (statistical) uncertainty evaluations are frequentist, whereas the Type B evaluations, using state-of-knowledge distributions, are Bayesian. In contrast, making the GUM fully Bayesian implies, among other things, that a conventional objective Bayesian approach to Type A uncertainty evaluation for a number n of observations leads to the impractical consequence that n must be at least equal to 4, thus presenting a difficulty for many metrologists. This paper presents a Bayesian analysis of Type A uncertainty evaluation that applies for all n ≥ 2, as in the frequentist analysis in the current GUM. The analysis is based on assuming that the observations are drawn from a normal distribution (as in the conventional objective Bayesian analysis), but uses an informative prior based on lower and upper bounds for the standard deviation of the sampling distribution for the quantity under consideration. The main outcome of the analysis is a closed-form mathematical expression for the factor by which the standard deviation of the mean observation should be multiplied to calculate the required standard uncertainty. Metrological examples are used to illustrate the approach, which is straightforward to apply using a formula or look-up table.

  7. Evaluation of measurement uncertainty of glucose in clinical chemistry.

    PubMed

    Berçik Inal, B; Koldas, M; Inal, H; Coskun, C; Gümüs, A; Döventas, Y

    2007-04-01

    The definition of the uncertainty of measurement used in the International Vocabulary of Basic and General Terms in Metrology (VIM) is a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises many components. In addition to every parameter, a measurement uncertainty value should be given by all institutions that have been accredited; this value shows the reliability of the measurement. The GUM, published by NIST, contains uncertainty directions. The Eurachem/CITAC Guide CG4 was published by the Eurachem/CITAC Working Group in the year 2000. Both offer a mathematical model with which uncertainty can be calculated. There are two types of uncertainty evaluation in measurement: type A is the evaluation of uncertainty through statistical analysis, and type B is the evaluation of uncertainty through other means, for example, a certified reference material. The Eurachem Guide uses four types of distribution functions: (1) a rectangular distribution, used when a certificate gives limits without specifying a level of confidence (u(x) = a/√3); (2) a triangular distribution, used when values lie near the same point (u(x) = a/√6); (3) a normal distribution, in which an uncertainty is given in the form of a standard deviation s, a relative standard deviation s/√n, or a coefficient of variation CV% without specifying the distribution (a = certificate value, u = standard uncertainty); and (4) a confidence interval.
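
    The divisors quoted for the rectangular and triangular cases translate directly into code. A minimal sketch, assuming only the half-width a of the quoted interval is known:

        import math

        def standard_uncertainty(a, distribution):
            """Standard uncertainty from a half-width a, using the divisors
            quoted in the abstract (Eurachem/GUM conventions)."""
            divisors = {"rectangular": math.sqrt(3),  # limits, no confidence level
                        "triangular": math.sqrt(6)}   # values cluster at the centre
            return a / divisors[distribution]

        # Example: a certificate quoting +/- 0.05 mmol/L with no stated
        # confidence level is treated as rectangular.
        print(standard_uncertainty(0.05, "rectangular"))   # ~0.0289 mmol/L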

  8. Flammability of gas mixtures. Part 1: fire potential.

    PubMed

    Schröder, Volkmar; Molnarne, Maria

    2005-05-20

    International and European dangerous substances and dangerous goods regulations refer to the standard ISO 10156 (1996). This standard includes a test method and a calculation procedure for determining the flammability of gases and gas mixtures in air. The substance indices for the calculation, the so-called "Tci values", which characterise the fire potential, are provided as well. These ISO Tci values are derived from explosion diagrams in older literature sources that do not take into account the test method and the test apparatus. However, since explosion limits are influenced by apparatus parameters, the Tci values and lower explosion limits given in the ISO tables are inconsistent with those measured according to the test method of the same standard. In consequence, applying the ISO Tci values can result in incorrect classifications. In this paper, internationally accepted explosion limit test methods were evaluated and Tci values were derived from explosion diagrams; for this purpose, an "open vessel" method with a flame propagation criterion was favoured. These values were compared with the Tci values listed in ISO 10156, and in most cases significant deviations were found. A detailed study of the influence of inert gases on flammability is the objective of Part 2.

  9. Reliable Viscosity Calculation from Equilibrium Molecular Dynamics Simulations: A Time Decomposition Method.

    PubMed

    Zhang, Yong; Otani, Akihito; Maginn, Edward J

    2015-08-11

    Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time tcut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
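
    The time decomposition idea can be sketched as follows: average the running Green-Kubo integrals over independent trajectories, weight a double-exponential fit by the across-trajectory standard deviation, and read off the long-time limit. The fit form and initial guesses below are one common choice, and the array shapes are assumptions, not the authors' exact implementation.

        import numpy as np
        from scipy.optimize import curve_fit

        def double_exp(t, a, alpha, tau1, tau2):
            """Double-exponential form for the averaged running integral."""
            return a * (alpha * tau1 * (1 - np.exp(-t / tau1))
                        + (1 - alpha) * tau2 * (1 - np.exp(-t / tau2)))

        def fit_viscosity(t, integrals):
            """t: common time grid; integrals: running GK integrals,
            shape (n_trajectories, n_times)."""
            mean = integrals.mean(axis=0)
            sd = integrals.std(axis=0, ddof=1)
            sd[sd == 0] = sd[sd > 0].min()     # avoid zero weights at t = 0
            p0 = [mean[-1] / (0.3 * t[-1]), 0.5, t[-1] / 10, t[-1] / 2]
            popt, _ = curve_fit(double_exp, t, mean, p0=p0, sigma=sd)
            a, alpha, tau1, tau2 = popt
            return a * (alpha * tau1 + (1 - alpha) * tau2)   # t -> infinity limit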

  10. Dietary Toxicity Thresholds and Ecological Risks for Birds and Mammals Based on Species Sensitivity Distributions.

    PubMed

    Korsman, John C; Schipper, Aafke M; Hendriks, A Jan

    2016-10-04

    Species sensitivity distributions (SSDs) are commonly used in regulatory procedures and ecological risk assessments. Yet, most toxicity threshold and risk assessment studies are based on invertebrates and fish. In the present study, no-observed-effect concentrations (NOECs) specific to birds and mammals were used to derive SSDs and corresponding hazardous concentrations for 5% of the species (HC5 values). This was done for 41 individual substances as well as for subsets of substances aggregated based on their toxic mode of action (MoA). In addition, potential differences in SSD parameters (mean and standard deviation) were investigated in relation to MoA and end point (growth, reproduction, and survival). The means of neurotoxic and respirotoxic compounds were significantly lower than those of narcotics, whereas no differences were found between end points. The standard deviations of the SSDs were similar across MoAs and end points. Finally, the SSDs obtained were used in a case study by calculating Ecological Risks (ER) and multisubstance Potentially Affected Fractions of species (msPAF) based on 19 chemicals in 10 Northwestern European estuaries and coastal areas. The assessment showed that the risks were all below 2.6 × 10(-2). However, the calculated risks underestimate the actual risks of chemicals in these areas, because the potential impacts of substances that were not measured in the field, or for which no SSD was available, were not included in the risk assessment. The SSDs obtained can be used in regulatory procedures and for assessing the impacts of contaminants on birds and mammals from fish contaminant monitoring programs.
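
    A common way to construct such an SSD, assumed here purely for illustration, is to fit a normal distribution to log10-transformed NOECs and take its 5th percentile as the HC5; the paper's exact fitting procedure may differ.

        import numpy as np
        from scipy.stats import norm

        def hc5_from_noecs(noecs):
            """HC5 from a log-normal SSD fitted to NOEC values."""
            log_noecs = np.log10(noecs)
            mu, sd = log_noecs.mean(), log_noecs.std(ddof=1)
            return 10 ** norm.ppf(0.05, loc=mu, scale=sd)

        # Hypothetical avian NOECs (mg per kg diet) for one substance.
        print(hc5_from_noecs(np.array([12., 35., 8., 60., 22., 15., 44.])))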

  11. The composition of intern work while on call.

    PubMed

    Fletcher, Kathlyn E; Visotcky, Alexis M; Slagle, Jason M; Tarima, Sergey; Weinger, Matthew B; Schapira, Marilyn M

    2012-11-01

    The work of house staff is being increasingly scrutinized as duty hours continue to be restricted. To describe the distribution of work performed by internal medicine interns while on call. Prospective time motion study on general internal medicine wards at a VA hospital affiliated with a tertiary care medical center and internal medicine residency program. Internal medicine interns. Trained observers followed interns during a "call" day. The observers continuously recorded the tasks performed by interns, using customized task analysis software. We measured the amount of time spent on each task. We calculated means and standard deviations for the amount of time spent on six categories of tasks: clinical computer work (e.g., writing orders and notes), non-patient communication, direct patient care (work done at the bedside), downtime, transit and teaching/learning. We also calculated means and standard deviations for time spent on specific tasks within each category. We compared the amount of time spent on the top three categories using analysis of variance. The largest proportion of intern time was spent in clinical computer work (40 %). Thirty percent of time was spent on non-patient communication. Only 12 % of intern time was spent at the bedside. Downtime activities, transit and teaching/learning accounted for 11 %, 5 % and 2 % of intern time, respectively. Our results suggest that during on call periods, relatively small amounts of time are spent on direct patient care and teaching/learning activities. As intern duty hours continue to decrease, attention should be directed towards preserving time with patients and increasing time in education.

  12. [Polar S810 as an alternative resource to the use of the electrocardiogram in the 4-second exercise test].

    PubMed

    Pimentel, Alan Santos; Alves, Eduardo da Silva; Alvim, Rafael de Oliveira; Nunes, Rogério Tasca; Costa, Carlos Magno Amaral; Lovisi, Júlio Cesar Moraes; Perrout de Lima, Jorge Roberto

    2010-05-01

    The 4-second exercise test (T4s) evaluates cardiac vagal tone during the initial heart rate (HR) transient at sudden dynamic exercise, through identification of the cardiac vagal index (CVI) obtained from the electrocardiogram (ECG). To evaluate the use of the Polar S810 heart rate monitor (HRM) as an alternative to the electrocardiogram in the 4-second exercise test, 49 male individuals (25 +/- 20 years, 176 +/- 12 cm, 74 +/- 6 kg) underwent the T4s in this study. The RR intervals were recorded simultaneously by ECG and HRM. We calculated the mean and standard deviation of the last RR interval of the pre-exercise period or of the first RR interval of the exercise period, whichever was longer (RRB), of the shortest RR interval of the exercise period (RRC), and of the CVI obtained by ECG and HRM. We used the Student t-test for dependent samples (p ≤ 0.05) to test the significance of the differences between means. To quantify the agreement between the ECG and the HRM, we used linear regression to calculate Pearson's correlation coefficient, together with the strategy proposed by Bland and Altman. Linear regression showed r(2) values of 0.9999 for RRB, 0.9997 for RRC, and 0.9996 for CVI. The Bland and Altman analysis gave standard deviations of 0.92 ms for RRB, 0.86 ms for RRC, and 0.002 for CVI. The Polar S810 HRM was more efficient than the ECG in the application of the T4s.
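
    The Bland and Altman comparison used here reduces to the bias (mean difference) of the paired readings and its limits of agreement. A minimal sketch with made-up paired RR intervals:

        import numpy as np

        def bland_altman(ecg_rr, hrm_rr):
            """Bias and 95% limits of agreement for paired RR intervals (ms)."""
            diff = np.asarray(hrm_rr, float) - np.asarray(ecg_rr, float)
            bias, sd = diff.mean(), diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        ecg = [812, 640, 655, 798, 701]   # illustrative values, ms
        hrm = [813, 641, 654, 799, 702]
        print(bland_altman(ecg, hrm))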

  13. Improved ambiguity resolution for URTK with dynamic atmosphere constraints

    NASA Astrophysics Data System (ADS)

    Tang, Weiming; Liu, Wenjian; Zou, Xuan; Li, Zongnan; Chen, Liang; Deng, Chenlong; Shi, Chuang

    2016-12-01

    A raw-observation processing method with prior knowledge of the ionospheric delay can strengthen ambiguity resolution (AR), but it does not make full use of the relatively long wavelength of the wide-lane (WL) observation. Furthermore, the accuracy of the atmospheric delays calculated from regional augmentation information varies considerably, while the atmospheric constraint used in current methods is usually set to an empirical value. A proper constraint, which matches the accuracy of the calculated atmospheric delays, can most effectively compensate the residual systematic biases caused by large inter-station distances. Therefore, the standard deviation of the residual atmospheric parameters should be fine-tuned. This paper presents an atmosphere-constrained AR method for the undifferenced network RTK (URTK) rover, whose ambiguities are sequentially fixed according to their wavelengths. Furthermore, this research systematically analyzes the residual atmospheric error and finds that it varies mainly with the positional relationship between the rover and the chosen reference stations. More importantly, its ionospheric part at a given location is also cyclically influenced every day. Therefore, the standard deviation of the residual ionospheric error can be modeled by a daily repeated cosine or other function with the help of data from the previous day, and applied by rovers as a pseudo-observation. With data collected at 29 stations of a continuously operating reference station network in Guangdong Province (GDCORS) in China, the efficiency of the proposed approach is confirmed: the success and error rates of AR improve by 10-20% compared with the WL-L1-IF method, and the positioning accuracy is considerably better.

  14. Histogram-based quantitative evaluation of endobronchial ultrasonography images of peripheral pulmonary lesion.

    PubMed

    Morikawa, Kei; Kurimoto, Noriaki; Inoue, Takeo; Mineshita, Masamichi; Miyazawa, Teruomi

    2015-01-01

    Endobronchial ultrasonography using a guide sheath (EBUS-GS) is an increasingly common bronchoscopic technique, but currently no methods have been established to quantitatively evaluate EBUS images of peripheral pulmonary lesions. The purpose of this study was to evaluate whether histogram data collected from EBUS-GS images can contribute to the diagnosis of lung cancer. Histogram-based analyses focusing on the brightness of EBUS images were retrospectively conducted: 60 patients (38 lung cancer; 22 inflammatory diseases) with clear EBUS images were included. For each patient, a 400-pixel region of interest, typically located at a 3- to 5-mm radius from the probe, was selected from EBUS images recorded during bronchoscopy. Histogram height, width, height/width ratio, standard deviation, kurtosis and skewness were investigated as diagnostic indicators. Median histogram height, width, height/width ratio and standard deviation were significantly different between lung cancer and benign lesions (all p < 0.01). With a cutoff value for the standard deviation of 10.5, lung cancer could be diagnosed with an accuracy of 81.7%. The other characteristics investigated were inferior to the histogram standard deviation. The histogram standard deviation therefore appears to be the most useful characteristic for diagnosing lung cancer from EBUS images. © 2015 S. Karger AG, Basel.
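
    The histogram features in question are ordinary moment statistics of the ROI brightness values; with the reported cutoff they yield a one-line classifier. The ROI below is synthetic, and the cutoff is simply the value reported in the abstract.

        import numpy as np
        from scipy.stats import kurtosis, skew

        def histogram_features(roi, sd_cutoff=10.5):
            """Brightness-histogram features of an EBUS ROI plus the SD-based
            call (SD > 10.5 suggested malignancy in this series)."""
            values = np.asarray(roi, dtype=float).ravel()
            return {"sd": values.std(ddof=1),
                    "kurtosis": kurtosis(values),
                    "skewness": skew(values),
                    "suggests_malignancy": values.std(ddof=1) > sd_cutoff}

        rng = np.random.default_rng(2)
        print(histogram_features(rng.normal(120.0, 14.0, size=(20, 20))))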

  15. Role of the standard deviation in the estimation of benchmark doses with continuous data.

    PubMed

    Gaylor, David W; Slikker, William

    2004-12-01

    For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
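
    The bias mechanism is the usual variance decomposition: if single measurements per animal carry independent measurement error, the observed standard deviation satisfies s_obs^2 = s_a^2 + s_m^2 (an assumed model, consistent with the abstract's argument), so using s_obs in place of s_a inflates the apparent variability. A numerical check of the one-third rule:

        import math

        def among_animal_sd(sd_observed, sd_measurement):
            """Recover s_a from the observed SD and the measurement-error SD,
            assuming independent errors: s_obs^2 = s_a^2 + s_m^2."""
            return math.sqrt(sd_observed**2 - sd_measurement**2)

        s_a, s_m = 3.0, 1.0                    # s_m is one-third of s_a
        s_obs = math.sqrt(s_a**2 + s_m**2)     # ~3.16, only ~5% above s_a
        print(s_obs, among_animal_sd(s_obs, s_m))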

  16. Statistical models for estimating daily streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the lengths of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.

  17. Estimation of Hammett sigma constants of substituted benzenes through accurate density-functional calculation of core-electron binding energy shifts

    NASA Astrophysics Data System (ADS)

    Takahata, Yuji; Chong, Delano P.

    For substituted benzenes such as p-F-C6H4-Z, Linderberg et al. [1] demonstrated the validity of an equation similar to ΔCEBE ≈ κσ, where ΔCEBE is the difference between the core-electron binding energy (CEBE) of the fluorinated carbon in p-F-C6H4-Z and that in FC6H5, the parameter κ is a function of the type of reaction, and σ is the Hammett substituent constant. In this work, the CEBEs of ring carbon atoms for a series of para-disubstituted molecules p-F-C6H4-Z were first calculated using density functional theory (DFT) with the scheme ΔEKS(PW86-PW91)/TZP+Crel//HF/6-31G*. An average absolute deviation of 0.13 eV from experiment was obtained for the CEBEs. We then performed a linear regression analysis of the form Y = A + B·X for a plot of Hammett σp constants against the calculated shifts ΔCEBE (in eV) for the fluorinated carbon. The results were A = -0.08 and B = 1.01, with correlation coefficient R = 0.973, standard deviation 0.12, and P < 0.0001. The intercept A of the fitted line, close to zero, shows that the Hammett σp constant is proportional to the calculated ΔCEBE. The slope B of the straight line gives an estimate of the parameter κ. Similar statistical correlations were obtained for the carbon atoms ortho and meta to the substituent Z.
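
    The regression step is an ordinary least-squares line through (ΔCEBE, σp) pairs. The sketch below uses invented pairs purely to show the mechanics; the paper's actual values come from its DFT calculations.

        from scipy.stats import linregress

        sigma_p    = [-0.27, -0.17, 0.00, 0.23, 0.54, 0.78]   # hypothetical
        delta_cebe = [-0.30, -0.20, 0.00, 0.21, 0.55, 0.80]   # hypothetical, eV

        fit = linregress(delta_cebe, sigma_p)   # sigma_p = A + B * delta_CEBE
        print(f"A={fit.intercept:.2f}, B={fit.slope:.2f}, R={fit.rvalue:.3f}")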

  18. What makes children with cerebral palsy vulnerable to malnutrition? Findings from the Bangladesh cerebral palsy register (BCPR).

    PubMed

    Jahan, Israt; Muhit, Mohammad; Karim, Tasneem; Smithers-Sheedy, Hayley; Novak, Iona; Jones, Cheryl; Badawi, Nadia; Khandaker, Gulam

    2018-04-16

    To assess the nutritional status and underlying risk factors for malnutrition among children with cerebral palsy in rural Bangladesh. We used data from the Bangladesh Cerebral Palsy Register, a prospective population-based surveillance of children with cerebral palsy aged 0-18 years in a rural subdistrict of Bangladesh (i.e., Shahjadpur). Socio-demographic, clinical and anthropometric measurements were collected using the Bangladesh Cerebral Palsy Register record form. Z scores were calculated using World Health Organization Anthro and World Health Organization AnthroPlus software. A total of 726 children with cerebral palsy were registered in the Bangladesh Cerebral Palsy Register (mean age 7.6 years, standard deviation 4.5, 38.1% female) between January 2015 and December 2016. More than two-thirds of the children were underweight (70.0%) and stunted (73.1%). Mean z scores for weight for age, height for age and weight for height were -2.8 (standard deviation 1.8), -3.1 (standard deviation 2.2) and -1.2 (standard deviation 2.3), respectively. Moderate to severe undernutrition (i.e., both underweight and stunting) was significantly associated with age, monthly family income, Gross Motor Function Classification System level and neurological type of cerebral palsy. The burden of undernutrition is high among children with cerebral palsy in rural Bangladesh and is augmented by both poverty and clinical severity. Enhancing clinical nutritional services for children with cerebral palsy should be a public health priority in Bangladesh. Implications for Rehabilitation: Population-based surveillance data on the nutritional status of children with cerebral palsy in Bangladesh indicate a substantially high burden of malnutrition among children with CP in rural Bangladesh. Children with severe forms of cerebral palsy, for example higher Gross Motor Function Classification System (GMFCS) levels and tri/quadriplegic cerebral palsy, present the highest proportion of severe malnutrition; these vulnerable groups should therefore be the focus in designing nutrition intervention and rehabilitation programs. Disability-inclusive and disability-focused nutrition intervention programmes need to be kept as a priority in national nutrition policies and nutrition action plans, especially in low- and middle-income countries. Community-based management of malnutrition has the potential to overcome the poor nutritional status of children with disability (i.e., cerebral palsy). Global leaders such as the World Health Organization, along with national and international organizations, should take this into account and conduct further research to develop nutritional guidelines for this vulnerable population.

  19. Comparison of spectral estimators for characterizing fractionated atrial electrograms

    PubMed Central

    2013-01-01

    Background Complex fractionated atrial electrograms (CFAE) acquired during atrial fibrillation (AF) are commonly assessed using the discrete Fourier transform (DFT), but this can lead to inaccuracy. In this study, spectral estimators derived by averaging the autocorrelation function at lags were compared to the DFT. Method Bipolar CFAE of at least 16 s duration were obtained from pulmonary vein ostia and left atrial free wall sites (9 paroxysmal and 10 persistent AF patients). Power spectra were computed using the DFT and three other methods: 1. a novel spectral estimator based on signal averaging (NSE), 2. the NSE with harmonic removal (NSH), and 3. the autocorrelation function averaged at lags (AFA). Three spectral parameters were calculated: 1. the largest fundamental spectral peak, known as the dominant frequency (DF), 2. the DF amplitude (DA), and 3. the mean spectral profile (MP), which quantifies the noise floor level. For each spectral estimator and parameter, the significance of the difference between paroxysmal and persistent AF was determined. Results For all estimators, mean DA and mean DF values were higher in persistent AF, while the mean MP value was higher in paroxysmal AF. The differences in means between paroxysmals and persistents were highly significant for 3/3 NSE and NSH measurements and for 2/3 DFT and AFA measurements (p<0.001). For all estimators, the standard deviations in DA and MP values were higher in persistent AF, while the standard deviation in DF value was higher in paroxysmal AF. Differences in standard deviations between paroxysmals and persistents were highly significant in 2/3 NSE and NSH measurements, in 1/3 AFA measurements, and in 0/3 DFT measurements. Conclusions Measurements made from all four spectral estimators were in agreement as to whether the means and standard deviations of the three spectral parameters were greater in CFAE acquired from paroxysmal or from persistent AF patients. Since the measurements were consistent, use of two or more of these estimators for power spectral analysis can help evaluate CFAE more objectively and accurately, which may lead to improved clinical outcomes. Since the most significant differences overall were achieved using the NSE and NSH estimators, parameters measured from their spectra will likely be the most useful for detecting and discerning electrophysiologic differences in the AF substrate based upon frequency analysis of CFAE. PMID:23855345
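
    For orientation, the sketch below computes the three spectral parameters from a plain DFT power spectrum; the NSE, NSH and AFA estimators of the paper are not reproduced. The 3-12 Hz search band, the 977 Hz sampling rate and the use of the in-band spectrum mean as the noise-floor proxy are assumptions for illustration.

    import numpy as np

    def spectral_params(x, fs, band=(3.0, 12.0)):
        # DF, DA and MP from a DFT power spectrum (illustrative only).
        x = np.asarray(x, float) - np.mean(x)
        psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        sel = (freqs >= band[0]) & (freqs <= band[1])
        i = np.argmax(psd[sel])
        df = freqs[sel][i]      # dominant frequency
        da = psd[sel][i]        # DF amplitude
        mp = np.mean(psd[sel])  # mean spectral profile (noise-floor proxy)
        return df, da, mp

    fs = 977.0                  # Hz (assumed sampling rate)
    t = np.arange(int(16 * fs)) / fs
    x = np.sin(2 * np.pi * 6.0 * t) + 0.5 * np.random.randn(t.size)
    print(spectral_params(x, fs))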

  20. Alternative Internal Standard Calibration of an Indirect Enzymatic Analytical Method for 2-MCPD Fatty Acid Esters.

    PubMed

    Koyama, Kazuo; Miyazaki, Kinuko; Abe, Kousuke; Egawa, Yoshitsugu; Fukazawa, Toru; Kitta, Tadashi; Miyashita, Takashi; Nezu, Toru; Nohara, Hidenori; Sano, Takashi; Takahashi, Yukinari; Taniguchi, Hideji; Yada, Hiroshi; Yamazaki, Kumiko; Watanabe, Yomi

    2017-06-01

    An indirect enzymatic analysis method for the quantification of fatty acid esters of 2-/3-monochloro-1,2-propanediol (2/3-MCPD) and glycidol was developed, using the deuterated internal standard of each free-form component. Because 2-MCPD-d5 is difficult to obtain, a statistical method for calibration and quantification was devised in which it is substituted by 3-MCPD-d5, the internal standard used for the 3-MCPD calculation. Using data from a previous collaborative study, the current method for the determination of 2-MCPD content using 2-MCPD-d5 was compared to three alternative new methods using 3-MCPD-d5. The regression analysis showed that the alternative methods were unbiased compared to the current method. The relative standard deviation among the testing laboratories (RSD_R) was ≤15% and the Horwitz ratio was ≤1.0, a satisfactory value.
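
    The two reported precision statistics can be reproduced as follows; the between-laboratory results are invented, and the Horwitz prediction PRSD_R = 2*C^(-0.1505), with C the analyte concentration as a mass fraction, is the standard published form.

    import numpy as np

    def rsd_and_horrat(values, mass_fraction):
        # Among-laboratory relative standard deviation and Horwitz ratio.
        v = np.asarray(values, float)
        rsd_r = 100.0 * v.std(ddof=1) / v.mean()
        prsd_r = 2.0 * mass_fraction ** -0.1505   # Horwitz prediction
        return rsd_r, rsd_r / prsd_r

    labs = [1.02, 0.95, 1.10, 0.98, 1.05, 0.91]   # mg/kg, hypothetical labs
    rsd, hr = rsd_and_horrat(labs, mass_fraction=1.0e-6)  # 1 mg/kg = 1e-6
    print(f"RSD_R = {rsd:.1f}%, HorRat = {hr:.2f}")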

  1. Prevalence of alterations in the characteristics of smile symmetry in an adult population from southern Europe.

    PubMed

    Jiménez-Castellanos, Emilio; Orozco-Varo, Ana; Arroyo-Cruz, Gema; Iglesias-Linares, Alejandro

    2016-06-01

    Deviation from the facial midline and inclination of the dental midline or occlusal plane have been described as extremely influential in the layperson's perception of the overall esthetics of the smile. The purpose of this study was to determine the prevalence of deviation from the facial midline and inclination of the dental midline or occlusal plane in a selected sample. White participants from a European population (N=158; 93 women, 65 men) who met specific inclusion criteria were selected for the present study. Standardized 1:1 scale frontal photographs were made, and 3 variables were measured for all participants: midline deviation, midline inclination, and inclination of the occlusal plane. Software was used to measure midline deviation and inclination, taking the bipupillary line and the facial midline as references. Tests for normality of the sample were explored and descriptive statistics (means ±SD) were calculated. The chi-square test was used to evaluate differences in midline deviation, midline inclination, and occlusal plane inclination (α=.05). Frequencies of midline deviation (>2 mm), midline inclination (>3.5 degrees), and occlusal plane inclination (>2 degrees) were 31.64% (mean 2.7±1.23 mm), 10.75% (mean 7.9±3.57 degrees), and 25.9% (mean 9.07±3.16 degrees), respectively. No statistically significant differences (P>.05) were found between the sexes for any of the esthetic smile values. The proportion of participants with at least 1 altered parameter affecting smile esthetics was 51.9% in this population from southern Europe. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  2. Method and apparatus for in-situ detection and isolation of aircraft engine faults

    NASA Technical Reports Server (NTRS)

    Bonanni, Pierino Gianni (Inventor); Brunell, Brent Jerome (Inventor)

    2007-01-01

    A method for performing a fault estimation based on residuals of detected signals includes determining an operating regime based on a plurality of parameters, extracting predetermined noise standard deviations of the residuals corresponding to the operating regime and scaling the residuals, calculating a magnitude of a measurement vector of the scaled residuals and comparing the magnitude to a decision threshold value, extracting an average, or mean direction and a fault level mapping for each of a plurality of fault types, based on the operating regime, calculating a projection of the measurement vector onto the average direction of each of the plurality of fault types, determining a fault type based on which projection is maximum, and mapping the projection to a continuous-valued fault level using a lookup table.
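
    A schematic Python rendering of the claimed pipeline follows. All numbers, the unit fault directions and the linear level lookup are illustrative assumptions; the patent defines these per operating regime.

    import numpy as np

    def isolate_fault(residuals, noise_sd, fault_dirs, fault_maps, threshold):
        # 1. scale residuals by regime-specific noise standard deviations
        z = np.asarray(residuals, float) / np.asarray(noise_sd, float)
        # 2. detect: compare measurement-vector magnitude to the threshold
        if np.linalg.norm(z) <= threshold:
            return None, 0.0                      # no fault declared
        # 3. isolate: project onto each fault type's mean direction
        projections = fault_dirs @ z
        k = int(np.argmax(projections))
        # 4. map the winning projection to a fault level (lookup table)
        level = np.interp(projections[k], *fault_maps[k])
        return k, level

    dirs = np.array([[1.0, 0.0, 0.0], [0.0, 0.7071, 0.7071]])  # unit rows
    maps = [([0, 5, 10], [0, 0.5, 1.0]), ([0, 4, 8], [0, 0.5, 1.0])]
    print(isolate_fault([3.0, 0.2, -0.1], [0.5, 0.4, 0.6], dirs, maps, 2.0))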

  3. Holographic 3D multi-spot two-photon excitation for fast optical stimulation in brain

    NASA Astrophysics Data System (ADS)

    Takiguchi, Yu; Toyoda, Haruyoshi

    2017-04-01

    We report a holographic high-speed-access microscope for recording sensory-driven synaptic activity across all inputs to single living neurons in the intact cerebral cortex. The system is based on holographic multiple-beam generation with a spatial light modulator, and we have demonstrated the holographic excitation efficiency in several in vitro prototype systems. A 3D weighted iterative Fourier transform method using the Ewald sphere was adopted in consideration of calculation speed; multiple locations can be patterned in 3D with a single hologram. Although the standard deviation of spot intensities is still large owing to aberrations of the system and/or the hologram calculation, we successfully excited multiple locations on neurons in living mouse brain to monitor the calcium signals.

  4. Frequency modulation television analysis: Distortion analysis

    NASA Technical Reports Server (NTRS)

    Hodge, W. H.; Wong, W. H.

    1973-01-01

    Computer simulation is used to calculate the time-domain waveform of a standard T-pulse-and-bar test signal distorted in passing through an FM television system. The simulator includes flat or preemphasized systems and requires specification of the RF predetection filter characteristics. The predetection filters are modeled with frequency-symmetric Chebyshev (0.1-dB ripple) and Butterworth filters. The computer was used to calculate distorted output signals for sixty-four different specified systems, and the output waveforms are plotted for all sixty-four. Comparison of the plotted graphs indicates that a four-pole Chebyshev predetection filter causes slightly more signal distortion than a corresponding Butterworth filter, and that signal distortion increases as the number of poles increases. An increase in the peak deviation also increases signal distortion, as does the addition of preemphasis.

  5. The retest distribution of the visual field summary index mean deviation is close to normal.

    PubMed

    Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz

    2016-09-01

    When modelling optimum strategies for determining visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilk test was used to detect any deviation from normality, and kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilk normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed - or nearly so - in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
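
    The two statistical checks are easy to replicate; the sketch below uses simulated MD values in place of the study's measured fields (40 analyzed retests per observer after excluding the first two).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    md = rng.normal(loc=-1.5, scale=0.6, size=40)  # simulated MD retests (dB)
    w, p = stats.shapiro(md)                       # Shapiro-Wilk test
    k = stats.kurtosis(md, fisher=False)           # equals 3 under normality
    print(f"W = {w:.3f}, p = {p:.3f}, kurtosis = {k:.2f}")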

  6. 10 CFR 961.4 - Deviations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...

  7. 10 CFR 961.4 - Deviations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...

  8. 10 CFR 961.4 - Deviations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...

  9. 10 CFR 961.4 - Deviations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...

  10. 10 CFR 961.4 - Deviations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...

  11. Do health care workforce, population, and service provision significantly contribute to the total health expenditure? An econometric analysis of Serbia.

    PubMed

    Santric-Milicevic, M; Vasic, V; Terzic-Supic, Z

    2016-08-15

    In times of austerity, the availability of econometric health knowledge assists policy-makers in understanding and balancing health expenditure with health care plans within fiscal constraints. The objective of this study is to explore whether the health workforce supply of the public health care sector, the population number, and the utilization of inpatient care significantly contribute to total health expenditure. The dependent variable is the total health expenditure (THE) in Serbia from 2003 to 2011. The independent variables are the number of health workers employed in the public health care sector, the population number, and inpatient care discharges per 100 population. The statistical analyses include the quadratic interpolation method, natural logarithm and differentiation, and multiple linear regression analyses. The level of significance is set at P < 0.05. The regression model captures 90% of the variation in the observed dependent variable (adjusted R squared), and the model is significant (P < 0.001). Total health expenditure increased by 1.21 standard deviations for an increase of 1 standard deviation in the health workforce growth rate. Furthermore, it decreased by 1.12 standard deviations for an increase of 1 standard deviation in the (negative) population growth rate. Finally, the growth rate increased by 0.38 standard deviations for an increase of 1 standard deviation in the growth rate of inpatient care discharges per 100 population (P < 0.001). The study results demonstrate that the government has been making an effort to strongly control health budget growth. Exploring causal relationships between health expenditure and the health workforce is important for countries that are trying to consolidate their public health finances and achieve universal health coverage at the same time.

  12. [Development of ophthalmologic software for handheld devices].

    PubMed

    Grottone, Gustavo Teixeira; Pisa, Ivan Torres; Grottone, João Carlos; Debs, Fernando; Schor, Paulo

    2006-01-01

    The formulas for calculation of intraocular lenses (IOLs) have evolved since the first theoretical formulas by Fyodorov. Among the second-generation formulas, the SRK-I formula involves a simple calculation, taking into account only the anteroposterior (axial) length, the IOL constant and the average keratometry. As the formulas evolved, their complexity increased, making the reconfiguration of parameters in special situations impracticable. The production and development of software for this purpose can therefore help surgeons recalculate those values when needed. The aims were to conceive, develop and test Brazilian software for calculation of IOL dioptric power on handheld computers. For the development and programming of the IOL-calculation software, we used the PocketC program (OrbWorks Concentrated Software, USA). We compared the results collected from a gold-standard device (Ultrascan/Alcon Labs) with a simulation of 100 fictitious patients, using the same IOL parameters. The results were grouped as ULTRASCAN data and SOFTWARE data. Using the SRK/T formula, the range of parameters included keratometry between 35 and 55 D, axial length between 20 and 28 mm, and IOL constants of 118.7, 118.3 and 115.8. A Wilcoxon test showed that the groups do not differ (p=0.314). Values in the Ultrascan sample varied between 11.82 and 27.97; in the tested program the variation was practically identical (11.83-27.98). The average of the Ultrascan group was 20.93, and the software group had a similar average. The standard deviations of the samples were also similar (4.53). The precision of the IOL software for handheld devices was similar to that of the standard device using the SRK/T formula. The software worked properly and ran reliably, without bugs, on the tested models of the operating system.
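
    For context, the simple regression calculation attributed above to SRK-I can be written in a few lines; the SRK/T formula actually validated by the software is considerably more involved (effective lens position, corneal height and retinal thickness terms) and is not reproduced here.

    def srk_iol_power(a_constant, axial_length_mm, mean_k_diopters):
        # Original SRK regression formula: P = A - 2.5*L - 0.9*K
        return a_constant - 2.5 * axial_length_mm - 0.9 * mean_k_diopters

    print(srk_iol_power(118.7, 23.5, 44.0))  # -> 20.35 D for an average eye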

  13. MO-FG-CAMPUS-TeP1-01: An Efficient Method of 3D Patient Dose Reconstruction Based On EPID Measurements for Pre-Treatment Patient Specific QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David, R; Lee, C; Calvary Mater Newcastle, Newcastle

    Purpose: To demonstrate an efficient and clinically relevant patient-specific QA method by reconstructing 3D patient dose from 2D EPID images for IMRT plans, and to determine the usefulness of 2D QA metrics when assessing 3D patient dose deviations. Methods: Using the method developed by King et al (Med Phys 39(5), 2839–2847), EPID images of IMRT fields were acquired in air and converted to dose at 10 cm depth (SAD setup) in a flat virtual water phantom. Each EPID-measured dose map was then divided by the corresponding treatment planning system (TPS) dose map calculated with an identical setup, to derive a 2D "error matrix". For each field, the error matrix was used to adjust the doses along the respective ray lines in the original patient 3D dose. All field doses were combined to derive a reconstructed 3D patient dose for quantitative analysis. A software tool was developed to efficiently implement the entire process and was tested with a variety of IMRT plans for 2D (virtual flat phantom) and 3D (in-patient) QA analysis. Results: The method was tested on 60 IMRT plans. The mean (± standard deviation) 2D gamma (2%, 2 mm) pass rate (2D-GPR) was 97.4±3.0% and the mean 2D gamma index (2D-GI) was 0.35±0.06. The 3D PTV mean dose deviation was 1.8±0.8%. The analysis showed very weak correlations between both the 2D-GPR and 2D-GI when compared with PTV mean dose deviations (R2=0.3561 and 0.3632, respectively). Conclusion: Our method efficiently calculates 3D patient dose from 2D EPID images, utilising all of the advantages of an EPID-based dosimetry system. In this study, the 2D QA metrics did not predict the 3D patient dose deviation. This tool allows reporting of the 3D volumetric dose parameters, thus providing more clinically relevant patient-specific QA.

  14. The truly remarkable universality of half a standard deviation: confirmation through another look.

    PubMed

    Norman, Geoffrey R; Sloan, Jeff A; Wyrwich, Kathleen W

    2004-10-01

    In this issue of Expert Review of Pharmacoeconomics and Outcomes Research, Farivar, Liu, and Hays present their findings in 'Another look at the half standard deviation estimate of the minimally important difference in health-related quality of life scores' (hereafter referred to as 'Another look'). These researchers have re-examined the May 2003 Medical Care article 'Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation' (hereafter referred to as 'Remarkable') in the hope of supporting their hypothesis that the minimally important difference in health-related quality of life measures is undoubtedly closer to 0.3 standard deviations than 0.5. Nonetheless, despite their extensive wranglings - the exclusion of many articles that we included in our review, the inclusion of articles that we did not include, and the recalculation of effect sizes using the absolute value of the mean differences - in our opinion the results of the 'Another look' article confirm the same findings as the 'Remarkable' paper.

  15. Static Scene Statistical Non-Uniformity Correction

    DTIC Science & Technology

    2015-03-01

    Abbreviations from the report: NUC, Non-Uniformity Correction; RMSE, Root Mean Squared Error; RSD, Relative Standard Deviation; S3NUC, Static Scene Statistical Non-Uniformity Correction. The Relative Standard Deviation (RSD) normalizes the standard deviation, σ, to the mean estimated value, µ, using the equation RSD = (σ/µ) × 100. The RSD plot of the gain estimates shows that after a sample size of approximately 10, the different photocount values and the inclusion

  16. Effect of multizone refractive multifocal contact lenses on standard automated perimetry.

    PubMed

    Madrid-Costa, David; Ruiz-Alcocer, Javier; García-Lázaro, Santiago; Albarrán-Diego, César; Ferrer-Blasco, Teresa

    2012-09-01

    The aim of this study was to evaluate whether the creation of 2 foci (distance and near) provided by multizone refractive multifocal contact lenses (CLs) for presbyopia correction affects the measurements of Humphrey 24-2 Swedish interactive threshold algorithm (SITA) standard automated perimetry (SAP). In this crossover study, 30 subjects were fitted in random order with either a multifocal CL or a monofocal CL. After 1 month, a Humphrey 24-2 SITA standard strategy was performed. The visual field global indices (the mean deviation [MD] and pattern standard deviation [PSD]), reliability indices, test duration, and number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% on pattern deviation probability plots were determined and compared between multifocal and monofocal CLs. Thirty eyes of 30 subjects were included in this study. There were no statistically significant differences in reliability indices or test duration. There was a statistically significant reduction in the MD with the multifocal CL compared with the monofocal CL (P=0.001). Differences were not found in PSD or in the number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% in the pattern deviation probability maps. The results of this study suggest that the multizone refractive lens produces a generalized depression in threshold sensitivity as measured by Humphrey 24-2 SITA SAP.

  17. Welding deviation detection algorithm based on extremum of molten pool image contour

    NASA Astrophysics Data System (ADS)

    Zou, Yong; Jiang, Lipei; Li, Yunhua; Xue, Long; Huang, Junfen; Huang, Jiqiang

    2016-01-01

    Welding deviation detection is the basis of robotic tracking welding, but on-line real-time measurement of the welding deviation is still not well solved by existing methods. Gas metal arc welding (GMAW) molten pool images contain plenty of information that is very important for the control of welding seam tracking. The physical meaning of the curvature extrema of the molten pool contour is revealed by studying the molten pool images: the deviation information points at the welding wire center and the molten tip center are the maximum and a local maximum of the contour curvature, and the horizontal welding deviation is the difference in position between these two extremum points. A new method of weld deviation detection is presented, comprising preprocessing the molten pool images, extracting and segmenting the contours, obtaining the contour extremum points, and calculating the welding deviation. Extracting the contours is the premise, segmenting the contour lines is the foundation, and obtaining the contour extremum points is the key. The contour images are extracted with a discrete dyadic wavelet transform and divided into two sub-contours, one for the welding wire and one for the molten tip. The curvature at each point of the two sub-contour lines is calculated with an approximate multi-point curvature formula for plane curves, and the two curvature extremum points are the features needed for the welding deviation calculation. The results of the tests and analyses show that the maximum error of the obtained on-line welding deviation is 2 pixels (0.16 mm), and the algorithm is stable enough to meet the requirements of real-time pipeline control at speeds of less than 500 mm/min. The method can be applied to on-line automatic welding deviation detection.
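
    A generic sketch of the curvature step is shown below, using the standard central-difference approximation kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2) on a digitized contour; this is a stand-in for the paper's multi-point formula, and the test contour is synthetic.

    import numpy as np

    def curvature_extrema(x, y):
        # Discrete curvature of a plane contour via central differences.
        x, y = np.asarray(x, float), np.asarray(y, float)
        dx, dy = np.gradient(x), np.gradient(y)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
        return np.argsort(-np.abs(kappa)), kappa

    t = np.linspace(0, np.pi, 200)
    idx, kappa = curvature_extrema(t, 0.2 * np.sin(3 * t))  # wavy test contour
    print(idx[:2], kappa[idx[:2]])  # the two strongest curvature points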

  18. Standard Model and New Physics for ε'_K/ε_K

    NASA Astrophysics Data System (ADS)

    Kitahara, Teppei

    2018-05-01

    The first lattice-simulation result and improved perturbative calculations have pointed to a discrepancy between data on ε'_K/ε_K and the standard-model (SM) prediction. Several new physics (NP) models can explain this discrepancy, and such NP models are likely to predict deviations of ℬ(K → πνν) from the SM predictions, which can be probed precisely in the near future by the NA62 and KOTO experiments. We present correlations between ε'_K/ε_K and ℬ(K → πνν) in two types of NP scenarios: a box-dominated scenario and a Z-penguin-dominated one. It is shown that different correlations are predicted, and future precision measurements of K → πνν can distinguish the two scenarios.

  19. Casimir squared correction to the standard rotator Hamiltonian for the O(n) sigma-model in the delta-regime

    NASA Astrophysics Data System (ADS)

    Niedermayer, F.; Weisz, P.

    2018-05-01

    In a previous paper we found that the isospin susceptibility of the O(n) sigma-model calculated in the standard rotator approximation differs from the next-to-next-to-leading-order chiral perturbation theory result in terms vanishing like 1/ℓ, for ℓ = L_t/L → ∞, and further showed that this deviation could be described by a correction to the rotator spectrum proportional to the square of the quadratic Casimir invariant. Here we confront this expectation with analytic nonperturbative results on the spectrum in 2 dimensions, by Balog and Hegedüs for n = 3, 4 and by Gromov, Kazakov and Vieira for n = 4, and find good agreement in both cases. We also consider the case of 3 dimensions.

  20. Validation of XCO2 derived from SWIR spectra of GOSAT TANSO-FTS with aircraft measurement data

    NASA Astrophysics Data System (ADS)

    Inoue, M.; Morino, I.; Uchino, O.; Miyamoto, Y.; Yoshida, Y.; Yokota, T.; Machida, T.; Sawa, Y.; Matsueda, H.; Sweeney, C.; Tans, P. P.; Andrews, A. E.; Biraud, S. C.; Tanaka, T.; Kawakami, S.; Patra, P. K.

    2013-10-01

    Column-averaged dry air mole fractions of carbon dioxide (XCO2) retrieved from Greenhouse gases Observing SATellite (GOSAT) Short-Wavelength InfraRed (SWIR) observations were validated with aircraft measurements by the Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) project, the National Oceanic and Atmospheric Administration (NOAA), the US Department of Energy (DOE), the National Institute for Environmental Studies (NIES), the HIAPER Pole-to-Pole Observations (HIPPO) program, and the GOSAT validation aircraft observation campaign over Japan. To calculate XCO2 based on aircraft measurements (aircraft-based XCO2), tower measurements and model outputs were used for additional information near the surface and above the tropopause, respectively. Before validation, we investigated the impacts of GOSAT SWIR column averaging kernels (CAKs) and the shape of a priori profiles on the aircraft-based XCO2 calculation. The differences between aircraft-based XCO2 with and without the application of the GOSAT CAK were evaluated to be less than ±0.4 ppm at most, and less than ±0.1 ppm on average. Therefore, we concluded that the GOSAT CAK produces only a minor effect on the aircraft-based XCO2 calculation in terms of the overall uncertainty of GOSAT XCO2. We compared GOSAT data retrieved within ±2 or ±5° latitude/longitude boxes centered at each aircraft measurement site to aircraft-based data measured on a GOSAT overpass day. The results indicated that GOSAT XCO2 over land regions agreed with aircraft-based XCO2, with a bias of -0.68 ppm (-0.99 ppm) and a standard deviation of 2.56 ppm (2.51 ppm) for ±2° (±5°) boxes, whereas the average difference between GOSAT XCO2 over ocean and aircraft-based XCO2 was -1.82 ppm (-2.27 ppm) with a standard deviation of 1.04 ppm (1.79 ppm).

  1. Predicting Accommodative Response Using Paraxial Schematic Eye Models

    PubMed Central

    Ramasubramanian, Viswanathan; Glasser, Adrian

    2016-01-01

    Purpose Prior ultrasound biomicroscopy (UBM) studies showed that accommodative optical response (AOR) can be predicted from accommodative biometric changes in a young and a pre-presbyopic population from linear relationships between accommodative optical and biometric changes, with a standard deviation of less than 0.55D. Here, paraxial schematic eyes (SE) were constructed from measured accommodative ocular biometry parameters to see if predictions are improved. Methods Measured ocular biometry (OCT, A-scan and UBM) parameters from 24 young and 24 pre-presbyopic subjects were used to construct paraxial SEs for each individual subject (individual SEs) for three different lens equivalent refractive index methods. Refraction and AOR calculated from the individual SEs were compared with Grand Seiko (GS) autorefractor measured refraction and AOR. Refraction and AOR were also calculated from individual SEs constructed using the average population accommodative change in UBM measured parameters (average SEs). Results Schematic eye calculated and GS measured AOR were linearly related (young subjects: slope = 0.77; r2 = 0.86; pre-presbyopic subjects: slope = 0.64; r2 = 0.55). The mean difference in AOR (GS - individual SEs) for the young subjects was −0.27D and for the pre-presbyopic subjects was 0.33D. For individual SEs, the mean ± SD of the absolute differences in AOR between the GS and SEs was 0.50 ± 0.39D for the young subjects and 0.50 ± 0.37D for the pre-presbyopic subjects. For average SEs, the mean ± SD of the absolute differences in AOR between the GS and the SEs was 0.77 ± 0.88D for the young subjects and 0.51 ± 0.49D for the pre-presbyopic subjects. Conclusions Individual paraxial SEs predict AOR, on average, with a standard deviation of 0.50D in young and pre-presbyopic subject populations. Although this prediction is only marginally better than from individual linear regressions, it does consider all the ocular biometric parameters. PMID:27092928

  2. Marsh collapse thresholds for coastal Louisiana estimated using elevation and vegetation index data

    USGS Publications Warehouse

    Couvillion, Brady R.; Beck, Holly

    2013-01-01

    Forecasting marsh collapse in coastal Louisiana as a result of changes in sea-level rise, subsidence, and accretion deficits necessitates an understanding of thresholds beyond which inundation stress impedes marsh survival. The variability in thresholds at which different marsh types cease to occur (i.e., marsh collapse) is not well understood. We utilized remotely sensed imagery, field data, and elevation data to help gain insight into the relationships between vegetation health and inundation. A Normalized Difference Vegetation Index (NDVI) dataset was calculated using remotely sensed data at peak biomass (August) and used as a proxy for vegetation health and productivity. Statistics were calculated for NDVI values by marsh type for intermediate, brackish, and saline marsh in coastal Louisiana. Marsh-type-specific NDVI values of 1.5 and 2 standard deviations below the mean were used as upper and lower limits to identify conditions indicative of collapse. As marshes seldom occur beyond these values, they are believed to represent a range within which marsh collapse is likely to occur. Inundation depth was selected as the primary candidate for evaluation of marsh collapse thresholds. Elevation relative to mean water level (MWL) was calculated by subtracting MWL from an elevation dataset compiled from multiple data types including light detection and ranging (lidar) and bathymetry. A cubic polynomial regression was used to examine a random subset of pixels to determine the relationship between elevation (relative to MWL) and NDVI. The marsh collapse uncertainty range values were found by locating the intercept of the regression line with the 1.5 and 2 standard deviations below the mean NDVI value for each marsh type. Results indicate marsh collapse uncertainty ranges of 30.7–35.8 cm below MWL for intermediate marsh, 20–25.6 cm below MWL for brackish marsh, and 16.9–23.5 cm below MWL for saline marsh. These values are thought to represent the ranges of inundation depths within which marsh collapse is probable.
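
    The threshold logic can be reconstructed schematically as below; the elevation-NDVI relationship and all numbers are synthetic, standing in for the lidar/bathymetry and Landsat-derived data of the study.

    import numpy as np

    rng = np.random.default_rng(0)
    elev = rng.normal(-0.1, 0.3, 500)                 # m relative to MWL
    ndvi = 0.55 + 0.55 * elev + rng.normal(0, 0.08, 500)

    coeffs = np.polyfit(elev, ndvi, deg=3)            # cubic regression
    grid = np.linspace(elev.min(), elev.max(), 2000)
    fit = np.polyval(coeffs, grid)

    for k in (1.5, 2.0):
        thresh = ndvi.mean() - k * ndvi.std(ddof=1)   # collapse NDVI limit
        collapse_elev = grid[np.argmin(np.abs(fit - thresh))]
        print(f"{k} SD below mean NDVI -> {collapse_elev * 100:.1f} cm vs MWL")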

  3. Investigation of the refractive index repeatability for tantalum pentoxide coatings, prepared by physical vapor film deposition techniques.

    PubMed

    Stenzel, O; Wilbrandt, S; Wolf, J; Schürmann, M; Kaiser, N; Ristau, D; Ehlers, H; Carstens, F; Schippel, S; Mechold, L; Rauhut, R; Kennedy, M; Bischoff, M; Nowitzki, T; Zöller, A; Hagedorn, H; Reus, H; Hegemann, T; Starke, K; Harhausen, J; Foest, R; Schumacher, J

    2017-02-01

    Random effects in the repeatability of the refractive index and absorption edge position of tantalum pentoxide layers prepared by plasma-ion-assisted electron-beam evaporation, ion beam sputtering, and magnetron sputtering are investigated and quantified. Standard deviations in refractive index between 4×10^-4 and 4×10^-3 have been obtained. The lowest standard deviations in refractive index, close to our detection threshold, could be achieved by both ion beam sputtering and plasma-ion-assisted deposition. Relative to the corresponding mean values, the standard deviations in band-edge position and refractive index are of similar order.

  4. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    PubMed

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
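
    A sketch of the idea for the upper limit of agreement, mu + 1.96*sigma, follows, combining the usual normal-theory confidence limits for the mean and SD in MOVER fashion; consult the paper for the authors' exact formulas.

    import numpy as np
    from scipy import stats

    def mover_upper_loa(x, z=1.96, alpha=0.05):
        x = np.asarray(x, float)
        n, m, s = len(x), x.mean(), x.std(ddof=1)
        tq = stats.t.ppf(1 - alpha / 2, n - 1)
        lm, um = m - tq * s / np.sqrt(n), m + tq * s / np.sqrt(n)   # CI: mean
        ls = s * np.sqrt((n - 1) / stats.chi2.ppf(1 - alpha / 2, n - 1))
        us = s * np.sqrt((n - 1) / stats.chi2.ppf(alpha / 2, n - 1))  # CI: SD
        theta = m + z * s
        lower = theta - np.sqrt((m - lm) ** 2 + (z * (s - ls)) ** 2)
        upper = theta + np.sqrt((um - m) ** 2 + (z * (us - s)) ** 2)
        return theta, (lower, upper)

    print(mover_upper_loa(np.random.default_rng(2).normal(0.1, 1.0, 30)))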

  5. Sample sizes needed for specified margins of relative error in the estimates of the repeatability and reproducibility standard deviations.

    PubMed

    McClure, Foster D; Lee, Jung K

    2005-01-01

    Sample size formulas are developed to estimate the repeatability and reproducibility standard deviations (s_r and s_R) such that the actual errors in s_r and s_R relative to their respective true values, σ_r and σ_R, are at predefined levels. The statistical consequences associated with the AOAC INTERNATIONAL required sample size to validate an analytical method are discussed. In addition, formulas to estimate the uncertainties of s_r and s_R were derived and are provided as supporting documentation.
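
    A textbook analogue of such a formula can be computed numerically from the chi-square distribution of (n-1)S^2/sigma^2; the construction below is offered as an illustration of the principle, not a transcription of the paper's formulas.

    from scipy import stats

    def replicates_for_sd_margin(rel_error, confidence=0.95, n_max=10_000):
        # Smallest n with P(|S/sigma - 1| <= rel_error) >= confidence.
        for n in range(2, n_max):
            df = n - 1
            p = (stats.chi2.cdf(df * (1 + rel_error) ** 2, df)
                 - stats.chi2.cdf(df * (1 - rel_error) ** 2, df))
            if p >= confidence:
                return n
        raise ValueError("n_max too small")

    print(replicates_for_sd_margin(0.20))  # n needed for a 20% relative margin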

  6. Quantifying VOC emissions from polymers: A case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze, J.K.; Qasem, J.S.; Snoddy, R.

    1996-12-31

    Evaluating residual volatile organic compound emissions emanating from low-density polyethylene can pose significant challenges. These challenges include quantifying emissions from: (a) multiple process lines with different operating conditions; (b) several different comonomers; (c) variations of comonomer content in each grade; and (d) over 120 grades of LDPE. This presentation is a Case Study outlining a project to develop grade-specific emission data for low-density polyethylene pellets. This study included extensive laboratory analyses and required the development of a relational database to compile analytical results, calculate the mean concentration and standard deviation, and generate emissions reports.

  7. Impulse damping control of an experimental structure

    NASA Technical Reports Server (NTRS)

    Redmond, J.; Meyer, J. L.; Silverberg, L.

    1993-01-01

    The characteristics associated with the fuel-optimal control of a harmonic oscillator are extended to develop a near-minimum-fuel control algorithm for the vibration suppression of spacecraft. The operation of single-level thrusters is regulated by recursive calculations of the standard deviations of displacement and velocity, resulting in a bang-off-bang controller. A vertically suspended 16 ft cantilevered beam was used in the experiment. Results show that the structure's response was easily manipulated by minor alterations in the control law, and the control system performance was not seriously degraded in the presence of multiple actuator failures.
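
    One plausible way to implement the recursive standard deviation calculations is Welford's online algorithm, sketched here; the abstract does not specify the recursion actually used.

    class RunningStd:
        # Welford's online mean/variance recursion.
        def __init__(self):
            self.n, self.mean, self.m2 = 0, 0.0, 0.0

        def update(self, x):
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)
            return self.std

        @property
        def std(self):
            return (self.m2 / (self.n - 1)) ** 0.5 if self.n > 1 else 0.0

    rs = RunningStd()
    for sample in [0.01, -0.02, 0.03, -0.01, 0.02]:   # displacement samples
        sigma = rs.update(sample)
    print(f"running sigma = {sigma:.4f}")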

  8. Light-propagation management in coupled waveguide arrays: Quantitative experimental and theoretical assessment from band structures to functional patterns

    NASA Astrophysics Data System (ADS)

    Moison, Jean-Marie; Belabas, Nadia; Levenson, Juan Ariel; Minot, Christophe

    2012-09-01

    We assess the band structure of arrays of coupled optical waveguides both by ab initio calculations and by experiments, with an excellent quantitative agreement without any adjustable physical parameter. The band structures we obtain can deviate strongly from the expectations of the standard coupled mode theory approximation, but we describe them efficiently by a few parameters within an extended coupled mode theory. We also demonstrate that this description is in turn a firm and simple basis for accurate beam management in functional patterns of coupled waveguides, in full accordance with their design.

  9. Predicting Energy Consumption for Potential Effective Use in Hybrid Vehicle Powertrain Management Using Driver Prediction

    NASA Astrophysics Data System (ADS)

    Magnuson, Brian

    A proof-of-concept software-in-the-loop study is performed to assess the accuracy of predicted net and charge-gaining energy consumption for potential effective use in optimizing powertrain management of hybrid vehicles. With promising results of improving the fuel efficiency of a thermostatic control strategy for a series plug-in hybrid-electric vehicle by 8.24%, the route and speed prediction machine learning algorithms are redesigned and implemented for real-world testing in a stand-alone C++ code-base to ingest map data, learn and predict driver habits, and store driver data for fast startup and shutdown of the controller or computer used to execute the compiled algorithm. Speed prediction is performed using a multi-layer, multi-input, multi-output neural network with feed-forward prediction and gradient descent training through back-propagation. Route prediction utilizes a Hidden Markov Model with a recurrent forward algorithm for prediction, and multi-dimensional hash maps to store states and state distributions constraining associations between atomic road segments and end destinations. Predicted energy is calculated using the predicted time-series speed and elevation profile over the predicted route and the road-load equation. Testing of the code-base is performed over a known road network spanning 24x35 blocks on the south hill of Spokane, Washington. A large set of training routes is traversed once to add randomness to the route prediction algorithm, and a subset of the training routes (testing routes) is traversed to assess the accuracy of the net and charge-gaining predicted energy consumption. Each test route is traveled a random number of times with varying speed conditions from traffic and pedestrians to add randomness to speed prediction. Prediction data are stored and analyzed in a post-process Matlab script. The aggregated results and analysis of all traversals of all test routes reflect the performance of the Driver Prediction algorithm. The error of the average energy gained through charge-gaining events is 31.3%, and the error of the average net energy consumed is 27.3%. The average delta and average standard deviation of the delta of the predicted energy gained through charge-gaining events are 0.639 and 0.601 Wh, respectively, for individual time-series calculations. Similarly, the average delta and average standard deviation of the delta of the predicted net energy consumed are 0.567 and 0.580 Wh, respectively, for individual time-series calculations. The average delta and standard deviation of the delta of the predicted speed are 1.60 and 1.15, respectively, also for the individual time-series measurements. Route prediction accuracy is 91%. Overall, the test routes are traversed 151 times for a total test distance of 276.4 km.
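
    The road-load energy calculation the abstract relies on can be sketched as follows (in Python rather than the study's C++); the vehicle parameters are generic assumptions, not those of the test vehicle.

    import numpy as np

    def road_load_energy_wh(v, dt, grade, m=1600.0, cd=0.30, area=2.2,
                            crr=0.009, rho=1.2):
        # Net traction energy from drag + rolling + grade + inertia loads.
        v = np.asarray(v, float)               # m/s, predicted speed trace
        a = np.gradient(v, dt)                 # longitudinal acceleration
        g = 9.81
        force = (0.5 * rho * cd * area * v**2  # aerodynamic drag
                 + crr * m * g * np.cos(grade)
                 + m * g * np.sin(grade)
                 + m * a)
        power = force * v                      # W; negative = regen potential
        return np.sum(power) * dt / 3600.0     # Wh (rectangle integration)

    v = np.concatenate([np.linspace(0, 15, 30), np.full(60, 15.0),
                        np.linspace(15, 0, 30)])
    print(f"{road_load_energy_wh(v, dt=1.0, grade=0.0):.1f} Wh")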

  10. Introducing the Mean Absolute Deviation "Effect" Size

    ERIC Educational Resources Information Center

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
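
    The proposed effect size is simple to compute; in the sketch below the group scores are invented, and scaling by the control-group mean absolute deviation (rather than a pooled one) is an assumption of this illustration.

    import numpy as np

    def mad_effect_size(treated, control):
        # Difference in means divided by the mean absolute deviation.
        t, c = np.asarray(treated, float), np.asarray(control, float)
        mad = np.mean(np.abs(c - c.mean()))
        return (t.mean() - c.mean()) / mad

    print(mad_effect_size([12, 14, 15, 13, 16], [10, 12, 11, 13, 9]))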

  11. Growth curves and the international standard: How children's growth reflects challenging conditions in rural Timor-Leste.

    PubMed

    Spencer, Phoebe R; Sanders, Katherine A; Judge, Debra S

    2018-02-01

    Population-specific growth references are important in understanding local growth variation, especially in developing countries where child growth is poor and the need for effective health interventions is high. In this article, we use mixed longitudinal data to calculate the first growth curves for rural East Timorese children to identify where, during development, deviation from the international standards occurs. Over an eight-year period, 1,245 children from two ecologically distinct rural areas of Timor-Leste were measured a total of 4,904 times. We compared growth to the World Health Organization (WHO) standards using z-scores, and modeled height and weight velocity using the SuperImposition by Translation And Rotation (SITAR) method. Using the Generalized Additive Model for Location, Scale and Shape (GAMLSS) method, we created the first growth curves for rural Timorese children for height, weight and body mass index (BMI). Relative to the WHO standards, children show early-life growth faltering, and stunting throughout childhood and adolescence. The median height and weight for this population tracks below the WHO fifth centile. Males have poorer growth than females in both z-BMI (p = .001) and z-height-for-age (p = .018) and, unlike females, continue to grow into adulthood. This is the most comprehensive investigation to date of rural Timorese children's growth, and the growth curves created may potentially be used to identify future secular trends in growth as the country develops. We show significant deviation from the international standard that becomes most pronounced at adolescence, similar to the growth of other Asian populations. Males and females show different growth responses to challenging conditions in this population. © 2017 Wiley Periodicals, Inc.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, R; Bai, W

    Purpose: Because of statistical noise in Monte Carlo dose calculations, doses evaluated at a single effective point may not be accurate. Sampling a user-defined sphere volume centered on the effective point, rather than the point itself, and averaging the dose statistics can decrease the stochastic errors. Methods: Direct dose measurements were made using a 0.125 cc Semiflex ion chamber (IC) 31010 placed isocentrically in the center of a homogeneous cylindrical sliced RW3 phantom (PTW, Germany). In the scanned CT phantom series, the sensitive volume length of the IC (6.5 mm) was delineated and the isocenter was defined as the simulation effective point. All beams were simulated in Monaco in accordance with the measured model, using a 2 mm calculation grid spacing, dose-to-medium calculation, and a requested relative standard deviation of ≤0.5%. Three assigned IC over-densities (air electron density (ED) of 0.01 g/cm3, the default CT-scanned ED, and an esophageal-lumen ED of 0.21 g/cm3) were tested at different sampling sphere radii (2.5, 2, 1.5 and 1 mm), and the statistical doses were compared with the measured doses. Results: In the Monaco TPS, assigning the IC an esophageal-lumen ED of 0.21 g/cm3 with a 1.5 mm sampling sphere radius gave the best agreement with measurement; the absolute average percentage deviation was 0.49%. When the IC was assigned an air ED of 0.01 g/cm3 or the default CT-scanned ED, the recommended statistical sampling sphere radius was 2.5 mm, with percentage deviations of 0.61% and 0.70%, respectively. Conclusion: In the Monaco treatment planning system, for the ionization chamber 31010 we recommend assigning the air cavity an ED of 0.21 g/cm3 and sampling a 1.5 mm sphere volume instead of a point dose to decrease the stochastic errors. Funding Support No. C201505006.
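
    The volume-sampling idea reduces to averaging the noisy Monte Carlo dose over voxels inside a sphere around the effective point, as in this sketch; the grid geometry and synthetic dose array are illustrative assumptions.

    import numpy as np

    def sphere_mean_dose(dose, voxel_mm, center_mm, radius_mm):
        # Mean dose over all voxels whose centers lie inside the sphere.
        dz, dy, dx = voxel_mm
        nz, ny, nx = dose.shape
        z, y, x = np.meshgrid(np.arange(nz) * dz, np.arange(ny) * dy,
                              np.arange(nx) * dx, indexing="ij")
        r2 = ((z - center_mm[0]) ** 2 + (y - center_mm[1]) ** 2
              + (x - center_mm[2]) ** 2)
        return dose[r2 <= radius_mm ** 2].mean()

    dose = np.random.default_rng(3).normal(2.0, 0.02, (50, 50, 50))  # Gy
    print(sphere_mean_dose(dose, (2, 2, 2), (50, 50, 50), radius_mm=2.5))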

  13. Odds per Adjusted Standard Deviation: Comparing Strengths of Associations for Risk Factors Measured on Different Scales and Across Diseases and Populations

    PubMed Central

    Hopper, John L.

    2015-01-01

    How can the “strengths” of risk factors, in the sense of how well they discriminate cases from controls, be compared when they are measured on different scales such as continuous, binary, and integer? Risk estimates take into account other fitted and design-related factors, and that is how risk gradients are interpreted, so the presentation of risk gradients should do the same. Therefore, for each risk factor X0, I propose using appropriate regression techniques to derive from appropriate population data the best-fitting relationship between the mean of X0 and all the other covariates fitted in the model or adjusted for by design (X1, X2, …, Xn). The odds per adjusted standard deviation (OPERA) presents the risk association for X0 in terms of the change in risk per s = standard deviation of X0 adjusted for X1, X2, …, Xn, rather than the unadjusted standard deviation of X0 itself. If the increased risk is relative risk (RR)-fold over A adjusted standard deviations, then OPERA = exp[ln(RR)/A] = RR^(1/A). This unifying approach is illustrated by considering breast cancer and published risk estimates. OPERA estimates are by definition independent and can be used to compare the predictive strengths of risk factors across diseases and populations. PMID:26520360

  14. Selection of vegetation indices for mapping the sugarcane condition around the oil and gas field of North West Java Basin, Indonesia

    NASA Astrophysics Data System (ADS)

    Muji Susantoro, Tri; Wikantika, Ketut; Saepuloh, Asep; Handoyo Harsolumakso, Agus

    2018-05-01

    Selection of vegetation indices for plant mapping is needed to provide the best information on plant conditions. The methods used in this research are standard deviation analysis and linear regression. This research sought to determine the vegetation indices best suited for mapping sugarcane conditions around oil and gas fields. The data used in this study are Landsat 8 OLI/TIRS. The standard deviation analysis of the 23 vegetation indices with 27 samples yielded the six indices with the highest standard deviations, namely GRVI, SR, NLI, SIPI, GEMI and LAI, with standard deviation values of 0.47, 0.43, 0.30, 0.17, 0.16 and 0.13. The regression correlation analysis of the 23 vegetation indices with 280 samples yielded six indices, namely NDVI, ENDVI, GDVI, VARI, LAI and SIPI, selected on the basis of regression correlations with R2 values of no less than 0.8. The combined analysis of the standard deviation and the regression correlation yielded five vegetation indices, namely NDVI, ENDVI, GDVI, LAI and SIPI. The results of both methods show that combining the two methods is needed to produce a good analysis of sugarcane conditions. This was verified through field surveys and showed good results for the prediction of microseepages.

  15. [A new kinematics method of determing elbow rotation axis and evaluation of its feasibility].

    PubMed

    Han, W; Song, J; Wang, G Z; Ding, H; Li, G S; Gong, M Q; Jiang, X Y; Wang, M Y

    2016-04-18

    To study a new method for positioning the rotation axis of elbow external fixation, and to evaluate its feasibility. Four normal adult volunteers and six Sawbone elbow models were included in this experiment. Kinematic data from five elbow flexions were collected by an optical positioning system. The rotation axes of the elbow joints were fitted by the least-squares method, and the kinematic data and fitting results were displayed visually. From the fitting results, the average moving planes and rotation axes were calculated, thus giving the rotation axes of the new kinematic method. Using standard clinical methods, the entrance and exit points of the rotation axes of the six Sawbone elbow models were located under X-ray, and Kirschner wires were placed as representatives of the rotation axes determined by the traditional positioning method. The entrance-point deviation, exit-point deviation and angle deviation of the two kinds of located rotation axes were then compared. For the four volunteers, the indicators representing the circularity and coplanarity of each volunteer's elbow flexion movement trajectory were both about 1 mm. All distance deviations of the moving axes from the average moving rotation axes of the five volunteers were less than 3 mm, and all angle deviations of the moving axes from the average moving rotation axes were less than 5°. For the six Sawbone models, the average entrance-point deviation, average exit-point deviation and average angle deviation between the two rotation axes determined by the two positioning methods were 1.6972 mm, 1.8383 mm and 1.3217°, respectively. All deviations were very small and within a range acceptable for clinical practice. The values representing the circularity and coplanarity of the volunteers' single-curvature elbow movement trajectories are very small, showing that single-curvature elbow movement can be regarded as approximately fixed-axis movement. The new method can replace the traditional method in accuracy and makes up for the deficiency of the traditional fixed-axis method.

  16. Heat flow in chains driven by thermal noise

    NASA Astrophysics Data System (ADS)

    Fogedby, Hans C.; Imparato, Alberto

    2012-04-01

    We consider the large deviation function for a classical harmonic chain composed of N particles driven at the end points by heat reservoirs, first derived in the quantum regime by Saito and Dhar (2007 Phys. Rev. Lett. 99 180601) and in the classical regime by Saito and Dhar (2011 Phys. Rev. E 83 041121) and Kundu et al (2011 J. Stat. Mech. P03007). Within a Langevin description we perform this calculation on the basis of a standard path integral calculation in Fourier space. The cumulant generating function yielding the large deviation function is given in terms of a transmission Green's function and is consistent with the fluctuation theorem. We find a simple expression for the tails of the heat distribution, which turns out to decay exponentially. We, moreover, consider an extension of a single-particle model suggested by Derrida and Brunet (2005 Einstein Aujourd'hui (Les Ulis: EDP Sciences)) and discuss the two-particle case. We also discuss the limit for large N and present a closed expression for the cumulant generating function. Finally, we present a derivation of the fluctuation theorem on the basis of a Fokker-Planck description. This result is not restricted to the harmonic case but is valid for a general interaction potential between the particles.

  17. Statistical Analysis of 30 Years Rainfall Data: A Case Study

    NASA Astrophysics Data System (ADS)

    Arvind, G.; Ashok Kumar, P.; Girish Karthi, S.; Suribabu, C. R.

    2017-07-01

    Rainfall is a prime input for various engineering designs such as hydraulic structures, bridges and culverts, canals, storm water sewers and road drainage systems. A detailed statistical analysis of each region is essential to estimate the relevant input values for the design and analysis of engineering structures and also for crop planning. A rain gauge station located in Trichy district, where agriculture is the prime occupation, was selected for statistical analysis. Daily rainfall data for a period of 30 years are used to characterize normal rainfall, deficit rainfall, excess rainfall and seasonal rainfall at the selected circle headquarters. Further, the various plotting position formulae available are used to evaluate the return periods of monthly, seasonal and annual rainfall. This analysis provides useful information for water resources planners, farmers and urban engineers to assess the availability of water and plan storage accordingly. The mean, standard deviation and coefficient of variation of monthly and annual rainfall were calculated to check the rainfall variability. From the calculated results, the rainfall pattern is found to be erratic. The best-fit probability distribution was identified based on the minimum deviation between actual and estimated values. The scientific results and the analysis paved the way to determine the proper onset and withdrawal of the monsoon, results which were used for land preparation and sowing.
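
    As an example of the plotting-position step, the Weibull formula P = m/(n+1) with return period T = 1/P is one of the standard formulae such studies compare; the annual totals below are invented.

    import numpy as np

    def weibull_return_periods(annual_rainfall_mm):
        # Rank the series, assign exceedance probability m/(n+1), invert.
        x = np.sort(np.asarray(annual_rainfall_mm, float))[::-1]
        m = np.arange(1, len(x) + 1)          # rank, largest = 1
        p_exceed = m / (len(x) + 1.0)
        return x, 1.0 / p_exceed

    totals = [780, 1120, 950, 660, 1340, 890, 1010, 720, 1180, 840]
    for xi, ti in zip(*weibull_return_periods(totals)):
        print(f"{xi:6.0f} mm  T = {ti:4.1f} yr")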

  18. Lesion detection and quantification performance of the Tachyon-I time-of-flight PET scanner: phantom and human studies.

    PubMed

    Zhang, Xuezhu; Peng, Qiyu; Zhou, Jian; Huber, Jennifer S; Moses, William W; Qi, Jinyi

    2018-03-16

    The first generation Tachyon PET (Tachyon-I) is a demonstration single-ring PET scanner that reaches a coincidence timing resolution of 314 ps using LSO scintillator crystals coupled to conventional photomultiplier tubes. The objective of this study was to quantify the improvement in both lesion detection and quantification performance resulting from the improved time-of-flight (TOF) capability of the Tachyon-I scanner. We developed a quantitative TOF image reconstruction method for the Tachyon-I and evaluated its TOF gain for lesion detection and quantification. Scans of either a standard NEMA torso phantom or healthy volunteers were used as the normal background data. Separately scanned point source and sphere data were superimposed onto the phantom or human data after accounting for the object attenuation. We used the bootstrap method to generate multiple independent noisy datasets with and without a lesion present. The signal-to-noise ratio (SNR) of a channelized Hotelling observer (CHO) was calculated for each lesion size and location combination to evaluate the lesion detection performance. The bias versus standard deviation trade-off of each lesion uptake was also calculated to evaluate the quantification performance. The resulting CHO-SNR measurements showed improved performance in lesion detection with better timing resolution. The detection performance was also dependent on the lesion size and location, in addition to the background object size and shape. The results of the bias versus noise trade-off showed that the noise (standard deviation) reduction ratio was about 1.1-1.3 over the TOF 500 ps mode and 1.5-1.9 over the non-TOF mode, similar to the SNR gains for lesion detection. In conclusion, this Tachyon-I PET study demonstrated the benefit of improved time-of-flight capability on lesion detection and ROI quantification for both phantom and human subjects.
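
    A sketch of how a (channelized) Hotelling observer SNR is computed from channel outputs; this is a generic illustration, not the authors' code, and the array names are hypothetical. It assumes more realizations than channels so the pooled covariance is invertible:

    ```python
    import numpy as np

    def cho_snr(channels_signal, channels_background):
        """SNR of a channelized Hotelling observer (illustrative sketch).

        Each argument is an (n_realizations, n_channels) array of channel
        outputs for lesion-present and lesion-absent images respectively.
        """
        d = channels_signal.mean(0) - channels_background.mean(0)
        # Pooled intra-class channel covariance.
        S = 0.5 * (np.cov(channels_signal, rowvar=False)
                   + np.cov(channels_background, rowvar=False))
        w = np.linalg.solve(S, d)    # Hotelling template in channel space
        return np.sqrt(d @ w)        # SNR = sqrt(d' S^-1 d)
    ```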

  19. Automated EEG sleep staging in the term-age baby using a generative modelling approach.

    PubMed

    Pillay, Kirubin; Dereymaeker, Anneleen; Jansen, Katrien; Naulaers, Gunnar; Van Huffel, Sabine; De Vos, Maarten

    2018-06-01

    We develop a method for automated four-state sleep classification of preterm and term-born babies at term-age of 38-40 weeks postmenstrual age (the age since the last menstrual cycle of the mother) using multichannel electroencephalogram (EEG) recordings. At this critical age, EEG differentiates from broader quiet sleep (QS) and active sleep (AS) stages to four, more complex states, and the quality and timing of this differentiation is indicative of the level of brain development. However, existing methods for automated sleep classification remain focussed only on QS and AS sleep classification. EEG features were calculated from 16 EEG recordings, in 30 s epochs, and personalized feature scaling was used to correct for some of the inter-recording variability, by standardizing each recording's feature data using its mean and standard deviation. Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) were trained, with the HMM incorporating knowledge of the sleep state transition probabilities. Performance of the GMM and HMM (with and without scaling) was compared, and Cohen's kappa agreement calculated between the estimates and clinicians' visual labels. For four-state classification, the HMM proved superior to the GMM. With the inclusion of personalized feature scaling, mean kappa (±standard deviation) was 0.62 (±0.16) compared to the GMM value of 0.55 (±0.15). Without feature scaling, kappas for the HMM and GMM dropped to 0.56 (±0.18) and 0.51 (±0.15), respectively. This is the first study to present a successful method for the automated staging of four states in term-age sleep using multichannel EEG. Results suggested a benefit in incorporating transition information using an HMM, and correcting for inter-recording variability through personalized feature scaling. The timing and quality of these states are indicative of developmental delays in both preterm and term-born babies that may lead to learning problems by school age.
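
    The personalized feature scaling step is plain per-recording standardization, and the agreement measure is Cohen's kappa; a minimal sketch assuming scikit-learn, with made-up labels:

    ```python
    import numpy as np
    from sklearn.metrics import cohen_kappa_score  # assumes scikit-learn

    def personalize(features):
        """Per-recording feature scaling: standardize one recording's
        (n_epochs, n_features) matrix by its own mean and standard deviation."""
        return (features - features.mean(axis=0)) / features.std(axis=0)

    # Agreement between model estimates and clinicians' visual labels:
    clinician = [0, 0, 1, 2, 3, 3, 2, 1]   # made-up four-state labels
    model     = [0, 0, 1, 2, 3, 2, 2, 1]
    print(cohen_kappa_score(clinician, model))
    ```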

  20. Automated EEG sleep staging in the term-age baby using a generative modelling approach

    NASA Astrophysics Data System (ADS)

    Pillay, Kirubin; Dereymaeker, Anneleen; Jansen, Katrien; Naulaers, Gunnar; Van Huffel, Sabine; De Vos, Maarten

    2018-06-01

    Objective. We develop a method for automated four-state sleep classification of preterm and term-born babies at term-age of 38-40 weeks postmenstrual age (the age since the last menstrual cycle of the mother) using multichannel electroencephalogram (EEG) recordings. At this critical age, EEG differentiates from broader quiet sleep (QS) and active sleep (AS) stages to four, more complex states, and the quality and timing of this differentiation is indicative of the level of brain development. However, existing methods for automated sleep classification remain focussed only on QS and AS sleep classification. Approach. EEG features were calculated from 16 EEG recordings, in 30 s epochs, and personalized feature scaling was used to correct for some of the inter-recording variability, by standardizing each recording’s feature data using its mean and standard deviation. Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) were trained, with the HMM incorporating knowledge of the sleep state transition probabilities. Performance of the GMM and HMM (with and without scaling) was compared, and Cohen’s kappa agreement calculated between the estimates and clinicians’ visual labels. Main results. For four-state classification, the HMM proved superior to the GMM. With the inclusion of personalized feature scaling, mean kappa (±standard deviation) was 0.62 (±0.16) compared to the GMM value of 0.55 (±0.15). Without feature scaling, kappas for the HMM and GMM dropped to 0.56 (±0.18) and 0.51 (±0.15), respectively. Significance. This is the first study to present a successful method for the automated staging of four states in term-age sleep using multichannel EEG. Results suggested a benefit in incorporating transition information using an HMM, and correcting for inter-recording variability through personalized feature scaling. The timing and quality of these states are indicative of developmental delays in both preterm and term-born babies that may lead to learning problems by school age.

  1. Lesion detection and quantification performance of the Tachyon-I time-of-flight PET scanner: phantom and human studies

    NASA Astrophysics Data System (ADS)

    Zhang, Xuezhu; Peng, Qiyu; Zhou, Jian; Huber, Jennifer S.; Moses, William W.; Qi, Jinyi

    2018-03-01

    The first generation Tachyon PET (Tachyon-I) is a demonstration single-ring PET scanner that reaches a coincidence timing resolution of 314 ps using LSO scintillator crystals coupled to conventional photomultiplier tubes. The objective of this study was to quantify the improvement in both lesion detection and quantification performance resulting from the improved time-of-flight (TOF) capability of the Tachyon-I scanner. We developed a quantitative TOF image reconstruction method for the Tachyon-I and evaluated its TOF gain for lesion detection and quantification. Scans of either a standard NEMA torso phantom or healthy volunteers were used as the normal background data. Separately scanned point source and sphere data were superimposed onto the phantom or human data after accounting for the object attenuation. We used the bootstrap method to generate multiple independent noisy datasets with and without a lesion present. The signal-to-noise ratio (SNR) of a channelized hotelling observer (CHO) was calculated for each lesion size and location combination to evaluate the lesion detection performance. The bias versus standard deviation trade-off of each lesion uptake was also calculated to evaluate the quantification performance. The resulting CHO-SNR measurements showed improved performance in lesion detection with better timing resolution. The detection performance was also dependent on the lesion size and location, in addition to the background object size and shape. The results of bias versus noise trade-off showed that the noise (standard deviation) reduction ratio was about 1.1–1.3 over the TOF 500 ps and 1.5–1.9 over the non-TOF modes, similar to the SNR gains for lesion detection. In conclusion, this Tachyon-I PET study demonstrated the benefit of improved time-of-flight capability on lesion detection and ROI quantification for both phantom and human subjects.

  2. Compressed Sensing Quantum Process Tomography for Superconducting Quantum Gates

    NASA Astrophysics Data System (ADS)

    Rodionov, Andrey

    An important challenge in quantum information science and quantum computing is the experimental realization of high-fidelity quantum operations on multi-qubit systems. Quantum process tomography (QPT) is a procedure devised to fully characterize a quantum operation. We first present the results of the estimation of the process matrix for superconducting multi-qubit quantum gates using the full data set employing various methods: linear inversion, maximum likelihood, and least-squares. To alleviate the problem of exponential resource scaling needed to characterize a multi-qubit system, we next investigate a compressed sensing (CS) method for QPT of two-qubit and three-qubit quantum gates. Using experimental data for two-qubit controlled-Z gates, taken with both Xmon and superconducting phase qubits, we obtain estimates for the process matrices with reasonably high fidelities compared to full QPT, despite using significantly reduced sets of initial states and measurement configurations. We show that the CS method still works when the amount of data is so small that the standard QPT would have an underdetermined system of equations. We also apply the CS method to the analysis of the three-qubit Toffoli gate with simulated noise, and similarly show that the method works well for a substantially reduced set of data. For the CS calculations we use two different bases in which the process matrix is approximately sparse (the Pauli-error basis and the singular value decomposition basis), and show that the resulting estimates of the process matrices match with reasonably high fidelity. For both two-qubit and three-qubit gates, we characterize the quantum process by its process matrix and average state fidelity, as well as by the corresponding standard deviation defined via the variation of the state fidelity for different initial states. We calculate the standard deviation of the average state fidelity both analytically and numerically, using a Monte Carlo method. Overall, we show that CS QPT offers a significant reduction in the needed amount of experimental data for two-qubit and three-qubit quantum gates.

  3. SU-F-J-177: A Novel Image Analysis Technique (center Pixel Method) to Quantify End-To-End Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, N; Chetty, I; Snyder, K

    Purpose: To implement a novel image analysis technique, “center pixel method”, to quantify the end-to-end test accuracy of a frameless, image guided stereotactic radiosurgery system. Methods: The localization accuracy was determined by delivering radiation to an end-to-end prototype phantom. The phantom was scanned with 0.8 mm slice thickness and the treatment isocenter was placed at the center of the phantom. In the treatment room, CBCT images of the phantom (kVp=77, mAs=1022, slice thickness 1 mm) were acquired and registered to the reference CT images. 6D couch corrections were applied based on the registration results. Electronic Portal Imaging Device (EPID)-based Winston-Lutz (WL) tests were performed to quantify the targeting accuracy of the system at 15 combinations of gantry, collimator and couch positions. The images were analyzed using two different methods. a) The classic method: the deviation was calculated by measuring the radial distance between the center of the central BB and the center of the radiation field defined by its full width at half maximum. b) The center pixel method: since the imager projection offset from the treatment isocenter is known from the IsoCal calibration, the deviation was determined between the center of the BB and the central pixel of the imager panel. Results: Using the automatic registration method to localize the phantom and the classic method of measuring the deviation of the BB center, the mean and standard deviation of the radial distance were 0.44 ± 0.25, 0.47 ± 0.26, and 0.43 ± 0.13 mm for the jaw, MLC and cone defined field sizes, respectively. When the center pixel method was used, the mean and standard deviation were 0.32 ± 0.18, 0.32 ± 0.17, and 0.32 ± 0.19 mm, respectively. Conclusion: Our results demonstrate that the center pixel method accurately analyzes the WL images to evaluate the targeting accuracy of the radiosurgery system. The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE from the American Cancer Society.
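
    Both analysis methods reduce to the same summary statistic: the mean and standard deviation of radial deviations between the BB center and a reference point. A generic sketch (array names illustrative):

    ```python
    import numpy as np

    def targeting_stats(bb_centers, reference_points):
        """Radial targeting deviations for a set of Winston-Lutz images.

        Each argument is an (n_images, 2) array of (x, y) positions in mm:
        the detected BB center and the reference point (the field center
        for the classic method, or the known central pixel of the imager
        for the center pixel method)."""
        r = np.linalg.norm(np.asarray(bb_centers, dtype=float)
                           - np.asarray(reference_points, dtype=float), axis=1)
        return r.mean(), r.std(ddof=1)
    ```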

  4. Collinearity in Least-Squares Analysis

    ERIC Educational Resources Information Center

    de Levie, Robert

    2012-01-01

    How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…

  5. Robust Confidence Interval for a Ratio of Standard Deviations

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  6. Estimating maize water stress by standard deviation of canopy temperature in thermal imagery

    USDA-ARS?s Scientific Manuscript database

    A new crop water stress index using standard deviation of canopy temperature as an input was developed to monitor crop water status. In this study, thermal imagery was taken from maize under various levels of deficit irrigation treatments in different crop growing stages. The Expectation-Maximizatio...

  7. Comparison of heart rate variability and pulse rate variability detected with photoplethysmography

    NASA Astrophysics Data System (ADS)

    Rauh, Robert; Limley, Robert; Bauer, Rainer-Dieter; Radespiel-Troger, Martin; Mueck-Weymann, Michael

    2004-08-01

    This study compares ear photoplethysmography (PPG) and electrocardiography (ECG) in providing accurate heart beat intervals for use in calculations of heart rate variability (HRV, from ECG) and pulse rate variability (PRV, from PPG), respectively. Simultaneous measurements were taken from 44 healthy subjects at rest during spontaneous breathing and during forced metronomic breathing (6/min). Under both conditions, highly significant (p < 0.001) correlations (1.0 > r > 0.97) were found between all evaluated common HRV and PRV parameters. However, under both conditions the PRV parameters were higher than the HRV parameters. In addition, we calculated the limits of agreement according to Bland and Altman between the two techniques and found good agreement (< 10% difference) for heart rate and the standard deviation of normal-to-normal intervals (SDNN), but only moderate (10-20%) or even insufficient (> 20%) agreement for the other standard HRV and PRV parameters. Thus, PRV data seem acceptable for screening purposes but, at the current state of knowledge, not for medical decision making. Further studies are needed before a more certain determination can be made.
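
    The Bland-Altman limits of agreement used here are straightforward to compute; a generic sketch with illustrative names:

    ```python
    import numpy as np

    def bland_altman_limits(a, b):
        """Bland-Altman limits of agreement between two measurement methods
        (e.g. an HRV parameter from ECG and the same parameter from PPG).
        Returns the mean difference (bias) and the 95% limits of agreement."""
        diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
    ```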

  8. TRASYS form factor matrix normalization

    NASA Technical Reports Server (NTRS)

    Tsuyuki, Glenn T.

    1992-01-01

    A method has been developed for adjusting a TRASYS enclosure form factor matrix to unity. This approach is not limited to closed geometries and, in fact, is primarily intended for use with open geometries. The purpose of this approach is to prevent optimistic form factors to space. In this method, nodal form factor sums are calculated to within 0.05 of unity using TRASYS, although deviations as large as 0.10 may be acceptable, and then a process is employed to distribute the difference amongst the nodes. A specific example has been analyzed with this method, and a comparison was performed with a standard approach for calculating radiation conductors. In this comparison, hot and cold case temperatures were determined. Exterior nodes exhibited temperature differences as large as 7 C and 3 C for the hot and cold cases, respectively, when compared with the standard approach, while interior nodes demonstrated temperature differences from 0 C to 5 C. These results indicate that temperature predictions can be artificially biased if the form factor computation error is lumped into the individual form factors to space.
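
    A minimal sketch of row normalization for a form factor matrix; distributing the residual in proportion to the existing entries is one simple choice, not necessarily the TRASYS process described above:

    ```python
    import numpy as np

    def normalize_form_factors(F, target=1.0):
        """Scale each row of an enclosure form factor matrix so that its
        sum equals `target` (unity for a closed enclosure), spreading the
        residual across the row in proportion to the existing entries."""
        F = np.asarray(F, dtype=float)
        row_sums = F.sum(axis=1, keepdims=True)
        return F * (target / row_sums)

    # Example: a slightly "leaky" 3-node enclosure whose rows sum to 0.97.
    F = np.array([[0.00, 0.50, 0.47],
                  [0.33, 0.00, 0.64],
                  [0.31, 0.66, 0.00]])
    print(normalize_form_factors(F).sum(axis=1))   # -> [1. 1. 1.]
    ```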

  9. Neutrinoless ββ decay mediated by the exchange of light and heavy neutrinos: the role of nuclear structure correlations

    NASA Astrophysics Data System (ADS)

    Menéndez, J.

    2018-01-01

    Neutrinoless ββ decay nuclear matrix elements calculated with the shell model and energy-density functional theory typically disagree by more than a factor of two in the standard scenario of light-neutrino exchange. In contrast, for a decay mediated by sterile heavy neutrinos the deviations are reduced to about 50%, an uncertainty similar to the one due to short-range effects. We compare matrix elements in the light- and heavy-neutrino-exchange channels, exploring the radial, momentum transfer and angular momentum-parity matrix element distributions, and considering transitions that involve correlated and uncorrelated nuclear states. We argue that the shorter-range heavy-neutrino exchange is less sensitive to collective nuclear correlations, and that discrepancies in matrix elements are mostly due to the treatment of long-range correlations in many-body calculations. Our analysis supports previous studies suggesting that isoscalar pairing correlations, which affect mostly the longer-range part of the neutrinoless ββ decay operator, are partially responsible for the differences between nuclear matrix elements in the standard light-neutrino-exchange mechanism.

  10. Calculation of Five Thermodynamic Molecular Descriptors by Means of a General Computer Algorithm Based on the Group-Additivity Method: Standard Enthalpies of Vaporization, Sublimation and Solvation, and Entropy of Fusion of Ordinary Organic Molecules and Total Phase-Change Entropy of Liquid Crystals.

    PubMed

    Naef, Rudolf; Acree, William E

    2017-06-25

    The calculation of the standard enthalpies of vaporization, sublimation and solvation of organic molecules is presented using a common computer algorithm on the basis of a group-additivity method. The same algorithm is also shown to enable the calculation of their entropy of fusion as well as the total phase-change entropy of liquid crystals. The present method is based on the complete breakdown of the molecules into their constituting atoms and their immediate neighbourhood; the respective calculations of the contributions of the atomic groups by means of the Gauss-Seidel fitting method are based on experimental data collected from the literature. The feasibility of the calculations for each of the mentioned descriptors was verified by means of a 10-fold cross-validation procedure, proving the good to high quality of the predicted values for the three mentioned enthalpies and for the entropy of fusion, whereas the predictive quality for the total phase-change entropy of liquid crystals was poor. The goodness of fit (Q²) and the standard deviation (σ) of the cross-validation calculations for the five descriptors were as follows: 0.9641 and 4.56 kJ/mol (N = 3386 test molecules) for the enthalpy of vaporization, 0.8657 and 11.39 kJ/mol (N = 1791) for the enthalpy of sublimation, 0.9546 and 4.34 kJ/mol (N = 373) for the enthalpy of solvation, 0.8727 and 17.93 J/mol/K (N = 2637) for the entropy of fusion, and 0.5804 and 32.79 J/mol/K (N = 2643) for the total phase-change entropy of liquid crystals. The large discrepancy between the results of the two closely related entropies is discussed in detail. Molecules for which both the standard enthalpies of vaporization and sublimation were calculable enabled the estimation of their standard enthalpy of fusion by simple subtraction of the former from the latter. For 990 of them the experimental enthalpy-of-fusion values are also known, allowing their comparison with predictions and yielding a correlation coefficient R² of 0.6066.
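
    The core of a group-additivity method is a linear model: each molecule is a count vector over atomic groups, and the group contributions are fitted so that the counts times the contributions reproduce the experimental property. The paper fits by Gauss-Seidel iteration; ordinary least squares is shown here for brevity, and all numbers are made up:

    ```python
    import numpy as np

    X = np.array([[2, 1, 0],      # group counts for molecule 1
                  [1, 0, 2],      # ... molecule 2, etc.
                  [3, 2, 1],
                  [0, 1, 1]], dtype=float)
    y = np.array([38.2, 41.5, 66.0, 27.9])   # e.g. experimental dHvap, kJ/mol

    c, *_ = np.linalg.lstsq(X, y, rcond=None)   # fitted group contributions
    residuals = y - X @ c
    sigma = np.sqrt(residuals @ residuals / (len(y) - 1))  # one convention
    print(c, sigma)
    ```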

  11. Approximate first-principles anharmonic calculations of polyatomic spectra using MP2 and B3LYP potentials: comparisons with experiment.

    PubMed

    Roy, Tapta Kanchan; Carrington, Tucker; Gerber, R Benny

    2014-08-21

    Anharmonic vibrational spectroscopy calculations using MP2 and B3LYP computed potential surfaces are carried out for a series of molecules, and frequencies and intensities are compared with those from experiment. The vibrational self-consistent field with second-order perturbation correction (VSCF-PT2) is used in computing the spectra. The test calculations have been performed for the molecules HNO3, C2H4, C2H4O, H2SO4, CH3COOH, glycine, and alanine. Both MP2 and B3LYP give results in good accord with experimental frequencies, though, on the whole, MP2 gives very slightly better agreement. A statistical analysis of deviations in frequencies from experiment is carried out that gives interesting insights. The most probable percentage deviation from experimental frequencies is about -2% (to the red of the experiment) for B3LYP and +2% (to the blue of the experiment) for MP2. There is a higher probability for relatively large percentage deviations when B3LYP is used. The calculated intensities are also found to be in good accord with experiment, but the percentage deviations are much larger than those for frequencies. The results show that both MP2 and B3LYP potentials, used in VSCF-PT2 calculations, account well for anharmonic effects in the spectroscopy of molecules of the types considered.

  12. Percentage depth dose calculation accuracy of model based algorithms in high energy photon small fields through heterogeneous media and comparison with plastic scintillator dosimetry.

    PubMed

    Alagar, Ananda Giri Babu; Mani, Ganesh Kadirampatti; Karunakaran, Kaviarasu

    2016-01-08

    Small fields (smaller than 4 × 4 cm2) are used in stereotactic and conformal treatments, where heterogeneity is normally present. Since dose calculation accuracy in both small fields and heterogeneous media often involves larger discrepancies, the algorithms used by treatment planning systems (TPS) should be evaluated for achieving better treatment results. This report evaluates the accuracy of four model-based algorithms, X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-XiO, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse, against measurement. Measurements were done using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons at square field sizes ranging from 1 × 1 to 4 × 4 cm2. Each heterogeneity was introduced individually at two different depths from the depth of dose maximum (Dmax), one setup nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup was measured separately and compared with the TPS algorithm calculation for the same setup. The percentage normalized root mean squared deviation (%NRMSD), which represents the deviation of the whole CADD curve from the measured one, was calculated. For air and lung heterogeneity, at both 6 and 15 MV, all algorithms show maximum deviation for the 1 × 1 cm2 field size, gradually decreasing as the field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller at 15 MV irrespective of setup. In all heterogeneity setups, the 1 × 1 cm2 field showed maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity lies nearer to Dmax than when the same heterogeneity lies far from it. All algorithms also show maximum deviation in lower-density materials compared with high-density materials.
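
    The paper does not spell out its exact %NRMSD formula here, so the following is one plausible reading (normalization by the maximum of the measured curve is an assumption):

    ```python
    import numpy as np

    def nrmsd_percent(measured, calculated):
        """Percentage normalized root mean squared deviation between a
        measured and a TPS-calculated depth-dose curve, sampled at the
        same depths."""
        measured = np.asarray(measured, dtype=float)
        calculated = np.asarray(calculated, dtype=float)
        rmsd = np.sqrt(np.mean((calculated - measured) ** 2))
        return 100.0 * rmsd / measured.max()
    ```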

  13. Experiments with central-limit properties of spatial samples from locally covariant random fields

    USGS Publications Warehouse

    Barringer, T.H.; Smith, T.E.

    1992-01-01

    When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.

  14. SU-C-BRD-06: Results From a 5 Patient in Vivo Rectal Wall Dosimetry Study Using Plastic Scintillation Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wootton, L; Kudchadker, R; Lee, A

    Purpose: To evaluate the performance characteristics of plastic scintillation detectors (PSDs) in an in vivo environment for external beam radiation, and to establish the usefulness and ease of implementation of a PSD-based in vivo dosimetry system for routine clinical use. Methods: A five-patient IRB-approved in vivo dosimetry study was performed. Five patients with prostate cancer were enrolled and PSDs were used to monitor rectal wall dose and verify the delivered dose for approximately two fractions each week over the course of their treatment (approximately fourteen fractions), resulting in a total of 142 in vivo measurements. A set of two PSDs was fabricated for each patient. At each monitored fraction the PSDs were attached to the anterior surface of an endorectal balloon used to immobilize the patient's prostate during treatment. A CT scan was acquired with a CT-on-rails linear accelerator to localize the detectors and to calculate the dose expected to be delivered to the detectors. Each PSD acquired data in 10 second intervals for the duration of the treatment. The deviation between expected and measured cumulative dose was calculated for each detector at each fraction, and averaged over each patient and over the patient population as a whole. Results: The average difference between expected dose and measured dose ranged from -3.3% to 3.3% for individual patients, with standard deviations between 5.6% and 7.1% for four of the patients. The average difference for the entire population was -0.4% with a standard deviation of 2.8%. The detectors were well tolerated by the patients and the system did not interrupt the clinical workflow. Conclusion: PSDs perform well as in vivo dosimeters, exhibiting good accuracy and precision. This, combined with the practicability of using such a system, positions the PSD as a strong candidate for clinical in vivo dosimetry in the future. This work was supported in part by the National Cancer Institute through an R01 grant (CA120198-01A2) and by the American Legion Auxiliary through the American Auxiliary Fellowship in Cancer Research.

  15. YALE NATURAL RADIOCARBON MEASUREMENTS. PART VI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuiver, M.; Deevey, E.S.

    1961-01-01

    Most of the measurements made since publication of Yale V are included; some measurements, such as a series collected in Greenland, are withheld pending additional information or field work that will make better interpretations possible. In addition to radiocarbon dates of geologic and/or archaeologic interest, recent assays are given of ¹⁴C in lake waters and other lacustrine materials, now normalized for ¹³C content. The newly accepted convention is followed in expressing normalized ¹⁴C values as Δ = δ¹⁴C − (2δ¹³C + 50)[1 + (δ¹⁴C/1000)], where Δ is the per mil deviation of the ¹⁴C of the sample from any contemporary standard (whether organic or a carbonate) after correction of sample and/or standard for real age, for the Suess effect, for normal isotopic fractionation, and for deviations of the ¹⁴C content of the age- and pollution-corrected 19th-century wood standard from that of 95% of the NBS oxalic acid standard; δ¹⁴C is the measured deviation from 95% of the NBS standard, and δ¹³C is the deviation from the NBS limestone standard, both in per mil. These assays are variously affected by artificial ¹⁴C resulting from nuclear tests. (auth)

  16. Inverse correlation between the standard deviation of R-R intervals in supine position and the simplified menopausal index in women with climacteric symptoms.

    PubMed

    Yanagihara, Nobuyuki; Seki, Meikan; Nakano, Masahiro; Hachisuga, Toru; Goto, Yukio

    2014-06-01

    Disturbance of autonomic nervous activity has been thought to play a role in the climacteric symptoms of postmenopausal women. This study was therefore designed to investigate the relationship between autonomic nervous activity and climacteric symptoms in postmenopausal Japanese women. The autonomic nervous activity of 40 Japanese women with climacteric symptoms and 40 Japanese women without climacteric symptoms was measured by power spectral analysis of heart rate variability using a standard hexagonal radar chart. The scores for climacteric symptoms were determined using the simplified menopausal index. Sympathetic excitability and irritability, as well as the standard deviation of mean R-R intervals in supine position, were significantly (P < 0.01, 0.05, and 0.001, respectively) decreased in women with climacteric symptoms. There was a negative correlation between the standard deviation of mean R-R intervals in supine position and the simplified menopausal index score. The lack of control for potential confounding variables was a limitation of this study. In climacteric women, the standard deviation of mean R-R intervals in supine position is negatively correlated with the simplified menopausal index score.

  17. Savant skills in autism: psychometric approaches and parental reports

    PubMed Central

    Howlin, Patricia; Goode, Susan; Hutton, Jane; Rutter, Michael

    2009-01-01

    Most investigations of savant skills in autism are based on individual case reports. The present study investigated rates and types of savant skills in 137 individuals with autism (mean age 24 years). Intellectual ability ranged from severe intellectual impairment to superior functioning. Savant skills were judged from parental reports and specified as ‘an outstanding skill/knowledge clearly above participant's general level of ability and above the population norm’. A comparable definition of exceptional cognitive skills was applied to Wechsler test scores—requiring a subtest score at least 1 standard deviation above general population norms and 2 standard deviations above the participant's own mean subtest score. Thirty-nine participants (28.5%) met criteria for either a savant skill or an exceptional cognitive skill: 15 for an outstanding cognitive skill (most commonly block design); 16 for a savant skill based on parental report (mostly mathematical/calculating abilities); 8 met criteria for both a cognitive and parental rated savant skill. One-third of males showed some form of outstanding ability compared with 19 per cent of females. No individual with a non-verbal IQ below 50 met criteria for a savant skill and, contrary to some earlier hypotheses, there was no indication that individuals with higher rates of stereotyped behaviours/interests were more likely to demonstrate savant skills. PMID:19528018

  18. Glycaemic variability in patients with severe sepsis or septic shock admitted to an Intensive Care Unit.

    PubMed

    Silveira, L M; Basile-Filho, A; Nicolini, E A; Dessotte, C A M; Aguiar, G C S; Stabile, A M

    2017-08-01

    Sepsis is associated with morbidity and mortality, which implies high costs to the global health system. Metabolic alterations that increase glycaemia and glycaemic variability occur during sepsis. The aim was to verify mean glucose levels and glycaemic variability in Intensive Care Unit (ICU) patients with severe sepsis or septic shock. This retrospective and exploratory study involved collection of patients' sociodemographic and clinical data and calculation of severity scores. Glycaemia measurements were used to determine glycaemic variability through the standard deviation and the mean amplitude of glycaemic excursions. Analysis of 116 medical charts and 6730 glycaemia measurements revealed that the majority of patients were male and aged over 60 years. Surgical treatment was the main reason for ICU admission. High blood pressure and diabetes mellitus were the most common comorbidities. Patients who died during the ICU stay presented the highest SOFA scores and mean glycaemia; they also experienced more hypoglycaemia events. Patients with diabetes had higher mean glycaemia and higher glycaemic variability, as evaluated through the standard deviation and the mean amplitude of glycaemic excursions. Organic impairment at ICU admission may underlie glycaemic variability and lead to a less favourable outcome. The high glycaemic variability in patients with diabetes indicates that monitoring of these individuals is crucial to ensure better outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. An epidemiologic study on anthropometric dimensions of 7-11-year-old Iranian children: considering ethnic differences.

    PubMed

    Mirmohammadi, Seyyed Jalil; Hafezi, Rahmatollah; Mehrparvar, Amir Houshang; Gerdfaramarzi, Raziyeh Soltani; Mostaghaci, Mehrdad; Nodoushan, Reza Jafari; Rezaeian, Bibiseyedeh

    2013-01-01

    Anthropometric data can be used to identify the physical dimensions of equipment, furniture, clothing and workstations. The use of poorly designed furniture that fails to fulfil the users' anthropometric dimensions, has a negative impact on human health. In this study, we measured some anthropometric dimensions of Iranian children from different ethnicities. A total of 12,731 Iranian primary school children aged 7-11 years were included in the study and their static anthropometric dimensions were measured. Descriptive statistics such as mean, standard deviation and key percentiles were calculated. All dimensions were compared among different ethnicities and different genders. This study showed significant differences in a set of 22 anthropometric dimensions with regard to gender, age and ethnicity. Turk boys and Arab girls were larger than their contemporaries in different ages. According to the results of this study, difference between genders and among different ethnicities should be taken into account by designers and manufacturers of school furniture. In this study, we measured 22 static anthropometric dimensions of 12,731 Iranian primary school children aged 7-11 years from different ethnicities. Descriptive statistics such as mean, standard deviation and key percentiles were measured for each dimension. This study showed significant differences in a set of 22 anthropometric dimensions in different genders, ages and ethnicities.

  20. Atmospherically deposited PBDEs, pesticides, PCBs, and PAHs in western U.S. National Park fish: Concentrations and consumption guidelines

    USGS Publications Warehouse

    Ackerman, L.K.; Schwindt, A.R.; Simonich, S.L.M.; Koch, D.C.; Blett, T.F.; Schreck, C.B.; Kent, M.L.; Landers, D.H.

    2008-01-01

    Concentrations of polybrominated diphenyl ethers (PBDEs), pesticides, polychlorinated biphenyls (PCBs), and polycyclic aromatic hydrocarbons were measured in 136 fish from 14 remote lakes in 8 western U.S. National Parks/Preserves between 2003 and 2005 and compared to human and wildlife contaminant health thresholds. A sensitive (median detection limit ~18 pg/g wet weight), efficient (61% recovery at 8 ng/g), reproducible (4.1% relative standard deviation (RSD)), and accurate (7% deviation from standard reference material (SRM)) analytical method was developed and validated for these analyses. Concentrations of PCBs, hexachlorobenzene, hexachlorocyclohexanes, DDTs, and chlordanes in western U.S. fish were comparable to or lower than those in mountain fish recently collected from Europe, Canada, and Asia. Dieldrin and PBDE concentrations were higher than recent measurements in mountain fish and Pacific Ocean salmon. Concentrations of most contaminants in western U.S. fish were 1-6 orders of magnitude below calculated recreational fishing contaminant health thresholds. However, lake-average contaminant concentrations in fish exceeded subsistence fishing cancer thresholds in 8 of 14 lakes and wildlife contaminant health thresholds for piscivorous birds in 1 of 14 lakes. These results indicate that atmospherically deposited organic contaminants can accumulate in high elevation fish, reaching concentrations relevant to human and wildlife health. © 2008 American Chemical Society.

  1. Describing Peripancreatic Collections According to the Revised Atlanta Classification of Acute Pancreatitis: An International Interobserver Agreement Study.

    PubMed

    Bouwense, Stefan A; van Brunschot, Sandra; van Santvoort, Hjalmar C; Besselink, Marc G; Bollen, Thomas L; Bakker, Olaf J; Banks, Peter A; Boermeester, Marja A; Cappendijk, Vincent C; Carter, Ross; Charnley, Richard; van Eijck, Casper H; Freeny, Patrick C; Hermans, John J; Hough, David M; Johnson, Colin D; Laméris, Johan S; Lerch, Markus M; Mayerle, Julia; Mortele, Koenraad J; Sarr, Michael G; Stedman, Brian; Vege, Santhi Swaroop; Werner, Jens; Dijkgraaf, Marcel G; Gooszen, Hein G; Horvath, Karen D

    2017-08-01

    Severe acute pancreatitis is associated with peripancreatic morphologic changes as seen on imaging. Uniform communication regarding these morphologic findings is crucial for accurate diagnosis and treatment. For the original 1992 Atlanta classification, interobserver agreement is poor. We hypothesized that for the revised Atlanta classification, interobserver agreement will be better. An international, interobserver agreement study was performed among expert and nonexpert radiologists (n = 14), surgeons (n = 15), and gastroenterologists (n = 8). Representative computed tomographies of all stages of acute pancreatitis were selected from 55 patients and were assessed according to the revised Atlanta classification. The interobserver agreement was calculated among all reviewers and subgroups, that is, expert and nonexpert reviewers; interobserver agreement was defined as poor (≤0.20), fair (0.21-0.40), moderate (0.41-0.60), good (0.61-0.80), or very good (0.81-1.00). Interobserver agreement among all reviewers was good (0.75 [standard deviation, 0.21]) for describing the type of acute pancreatitis and good (0.62 [standard deviation, 0.19]) for the type of peripancreatic collection. Expert radiologists showed the best and nonexpert clinicians the lowest interobserver agreement. Interobserver agreement was good for the revised Atlanta classification, supporting the importance for widespread adaption of this revised classification for clinical and research communications.

  2. An Overview of Interrater Agreement on Likert Scales for Researchers and Practitioners

    PubMed Central

    O'Neill, Thomas A.

    2017-01-01

    Applications of interrater agreement (IRA) statistics for Likert scales are plentiful in research and practice. IRA may be implicated in job analysis, performance appraisal, panel interviews, and any other approach to gathering systematic observations. Any rating system involving subject-matter experts can also benefit from IRA as a measure of consensus. Further, IRA is fundamental to aggregation in multilevel research, which is becoming increasingly common in order to address nesting. Although several technical descriptions of a few specific IRA statistics exist, this paper aims to provide a tractable orientation to common IRA indices to support application. The introductory overview is written with the intent of facilitating contrasts among IRA statistics by critically reviewing equations, interpretations, strengths, and weaknesses. Statistics considered include rwg, rwg*, r′wg, rwg(p), average deviation (AD), awg, standard deviation (Swg), and the coefficient of variation (CVwg). Equations support quick calculation and contrasting of different agreement indices. The article also includes a “quick reference” table and three figures in order to help readers identify how IRA statistics differ and how interpretations of IRA will depend strongly on the statistic employed. A brief consideration of recommended practices involving statistical and practical cutoff standards is presented, and conclusions are offered in light of the current literature. PMID:28553257
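
    As an example of one of the indices reviewed, rwg for a single item compares the observed variance of judges' ratings to the variance expected under a uniform ("no agreement") null distribution; a minimal sketch:

    ```python
    import numpy as np

    def rwg(ratings, n_options):
        """James, Demaree, and Wolf's r_wg for a single item (sketch).

        The uniform-null variance over A Likert response options is
        sigma_EU^2 = (A^2 - 1) / 12; r_wg = 1 - S^2 / sigma_EU^2."""
        s2 = np.var(ratings, ddof=1)
        sigma_eu2 = (n_options ** 2 - 1) / 12.0
        return 1.0 - s2 / sigma_eu2

    print(rwg([4, 4, 5, 4, 5], n_options=5))  # high agreement -> 0.85
    ```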

  3. Faraday dispersion functions of galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ideguchi, Shinsuke; Tashiro, Yuichi; Takahashi, Keitaro

    2014-09-01

    The Faraday dispersion function (FDF), which can be derived from an observed polarization spectrum by Faraday rotation measure synthesis, is a profile of polarized emissions as a function of Faraday depth. We study intrinsic FDFs along sight lines through face-on Milky Way-like galaxies by means of a sophisticated galactic model incorporating three-dimensional MHD turbulence, and investigate how much information the FDF intrinsically contains. Since the FDF reflects the distributions of thermal and cosmic-ray electrons as well as magnetic fields, it has been expected that the FDF could be a new probe for examining the internal structures of galaxies. We, however, find that an intrinsic FDF along a sight line through a galaxy is very complicated, depending significantly on the actual configuration of the turbulence. We perform 800 realizations of turbulence and find no universal shape of the FDF even if we fix the global parameters of the model. We calculate the probability distribution functions of the standard deviation, skewness, and kurtosis of FDFs and compare them for models with different global parameters. Our models predict that the presence of vertical magnetic fields and a large scale height of cosmic-ray electrons tend to make the standard deviation relatively large. In contrast, the differences in skewness and kurtosis are relatively less significant.

  4. Costs of disposable material in the operating room do not show high correlation with surgical time: Implications for hospital payment.

    PubMed

    Delo, Caroline; Leclercq, Pol; Martins, Dimitri; Pirson, Magali

    2015-08-01

    The objectives of this study were to analyze the variation of surgical time and of disposable material costs per surgical procedure, and to analyze the association between disposable material costs and surgical time. Data were recorded in an operating room of a 419-bed general hospital over a period of three months (n = 1556 surgical procedures). Disposable material used per procedure was recorded through a barcode scanning method. The average cost (standard deviation) of disposable material was €183.66 (€183.44). The mean surgical time (standard deviation) was 96 min (63). The results showed that the homogeneity of operating time and disposable material (DM) costs per surgical procedure was quite good. The correlation between surgical time and DM costs is not high (r = 0.65). In a context of Diagnosis Related Group (DRG) based hospital payment, it is important that cost information systems are able to precisely calculate costs per case. Our results show that the correlation between surgical time and costs of disposable materials is not good. Therefore, empirical data or itemized lists should be used instead of surgical time as a cost driver for the allocation of costs of disposable materials to patients. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Final height in survivors of childhood cancer compared with Height Standard Deviation Scores at diagnosis.

    PubMed

    Knijnenburg, S L; Raemaekers, S; van den Berg, H; van Dijk, I W E M; Lieverst, J A; van der Pal, H J; Jaspers, M W M; Caron, H N; Kremer, L C; van Santen, H M

    2013-04-01

    Our study aimed to evaluate final height in a cohort of Dutch childhood cancer survivors (CCS) and assess possible determinants of final height, including height at diagnosis. We calculated standard deviation scores (SDS) for height at initial cancer diagnosis and height in adulthood in a cohort of 573 CCS. Multivariable regression analyses were carried out to estimate the influence of different determinants on height SDS at follow-up. Overall, survivors had a normal height SDS at cancer diagnosis. However, at follow-up in adulthood, 8.9% had a height ≤-2 SDS. Height SDS at diagnosis was an important determinant for adult height SDS. Children treated with (higher doses of) radiotherapy showed significantly reduced final height SDS. Survivors treated with total body irradiation (TBI) and craniospinal radiation had the greatest loss in height (-1.56 and -1.37 SDS, respectively). Younger age at diagnosis contributed negatively to final height. Height at diagnosis was an important determinant for height SDS at follow-up. Survivors treated with TBI, cranial and craniospinal irradiation should be monitored periodically for adequate linear growth, to enable treatment on time if necessary. For correct interpretation of treatment-related late effects studies in CCS, pre-treatment data should always be included.
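
    A height standard deviation score is a z-score relative to an age- and sex-matched reference population; a minimal sketch with made-up reference values (not taken from the study):

    ```python
    def height_sds(height_cm, ref_mean_cm, ref_sd_cm):
        """Height standard deviation score (z-score) against an age- and
        sex-specific reference population."""
        return (height_cm - ref_mean_cm) / ref_sd_cm

    # Example: an adult height of 165 cm where the reference is 174 +/- 7 cm
    print(height_sds(165, 174, 7))   # -> about -1.29 SDS
    ```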

  6. [Online endpoint detection algorithm for blending process of Chinese materia medica].

    PubMed

    Lin, Zhao-Zhou; Yang, Chan; Xu, Bing; Shi, Xin-Yuan; Zhang, Zhi-Qiang; Fu, Jing; Qiao, Yan-Jiang

    2017-03-01

    The blending process, an essential part of pharmaceutical preparation, has a direct influence on the homogeneity and stability of solid dosage forms. With the official release of the Guidance for Industry PAT, online process analysis techniques have been increasingly reported in applications to blending processes, but research on endpoint detection algorithms is still at an early stage. By progressively increasing the window size of the moving block standard deviation (MBSD), a novel endpoint detection algorithm was proposed to extend the plain MBSD from the off-line to the online scenario, and was used to determine the endpoint of the blending process of Chinese medicine dispensing granules. Through online tuning of the window size, status changes of the materials during blending were reflected in the standard deviation calculation in real time. The proposed method was tested separately on the blending processes of dextrin and three other traditional Chinese medicine extracts. All results showed that, compared with the traditional MBSD method, the proposed MBSD method with a progressively increasing window size more clearly reflects the status changes of the materials during blending, so it is suitable for online application. Copyright© by the Chinese Pharmaceutical Association.
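
    A minimal sketch of the growing-window idea (not the published implementation; the synthetic signal is illustrative). The standard deviation is recomputed over an expanding window as new observations arrive, and the blend is considered homogeneous once the value stays below a chosen threshold:

    ```python
    import numpy as np

    def growing_window_mbsd(signal, start=3):
        """Moving block standard deviation with a progressively
        increasing window: at each new observation, compute the sample
        standard deviation over all data acquired so far."""
        signal = np.asarray(signal, dtype=float)
        return np.array([np.std(signal[:end], ddof=1)
                         for end in range(start, len(signal) + 1)])

    # Example with a synthetic signal that stabilizes as blending proceeds:
    rng = np.random.default_rng(1)
    sig = np.concatenate([rng.normal(0, 1.0, 30), rng.normal(0, 0.1, 70)])
    print(growing_window_mbsd(sig)[-1])   # small SD once blend is homogeneous
    ```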

  7. Evaluation of a Pitot type spirometer in helium/oxygen mixtures.

    PubMed

    Søndergaard, S; Kárason, S; Lundin, S; Stenqvist, O

    1998-08-01

    Mixtures of helium and oxygen are regaining a place in the treatment of obstruction of the upper and lower respiratory tract. The parenchymal changes during the course of IRDS or ARDS may also benefit from the reintroduction of helium/oxygen. In order to monitor and document the effect of low-density gas mixtures, we evaluated the Datex AS/3 Side Stream Spirometry module with D-lite (Datex-Engstrom Instrumentarium Corporation, Finland) against two gold standards. Under conditions simulating controlled and spontaneous ventilation with gas mixtures of He (approx. 80, 50, and 20%)/O2 or N2 (approx. 21 and 79%)/O2, simultaneous measurements using a Biotek Ventilator Tester (Bio-Tek Instr., Vermont, USA) or a body plethysmograph (SensorMedics System, Anaheim, USA) were correlated with data from the spirometry module. Data were analyzed according to a statistical regression model resulting in a best-fit equation based on density, voltage, and volume measurements. As expected, the D-lite (a modified Pitot tube) showed density-dependent behaviour. Regression equations and percentage deviations of estimated versus measured values were calculated. Measurements with the D-lite using low-density gases are satisfactorily contained in best-fit equations with a standard deviation of less than 5% during all ventilatory modes and mixtures.

  8. Water Level Monitoring on Tibetan Lakes Based on Icesat and Envisat Data Series

    NASA Astrophysics Data System (ADS)

    Li, H. W.; Qiao, G.; Wu, Y. J.; Cao, Y. J.; Mi, H.

    2017-09-01

    Satellite altimetry is an effective method to monitor the water levels of lakes over a wide range, especially in sparsely populated areas such as the Tibet Plateau (TP). To provide high quality data for time-series change detection of lake water levels, an automatic and efficient algorithm for lake water footprint (LWF) detection over a wide area was used. Based on ICESat GLA14 Release 634 data and ENVISat GDR 1 Hz data, water levels of 167 lakes were obtained from the ICESat data series, and water levels of 120 lakes were obtained from the ENVISat data series; 67 lakes were covered by both data series. The mean standard deviation over all lakes is 0.088 m for ICESat and 0.339 m for ENVISat. Combining multi-source altimetry data helps to obtain longer and denser water level time series, study lake level changes, manage water resources and better understand the impacts of climate change. In addition, the standard deviation of the LWF elevations used to calculate the water levels was analyzed by month. Based on a lake data set for the TP from the 1960s, 2005 and 2014 published in Scientific Data, it is found that water level changes on the TP show a strong spatial correlation with area changes.

  9. Probing optical band gaps at the nanoscale in NiFe₂O₄ and CoFe₂O₄ epitaxial films by high resolution electron energy loss spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dileep, K.; Loukya, B.; Datta, R., E-mail: ranjan@jncasr.ac.in

    2014-09-14

    Nanoscale optical band gap variations in epitaxial thin films of two different spinel ferrites, i.e., NiFe₂O₄ (NFO) and CoFe₂O₄ (CFO), have been investigated by spatially resolved high resolution electron energy loss spectroscopy. Experimentally, NFO and CFO show indirect/direct band gaps around 1.52 eV/2.74 and 2.3 eV, and 1.3 eV/2.31 eV, respectively, for the ideal inverse spinel configuration, with considerable standard deviation in the band gap values for CFO due to various levels of deviation from the ideal inverse spinel structure. Direct probing of the regions in both systems with a tetrahedral A-site cation vacancy, which is distinct from the ideal inverse spinel configuration, shows significantly smaller band gap values. The experimental results are supported by band gap values calculated with density functional theory using the modified Becke-Johnson exchange-correlation potential for the different cation configurations.

  10. Prospects for Higgs coupling measurements in SUSY with radiatively-driven naturalness

    NASA Astrophysics Data System (ADS)

    Bae, Kyu Jung; Baer, Howard; Nagata, Natsumi; Serce, Hasan

    2015-08-01

    In the post-LHC8 world, where a Standard Model-like Higgs boson has been established but there is no sign of supersymmetry (SUSY), the detailed profiling of the Higgs boson properties has emerged as an important road towards discovery of new physics. We present calculations of the expected deviations in the Higgs boson couplings κτ,b, κt, κW,Z, κg and κγ versus the naturalness measure ΔEW. Low values of ΔEW ~ 10-30 give rise to a natural little hierarchy characterized by light Higgsinos with a mass of μ ~ mZ, while top squarks are highly mixed but lie in the several TeV range. For such models with radiatively driven naturalness, one expects the Higgs boson h to look very SM-like, although deviations can occur. The more promising road to SUSY discovery requires direct Higgsino pair production at a high energy e+e- collider operating with a center-of-mass energy √s > 2μ ~ √(2ΔEW) mZ.

  11. Reproducibility of a Standardized Actigraphy Scoring Algorithm for Sleep in a US Hispanic/Latino Population

    PubMed Central

    Patel, Sanjay R.; Weng, Jia; Rueschman, Michael; Dudley, Katherine A.; Loredo, Jose S.; Mossavar-Rahmani, Yasmin; Ramirez, Maricelle; Ramos, Alberto R.; Reid, Kathryn; Seiger, Ashley N.; Sotres-Alvarez, Daniela; Zee, Phyllis C.; Wang, Rui

    2015-01-01

    Study Objectives: While actigraphy is considered objective, the process of setting rest intervals to calculate sleep variables is subjective. We sought to evaluate the reproducibility of actigraphy-derived measures of sleep using a standardized algorithm for setting rest intervals. Design: Observational study. Setting: Community-based. Participants: A random sample of 50 adults aged 18–64 years free of severe sleep apnea participating in the Sueño sleep ancillary study to the Hispanic Community Health Study/Study of Latinos. Interventions: N/A. Measurements and Results: Participants underwent 7 days of continuous wrist actigraphy and completed daily sleep diaries. Studies were scored twice by each of two scorers. Rest intervals were set using a standardized hierarchical approach based on event marker, diary, light, and activity data. Sleep/wake status was then determined for each 30-sec epoch using a validated algorithm, and this was used to generate 11 variables: mean nightly sleep duration, nap duration, 24-h sleep duration, sleep latency, sleep maintenance efficiency, sleep fragmentation index, sleep onset time, sleep offset time, sleep midpoint time, standard deviation of sleep duration, and standard deviation of sleep midpoint. Intra-scorer intraclass correlation coefficients (ICCs) were high, ranging from 0.911 to 0.995 across all 11 variables. Similarly, inter-scorer ICCs were high, also ranging from 0.911 to 0.995, and mean inter-scorer differences were small. Bland-Altman plots did not reveal any systematic disagreement in scoring. Conclusions: With use of a standardized algorithm to set rest intervals, scoring of actigraphy for the purpose of generating a wide array of sleep variables is highly reproducible. Citation: Patel SR, Weng J, Rueschman M, Dudley KA, Loredo JS, Mossavar-Rahmani Y, Ramirez M, Ramos AR, Reid K, Seiger AN, Sotres-Alvarez D, Zee PC, Wang R. Reproducibility of a standardized actigraphy scoring algorithm for sleep in a US Hispanic/Latino population. SLEEP 2015;38(9):1497–1503. PMID:25845697

  12. Radiometric calibration and SNR calculation of a SWIR imaging telescope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilmaz, Ozgur; Turk, Fethi; Selimoglu, Ozgur

    2012-09-06

    Radiometric calibration of an imaging telescope is usually performed using a uniform illumination sphere in a laboratory. In this study, we used open-sky images taken during bright day conditions to calibrate our telescope. We found a dark signal offset value and a linear response coefficient value for each pixel by using three different algorithms. We then applied these coefficients to the acquired images and considerably lowered the image non-uniformity. Calibration can be repeated during operation of the telescope with an object that has better uniformity than the open sky. The SNR (signal-to-noise ratio) of each pixel was also calculated from the open-sky images using the temporal mean and standard deviation. It is found that the SNR is greater than 80 for all pixels even at low light levels.
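
    A minimal sketch of the per-pixel correction and temporal SNR described above (function names are illustrative; the fitting of the offset and gain from multiple light levels is omitted):

    ```python
    import numpy as np

    def correct_frame(raw, offset, gain):
        """Apply a per-pixel dark offset and linear response coefficient.
        `raw`, `offset` and `gain` are (rows, cols) arrays."""
        return (raw - offset) * gain

    def temporal_snr(frames):
        """Per-pixel SNR from a (n_frames, rows, cols) stack of open-sky
        images at a constant light level: temporal mean divided by
        temporal standard deviation."""
        return frames.mean(axis=0) / frames.std(axis=0, ddof=1)
    ```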

  13. Stability and charge separation of different CH3NH3SnI3/TiO2 interface: A first-principles study

    NASA Astrophysics Data System (ADS)

    Yang, Zhenzhen; Wang, Yuanxu; Liu, Yunyan

    2018-05-01

    The interface has an important effect on charge separation in perovskite solar cells. Using first-principles calculations, we studied several different interfaces between CH3NH3SnI3 and TiO2. The interfacial structure and electronic structure of these interfaces are thoroughly explored. We found that the SnI2/anatase (SnI2/A) system is more stable than the other three systems, because an anatase surface lets the Sn–I bond relax back to its pristine value faster than a rutile surface does, and the SnI2/A system has a smaller standard deviation. The calculated plane-averaged electrostatic potential and the density of states suggest that the SnI2/anatase interface gives a better separation of photo-generated electron-hole pairs.

  14. Simultaneous Determination of Multiple Ginsenosides in Panax ginseng Herbal Medicines with One Single Reference Standard.

    PubMed

    Wu, Chunwei; Guan, Qingxiao; Wang, Shumei; Rong, Yueying

    2017-01-01

    Root of Panax ginseng C. A. Mey (Renseng in Chinese) is a famous Traditional Chinese Medicine. Ginsenosides are the major bioactive components. However, the shortage and high cost of some ginsenoside reference standards make quality control of P. ginseng difficult. A method, single standard for determination of multicomponents (SSDMC), was developed for the simultaneous determination of nine ginsenosides in P. ginseng (ginsenosides Rg1, Re, Rf, Rg2, Rb1, Rc, Rb2, Rb3 and Rd). The analytes were separated on an Inertsil ODS-3 C18 column (250 mm × 4.6 mm, 5 μm) with gradient elution of acetonitrile and water. The flow rate was 1 mL/min and the detection wavelength was set at 203 nm. The feasibility and accuracy of SSDMC were checked against the external standard method, and various high-performance liquid chromatographic (HPLC) instruments and chromatographic conditions were investigated to verify its applicability. Using ginsenoside Rg1 as the internal reference substance, the contents of the other eight ginsenosides were calculated by HPLC according to conversion factors (F). The method was validated for linearity (r2 ≥ 0.9990), precision (relative standard deviation [RSD] ≤ 2.9%), accuracy (97.5%-100.8%, RSD ≤ 1.6%), repeatability, and stability. There was no significant difference between the SSDMC method and the external standard method. The new SSDMC method can be considered an ideal means to analyze components for which reference standards are not readily available. Summary: an SSDMC method was established by HPLC for the simultaneous determination of nine ginsenosides in Panax ginseng (ginsenosides Rg1, Re, Rf, Rg2, Rb1, Rc, Rb2, Rb3, Rd); various chromatographic conditions were investigated to verify the applicability of the conversion factors; and the feasibility and accuracy of SSDMC were checked by the external standard method. Abbreviations used: DRT: Different value of retention time; F: Conversion factor; HPLC: High-performance Liquid Chromatography; LOD: Limit of detection; LOQ: Limit of quantitation; PD: Percent difference; PPD: 20(S)-protopanaxadiol; PPT: 20(S)-protopanaxatriol; RSD: Relative standard deviation; SSDMC: Single Standard for Determination of Multicomponents; TCM: Traditional Chinese Medicine.

  15. Soiling of building envelope surfaces and its effect on solar reflectance – Part III: Interlaboratory study of an accelerated aging method for roofing materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sleiman, Mohamad; Chen, Sharon; Gilbert, Haley E.

    A laboratory method to simulate natural exposure of roofing materials has been reported in a companion article. Here, in the current article, we describe the results of an international, nine-participant interlaboratory study (ILS) conducted in accordance with ASTM Standard E691-09 to establish the precision and reproducibility of this protocol. The accelerated soiling and weathering method was applied four times by each laboratory to replicate coupons of 12 products representing a wide variety of roofing categories (single-ply membrane, factory-applied coating (on metal), bare metal, field-applied coating, asphalt shingle, modified-bitumen cap sheet, clay tile, and concrete tile). Participants reported initial and laboratory-aged values of solar reflectance and thermal emittance. Measured solar reflectances were consistent within and across eight of the nine participating laboratories. Measured thermal emittances reported by six participants exhibited comparable consistency. For solar reflectance, the accelerated aging method is both repeatable and reproducible within an acceptable range of standard deviations: the repeatability standard deviation sr ranged from 0.008 to 0.015 (relative standard deviation of 1.2-2.1%) and the reproducibility standard deviation sR ranged from 0.022 to 0.036 (relative standard deviation of 3.2-5.8%). The ILS confirmed that the accelerated aging method can be reproduced by multiple independent laboratories with acceptable precision. In conclusion, this study supports the adoption of the accelerated aging practice to speed the evaluation and performance rating of new cool roofing materials.
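
    For readers who want to reproduce the precision statistics, this is a compact sketch of the ASTM E691 repeatability and reproducibility standard deviations computed from a labs-by-replicates table. The helper name and the synthetic numbers are illustrative, not taken from the ILS.

```python
import numpy as np

def e691_precision(x: np.ndarray) -> tuple[float, float]:
    """Repeatability (sr) and reproducibility (sR) per ASTM E691.

    x: (p_labs, n_replicates) measurements of one product,
    e.g. solar reflectance after accelerated aging.
    """
    p, n = x.shape
    cell_var = x.var(axis=1, ddof=1)      # within-lab variances
    sr2 = cell_var.mean()                 # pooled repeatability variance
    sx2 = x.mean(axis=1).var(ddof=1)      # variance of lab means
    sL2 = max(sx2 - sr2 / n, 0.0)         # between-lab component
    return float(np.sqrt(sr2)), float(np.sqrt(sr2 + sL2))

# Nine labs, four applications of the aging method to one product
rng = np.random.default_rng(2)
lab_bias = rng.normal(0.60, 0.02, size=(9, 1))
data = lab_bias + rng.normal(0.0, 0.01, size=(9, 4))  # replicate scatter
sr, sR = e691_precision(data)
print(f"sr={sr:.3f}, sR={sR:.3f}")  # sR >= sr by construction
```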

  16. Motion-robust intensity-modulated proton therapy for distal esophageal cancer.

    PubMed

    Yu, Jen; Zhang, Xiaodong; Liao, Li; Li, Heng; Zhu, Ronald; Park, Peter C; Sahoo, Narayan; Gillin, Michael; Li, Yupeng; Chang, Joe Y; Komaki, Ritsuko; Lin, Steven H

    2016-03-01

    To develop methods for evaluation and mitigation of dosimetric impact due to respiratory and diaphragmatic motion during free breathing in treatment of distal esophageal cancers using intensity-modulated proton therapy (IMPT). This was a retrospective study on 11 patients with distal esophageal cancer. For each patient, four-dimensional computed tomography (4D CT) data were acquired, and a nominal dose was calculated on the average phase of the 4D CT. The changes of water equivalent thickness (ΔWET) to cover the treatment volume from the peak of inspiration to the valley of expiration were calculated for a full range of beam angle rotation. Two IMPT plans were calculated: one at beam angles corresponding to small ΔWET and one at beam angles corresponding to large ΔWET. Four patients were selected for the calculation of 4D-robustness-optimized IMPT plans due to large motion-induced dose errors generated in conventional IMPT. To quantitatively evaluate motion-induced dose deviation, the authors calculated the lowest dose received by 95% (D95) of the internal clinical target volume for the nominal dose, the D95 calculated on the maximum inhale and exhale phases of the 4D CT (DCT0 and DCT50), the 4D composite dose, and the 4D dynamic dose for a single fraction. The dose deviation increased with the average ΔWET of the implemented beams, ΔWETave. When ΔWETave was less than 5 mm, the dose error was less than 1 cobalt gray equivalent based on DCT0 and DCT50. The dose deviation determined on the basis of DCT0 and DCT50 was proportionally larger than that determined on the basis of the 4D composite dose. The 4D-robustness-optimized IMPT plans notably reduced the overall dose deviation of multiple fractions and the dose deviation caused by the interplay effect in a single fraction. In IMPT for distal esophageal cancer, ΔWET analysis can be used to select the beam angles that are least affected by respiratory and diaphragmatic motion. To further reduce dose deviation, the 4D-robustness optimization can be implemented for IMPT planning. Calculation of DCT0 and DCT50 is a conservative method to estimate the motion-induced dose errors.

  17. Evaluation of image quality metrics for the prediction of subjective best focus.

    PubMed

    Kilintari, Marina; Pallikaris, Aristophanis; Tsiklis, Nikolaos; Ginis, Harilaos S

    2010-03-01

    Seven existing and three new image quality metrics were evaluated in terms of their effectiveness in predicting subjective cycloplegic refraction. Monochromatic wavefront aberrations (WA) were measured in 70 eyes using a Shack-Hartmann based device (Complete Ophthalmic Analysis System; Wavefront Sciences). Subjective cycloplegic spherocylindrical correction was obtained using a standard manifest refraction procedure. The dioptric amount required to optimize each metric was calculated and compared with the subjective refraction result. Metrics included monochromatic and polychromatic variants, as well as variants taking into consideration the Stiles and Crawford effect (SCE). WA measurements were performed using infrared light and converted to visible before all calculations. The mean difference between subjective cycloplegic and WA-derived spherical refraction ranged from 0.17 to 0.36 diopters (D), while paraxial curvature resulted in a difference of 0.68 D. Monochromatic metrics exhibited smaller mean differences between subjective cycloplegic and objective refraction. Consideration of the SCE reduced the standard deviation (SD) of the difference between subjective and objective refraction. All metrics exhibited similar performance in terms of accuracy and precision. We hypothesize that errors pertaining to the conversion between infrared and visible wavelengths rather than calculation method may be the limiting factor in determining objective best focus from near infrared WA measurements.

  18. Poster — Thur Eve — 11: Validation of the orthopedic metallic artifact reduction tool for CT simulations at the Ottawa Hospital Cancer Centre

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutherland, J; Foottit, C

    Metallic implants in patients can produce image artifacts in kilovoltage CT simulation images which can introduce noise and inaccuracies in CT number, affecting anatomical segmentation and dose distributions. The commercial orthopedic metal artifact reduction algorithm (O-MAR) (Philips Healthcare System) was recently made available on CT simulation scanners at our institution. This study validated the clinical use of O-MAR by investigating its effects on CT number and dose distributions. O-MAR corrected and uncorrected images were acquired with a Philips Brilliance Big Bore CT simulator of a cylindrical solid water phantom that contained various plugs (including metal) of known density. CT number accuracy was investigated by determining the mean and standard deviation in regions of interest (ROI) within each plug for uncorrected and O-MAR corrected images and comparing with no-metal image values. Dose distributions were calculated using the Monaco treatment planning system. Seven open fields were equally spaced about the phantom around a ROI near the center of the phantom. These were compared to a "correct" dose distribution calculated by overriding electron densities in a no-metal phantom image to produce an image containing metal but no artifacts. An overall improvement in CT number and dose distribution accuracy was achieved by applying the O-MAR correction. Mean CT numbers and standard deviations were found to be generally improved. Exceptions included lung equivalent media, which is consistent with vendor specified contraindications. Dose profiles were found to vary by ±4% between uncorrected or O-MAR corrected images, with O-MAR producing doses closer to ground truth.

  19. High-Resolution Study of the First Stretching Overtones of H3Si79Br.

    PubMed

    Ceausu; Graner; Bürger; Mkadmi; Pracna; Lafferty

    1998-11-01

    The Fourier transform infrared spectrum of monoisotopic H3Si79Br (resolution 7.7 × 10⁻³ cm⁻¹) was studied from 4200 to 4520 cm⁻¹, in the region of the first overtones of the Si-H stretching vibration. The investigation of the spectrum revealed the presence of two band systems, the first consisting of one parallel (ν0 = 4340.2002 cm⁻¹) and one perpendicular (ν0 = 4342.1432 cm⁻¹) strong component, and the second of one parallel (ν0 = 4405.789 cm⁻¹) and one perpendicular (ν0 = 4416.233 cm⁻¹) weak component. The rovibrational analysis shows strong local perturbations for both the strong and weak systems. Seven hundred eighty-one nonzero-weighted transitions belonging to the strong system [the (200) manifold in the local mode picture] were fitted to a simple model involving a perpendicular component interacting by a weak Coriolis resonance with a parallel component. The most severely perturbed transitions (whose |obs − calc| values exceeded 3 × 10⁻³ cm⁻¹) were given zero weights. The standard deviations of the fit were 1.0 × 10⁻³ and 0.69 × 10⁻³ cm⁻¹ for the parallel and the perpendicular components, respectively. The weak band system, severely perturbed by many "dark" perturbers, was fitted to a model involving one parallel and one perpendicular band, connected by a Coriolis-type resonance. The K″·ΔK = +10 to +18 subbands of the perpendicular component, which showed very large observed − calculated values (approximately 0.5 cm⁻¹), were excluded from this calculation. The standard deviations of the fit were 11 × 10⁻³ and 13 × 10⁻³ cm⁻¹ for the parallel and the perpendicular components, respectively. Copyright 1998 Academic Press.

  20. Semi-automated and automated glioma grading using dynamic susceptibility-weighted contrast-enhanced perfusion MRI relative cerebral blood volume measurements.

    PubMed

    Friedman, S N; Bambrough, P J; Kotsarini, C; Khandanpour, N; Hoggard, N

    2012-12-01

    Despite the established role of MRI in the diagnosis of brain tumours, histopathological assessment remains the clinically used technique, especially for the glioma group. Relative cerebral blood volume (rCBV) is a dynamic susceptibility-weighted contrast-enhanced perfusion MRI parameter that has been shown to correlate to tumour grade, but assessment requires a specialist and is time consuming. We developed analysis software to determine glioma gradings from perfusion rCBV scans in a manner that is quick, easy and does not require a specialist operator. MRI perfusion data from 47 patients with different histopathological grades of glioma were analysed with custom-designed software. Semi-automated analysis was performed with a specialist and non-specialist operator separately determining the maximum rCBV value corresponding to the tumour. Automated histogram analysis was performed by calculating the mean, standard deviation, median, mode, skewness and kurtosis of rCBV values. All values were compared with the histopathologically assessed tumour grade. A strong correlation between specialist and non-specialist observer measurements was found. Significantly different values were obtained between tumour grades using both semi-automated and automated techniques, consistent with previous results. The raw (unnormalised) data single-pixel maximum rCBV semi-automated analysis value had the strongest correlation with glioma grade. Standard deviation of the raw data had the strongest correlation of the automated analysis. Semi-automated calculation of raw maximum rCBV value was the best indicator of tumour grade and does not require a specialist operator. Both semi-automated and automated MRI perfusion techniques provide viable non-invasive alternatives to biopsy for glioma tumour grading.
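
    The automated histogram analysis lends itself to a short sketch: given the rCBV values within a tumour ROI, the six summary statistics can be computed as below. The binned-mode convention for a continuous variable is one plausible choice, not necessarily the authors'; all names and data are assumptions.

```python
import numpy as np
from scipy import stats

def rcbv_histogram_metrics(rcbv: np.ndarray) -> dict:
    """Automated summary of tumour-ROI rCBV values.

    rcbv: 1D array of relative cerebral blood volume values.
    The mode of a continuous variable is taken here as the centre
    of the fullest histogram bin.
    """
    counts, edges = np.histogram(rcbv, bins=50)
    j = counts.argmax()
    mode = 0.5 * (edges[j] + edges[j + 1])
    return {
        "mean": rcbv.mean(),
        "std": rcbv.std(ddof=1),   # the strongest automated correlate
        "median": np.median(rcbv),
        "mode": mode,
        "skewness": stats.skew(rcbv),
        "kurtosis": stats.kurtosis(rcbv),
    }

rng = np.random.default_rng(3)
print(rcbv_histogram_metrics(rng.gamma(2.0, 1.5, size=2000)))
```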

  1. Early Childhood Stress and Child Age Predict Longitudinal Increases in Obesogenic Eating Among Low-Income Children.

    PubMed

    Miller, Alison L; Gearhardt, Ashley N; Retzloff, Lauren; Sturza, Julie; Kaciroti, Niko; Lumeng, Julie C

    2018-01-31

    To identify whether psychosocial stress exposure during early childhood predicts subsequent increased eating in the absence of hunger (EAH), emotional overeating, food responsiveness, and enjoyment of food. This was an observational longitudinal study. Among 207 low-income children (54.6% non-Hispanic white, 46.9% girls), early childhood stress exposure was measured by parent report and a stress exposure index calculated, with higher scores indicating more stress exposure. Eating behaviors were measured in early (mean, 4.3; standard deviation, 0.5 years) and middle (mean, 7.9; standard deviation, 0.7 years) childhood. Observed EAH was assessed by measuring kilocalories of palatable food the child consumed after a meal. Parents reported on child eating behaviors on the Child Eating Behavior Questionnaire. Child weight and height were measured and body mass index z score (BMIz) calculated. Multivariable linear regression, adjusting for child sex, race/ethnicity, and BMIz, was used to examine the association of stress exposure with rate of change per year in each child eating behavior. Early childhood stress exposure predicted yearly increases in EAH (β = 0.14; 95% confidence interval, 0.002, 0.27) and Emotional Overeating (β = 0.14; 95% confidence interval, 0.008, 0.27). Stress exposure was not associated with Food Responsiveness (trend for decreased Enjoyment of Food; β = -0.13; 95% confidence interval, 0.002, -0.26). All child obesogenic eating behaviors increased with age (P < .05). Early stress exposure predicted increases in child eating behaviors known to associate with overweight/obesity. Psychosocial stress may confer overweight/obesity risk through eating behavior pathways. Targeting eating behaviors may be an important prevention strategy for children exposed to stress. Copyright © 2018 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  2. Calibrated Color and Albedo Maps of Mercury

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.; Lucey, P. G.

    1996-03-01

    In order to determine the albedo and color of the mercurian surface, we are completing calibrated mosaics of Mariner 10 image data. A set of clear filter mosaics is being compiled in such a way as to maximize the signal-to-noise ratio of the data and to allow for a quantitative measure of the precision of the data on a pixel-by-pixel basis. Three major imaging sequences of Mercury were acquired by Mariner 10: incoming first encounter (centered at 20S, 2E), outgoing first encounter (centered at 20N, 175E), and southern hemisphere second encounter (centered at 40S, 100E). For each sequence we are making separate mosaics for each camera (A and B) in order to have independent measurements. For each mosaic, regions of overlap from frame-to-frame are being averaged and the attendant standard deviations are being calculated. Due to the highly redundant nature of the data, each pixel in each mosaic will be an average calculated from 1-10 images. Each mosaic will have a corresponding standard deviation and n (number of measurements) map. A final mosaic will be created by averaging the six independent mosaics. This procedure lessens the effects of random noise and calibration residuals. From these data an albedo map will be produced using an improved photometric function for the Moon. A similar procedure is being followed for the lower resolution color sequences (ultraviolet, blue, orange, ultraviolet polarized). These data will be calibrated to absolute units through comparison of Mariner 10 images acquired of the Moon and Jupiter. Spectral interpretation of these new color and albedo maps will be presented with an emphasis on comparison with the Moon.
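
    A minimal sketch of the frame-averaging bookkeeping described above: stacking co-registered frames with NaN outside each frame's coverage yields the mean, standard deviation, and n maps in a few lines. Shapes and the NaN convention are assumptions, not the project's actual pipeline.

```python
import numpy as np

def mosaic_stats(frames: np.ndarray):
    """Combine co-registered frames into mean, std, and n maps.

    frames: (n_images, rows, cols) stack with NaN where an image
    does not cover a pixel, so each mosaic pixel is an average of
    however many frames (1-10 for Mariner 10) actually saw it.
    """
    n_map = (~np.isnan(frames)).sum(axis=0)
    mean_map = np.nanmean(frames, axis=0)
    std_map = np.nanstd(frames, axis=0)  # 0 where only one frame contributes
    return mean_map, std_map, n_map

# Two tiny overlapping frames: the shared column averages to 11.0, n = 2
a = np.full((2, 2, 3), np.nan)
a[0, :, :2] = 10.0
a[1, :, 1:] = 12.0
mean_map, std_map, n_map = mosaic_stats(a)
print(mean_map[0], n_map[0])
```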

  3. Metabolic parameters linked by phenotype microarray to acid resistance profiles of poultry-associated Salmonella enterica.

    PubMed

    Guard, Jean; Rothrock, Michael J; Shah, Devendra H; Jones, Deana R; Gast, Richard K; Sanchez-Ingunza, Roxana; Madsen, Melissa; El-Attrache, John; Lungu, Bwalya

    Phenotype microarrays were analyzed for 51 datasets derived from Salmonella enterica. The top 4 serotypes associated with poultry products and one associated with turkey, respectively Typhimurium, Enteritidis, Heidelberg, Infantis and Senftenberg, were represented. Datasets were partitioned initially into two clusters based on ranking by values at pH 4.5 (PM10 A03). Negative control wells were used to establish 90 respiratory units as the point differentiating acid resistance from sensitive strains. Thus, 24 isolates that appeared most acid-resistant were compared initially to 27 that appeared most acid-sensitive (24 × 27 format). Paired cluster analysis was also done and it included the 7 most acid-resistant and -sensitive datasets (7 × 7 format). Statistical analyses of ranked data were then calculated in order of standard deviation, probability value by the Student's t-test and a measure of the magnitude of difference called effect size. Data were reported as significant if, by order of filtering, the following parameters were calculated: i) a standard deviation of 24 respiratory units or greater from all datasets for each chemical, ii) a probability value of less than or equal to 0.03 between clusters and iii) an effect size of at least 0.50 or greater between clusters. Results suggest that between 7.89% and 23.16% of 950 chemicals differentiated acid-resistant isolates from sensitive ones, depending on the format applied. Differences were more evident at the extremes of phenotype using the subset of data in the paired 7 × 7 format. Results thus provide a strategy for selecting compounds for additional research, which may impede the emergence of acid-resistant Salmonella enterica in food. Published by Elsevier Masson SAS.
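
    The three-stage filtering can be sketched directly. The effect-size measure here is Cohen's d with a pooled standard deviation, which is an assumption, since the abstract does not name the exact formula; all other names are illustrative.

```python
import numpy as np
from scipy import stats

def differentiating_chemicals(resistant, sensitive):
    """Three-step filter over phenotype-microarray respiratory units.

    resistant, sensitive: (n_isolates, n_chemicals) arrays.
    A chemical is kept if, in order: pooled SD >= 24 respiratory
    units, Student's t-test p <= 0.03, and effect size >= 0.50.
    """
    keep = []
    for j in range(resistant.shape[1]):
        a, b = resistant[:, j], sensitive[:, j]
        if np.concatenate([a, b]).std(ddof=1) < 24:
            continue                      # filter i: overall spread
        _, p = stats.ttest_ind(a, b)
        if p > 0.03:
            continue                      # filter ii: significance
        pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        if abs(a.mean() - b.mean()) / pooled < 0.50:
            continue                      # filter iii: effect size
        keep.append(j)
    return keep
```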

  4. Evidence for repetitive load in the trapezius muscle during a tapping task.

    PubMed

    Tomatis, L; Müller, C; Nakaseko, M; Läubli, T

    2012-08-01

    Many studies describe the trapezius muscle activation pattern during repetitive key-tapping focusing on continuous activation. The objectives of this study were to determine whether the upper trapezius is phasically active during supported key tapping, whether this activity is cross-correlated with forearm muscle activity, and whether trapezius activity depends on key characteristics. Thirteen subjects (29.7 ± 11.4 years) were tested. Surface EMG of the finger extensor and flexor muscles and of the trapezius muscle, as well as the key on-off signal, was recorded while the subjects performed a 2-min session of key tapping at 4 Hz. The linear envelopes obtained were cut into single tapping cycles extending from one onset signal to the next and subsequently time-normalized. The effect size between the mean range and the maximal standard deviation was calculated to determine whether a burst of trapezius muscle activation was present. Cross-correlation was used to determine the time-lag of the activity bursts between forearm and trapezius muscles. For each person the mean and standard deviation of the cross-correlation coefficients between forearm muscles and trapezius were determined. Results showed a burst of activation in the trapezius muscle during most of the tapping cycles. The calculated effect size was ≥0.5 in 67% of the cases. Cross-correlation factors between forearm and trapezius muscle activity were between 0.75 and 0.98 for both extensor and flexor muscles. The cross-correlated phasic trapezius activity did not depend on key characteristics. The trapezius muscle was dynamically active during key tapping; its activity was clearly correlated with forearm muscle activity.
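
    As an illustration of the cross-correlation step, this sketch returns the peak normalized cross-correlation coefficient and its time lag for two linear envelopes. The sign convention and all names are assumptions for illustration.

```python
import numpy as np

def xcorr_peak(x: np.ndarray, y: np.ndarray, fs: float):
    """Peak normalized cross-correlation and its time lag.

    x, y: EMG linear envelopes of equal length (e.g. one
    time-normalized tapping cycle); fs: sampling rate in Hz.
    A positive lag means features in x occur later than in y.
    """
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    r = np.correlate(x, y, mode="full")   # coefficients in [-1, 1]
    k = int(r.argmax())
    lag = (k - (len(x) - 1)) / fs
    return r[k], lag

# Toy example: trapezius burst delayed 10 ms behind the flexor burst
fs = 1000.0
t = np.arange(0, 0.25, 1 / fs)
flexor = np.exp(-((t - 0.100) ** 2) / (2 * 0.010**2))
trapezius = np.exp(-((t - 0.110) ** 2) / (2 * 0.010**2))
print(xcorr_peak(trapezius, flexor, fs))  # r ~ 1.0, lag ~ +0.010 s
```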

  5. The impact of inter-fraction dose variations on biological equivalent dose (BED): the concept of equivalent constant dose.

    PubMed

    Zavgorodni, S

    2004-12-07

    Inter-fraction dose fluctuations, which appear as a result of setup errors, organ motion and treatment machine output variations, may influence the radiobiological effect of the treatment even when the total delivered physical dose remains constant. The effect of these inter-fraction dose fluctuations on the biological effective dose (BED) has been investigated. Analytical expressions for the BED accounting for the dose fluctuations have been derived. The concept of biological effective constant dose (BECD) has been introduced. The equivalent constant dose (ECD), representing the constant physical dose that provides the same cell survival fraction as the fluctuating dose, has also been introduced. The dose fluctuations with Gaussian as well as exponential probability density functions were investigated. The values of BECD and ECD calculated analytically were compared with those derived from Monte Carlo modelling. The agreement between Monte Carlo modelled and analytical values was excellent (within 1%) for a range of dose standard deviations (0-100% of the dose) and the number of fractions (2 to 37) used in the comparison. The ECDs have also been calculated for conventional radiotherapy fields. The analytical expression for the BECD shows that BECD increases linearly with the variance of the dose. The effect is relatively small, and in the flat regions of the field it results in less than 1% increase of ECD. In the penumbra region of the 6 MV single radiotherapy beam the ECD exceeded the physical dose by up to 35%, when the standard deviation of combined patient setup/organ motion uncertainty was 5 mm. Equivalently, the ECD field was approximately 2 mm wider than the physical dose field. The difference between ECD and the physical dose is greater for normal tissues than for tumours.
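
    A plausible reconstruction (not the paper's exact derivation) of why the biological effective constant dose grows linearly with the dose variance: start from the standard linear-quadratic BED for n fractions with delivered doses d_i = d̄ + ε_i, E[ε_i] = 0 and Var[ε_i] = σ², and take the expectation using E[d_i²] = d̄² + σ².

```latex
\begin{align}
  \mathrm{BED} &= \sum_{i=1}^{n} d_i\!\left(1 + \frac{d_i}{\alpha/\beta}\right)
      = \sum_{i=1}^{n} d_i + \frac{\sum_{i=1}^{n} d_i^{2}}{\alpha/\beta}, \\
  E[\mathrm{BED}] &= n\bar{d}\!\left(1 + \frac{\bar{d}}{\alpha/\beta}\right)
      + \frac{n\sigma^{2}}{\alpha/\beta},
\end{align}
% i.e. the constant-dose BED plus a correction linear in the
% variance sigma^2, consistent with the abstract's statement.
```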

  6. On the Skill of Balancing While Riding a Bicycle

    PubMed Central

    Cain, Stephen M.; Ashton-Miller, James A.; Perkins, Noel C.

    2016-01-01

    Humans have ridden bicycles for over 200 years, yet there are no continuous measures of how skill differs between novice and expert. To address this knowledge gap, we measured the dynamics of human bicycle riding in 14 subjects, half of whom were skilled and half were novice. Each subject rode an instrumented bicycle on training rollers at speeds ranging from 1 to 7 m/s. Steer angle and rate, steer torque, bicycle speed, and bicycle roll angle and rate were measured and steering power calculated. A force platform beneath the roller assembly measured the net force and moment that the bicycle, rider and rollers exerted on the floor, enabling calculations of the lateral positions of the system centers of mass and pressure. Balance performance was quantified by cross-correlating the lateral positions of the centers of mass and pressure. The results show that all riders exhibited similar balance performance at the slowest speed. However at higher speeds, the skilled riders achieved superior balance performance by employing more rider lean control (quantified by cross-correlating rider lean angle and bicycle roll angle) and less steer control (quantified by cross-correlating steer rate and bicycle roll rate) than did novice riders. Skilled riders also used smaller steering control input with less variation (measured by average positive steering power and standard deviations of steer angle and rate) and less rider lean angle variation (measured by the standard deviation of the rider lean angle) independent of speed. We conclude that the reduction in balance control input by skilled riders is not due to reduced balance demands but rather to more effective use of lean control to guide the center of mass via center of pressure movements. PMID:26910774

  7. Investigation of imaging properties for submillimeter rectangular pinholes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Dan; Moore, Stephen C.; Park, Mi-Ae

    Purpose: Recently, a multipinhole collimator with inserts that have both rectangular apertures and rectangular fields of view (FOVs) has been proposed for SPECT imaging, since it can tile the projection onto the detector efficiently and the FOVs in the transverse and axial directions become separable. The purpose of this study is to investigate the imaging properties of rectangular-aperture pinholes with submillimeter aperture sizes. Methods: In this work, the authors have conducted sensitivity and FOV experiments for 18 replicates of a prototype insert fabricated in platinum/iridium (Pt/Ir) alloy with submillimeter square apertures. A sin^q(θ) fit to the experimental sensitivity has been performed for these inserts. For the FOV measurement, the authors have proposed a new formula to calculate the projection intensity of a flood image on the detector, taking into account the penumbra effect. By fitting this formula to the measured projection data, the authors obtained the acceptance angles. Results: The mean (standard deviation) of the fitted sensitivity exponents q and effective edge lengths w_e were, respectively, 10.8 (1.8) and 0.38 mm (0.02 mm), which were close to the values, 7.84 and 0.396 mm, obtained from Monte Carlo calculations using the parameters of the designed inserts. For the FOV measurement, the mean (standard deviation) of the transverse and axial acceptances were 35.0° (1.2°) and 30.5° (1.6°), which are in good agreement with the designed values (34.3° and 29.9°). Conclusions: These results showed that the physical properties of the fabricated inserts with submillimeter aperture size matched our design well.

  8. The use of linear expressions of solute boiling point versus retention to indicate special interactions with the molecular rings of modified cyclodextrin phases in gas chromatography

    PubMed

    Betts

    2000-08-01

    The boiling points (degrees C, 1 x 10) of diverse C10 polar solutes from volatile oils are set against their relative retention times versus n-undecane to calculate linear equations for 12 commercial modified cyclodextrin (CD) capillary phases. Ten data points are considered for each CD, then solutes are rejected until 5 or more remain that give an expression with a correlation coefficient of at least 0.990 and a standard deviation of less than 5.5. Three phases give almost perfect correlation, and 3 other CDs have difficulty complying. Solutes involved in the equations (most frequently cuminal, linalol, and carvone) are presumed to have a 'standard' polar transient interaction with the molecular rings of the CDs concerned. Several remaining solutes (mostly citral, fenchone, and menthol) exhibit extra retention over the calculated standard (up to 772%), which is believed to indicate a firm 'host' CD or 'guest' solute molecular fit in some cases. Other solutes show less retention than calculated (mostly citronellal, citronellol, estragole, and pulegone). This suggests rejection by the CD, which behaves merely as a conventional stationary phase to them. The intercept constant in the equation for each phase is suggested to be a numerical relative polarity indicator. These b values indicate that 3 hydroxypropyl CDs show the most polarity with values from 28 to 43; and CDs that are fully substituted with inert groups fall in the range of 15 to 20.
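
    The fit-then-reject procedure can be sketched as follows; the worst-residual-first rejection order is an assumption, since the abstract only states that solutes are rejected until the criteria (r ≥ 0.990, residual SD < 5.5, at least 5 solutes) are met. Names and synthetic data are illustrative.

```python
import numpy as np
from scipy import stats

def fit_with_rejection(bp, rrt, min_r=0.990, max_sd=5.5, min_n=5):
    """Boiling point vs. relative retention line with solute rejection.

    bp: boiling points (deg C x 10); rrt: relative retention times
    vs. n-undecane.  Solutes are dropped one at a time (worst
    residual first) until the fit meets the stated criteria.
    """
    idx = list(range(len(bp)))
    while len(idx) >= min_n:
        res = stats.linregress(rrt[idx], bp[idx])
        resid = bp[idx] - (res.intercept + res.slope * rrt[idx])
        if abs(res.rvalue) >= min_r and resid.std(ddof=1) < max_sd:
            return res, idx          # solutes with 'standard' interaction
        idx.pop(int(np.abs(resid).argmax()))  # reject the largest outlier
    return None, idx

rng = np.random.default_rng(5)
rrt = np.linspace(0.8, 2.4, 10)
bp = 1800 + 350 * rrt + rng.normal(0, 3, 10)
bp[4] += 60                          # one 'extra retention' solute
res, kept = fit_with_rejection(bp, rrt)
print(len(kept), round(res.rvalue, 4))  # 9 solutes kept, r ~ 0.999
```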

  9. Selection and Classification Using a Forecast Applicant Pool.

    ERIC Educational Resources Information Center

    Hendrix, William H.

    The document presents a forecast model of the future Air Force applicant pool. By forecasting applicants' quality (means and standard deviations of aptitude scores) and quantity (total number of applicants), a potential enlistee could be compared to the forecasted pool. The data used to develop the model consisted of means, standard deviation, and…

  10. Test of the principle of operation of a wideband magnetic direction finder for lightning return strokes

    NASA Technical Reports Server (NTRS)

    Herrman, B. D.; Uman, M. A.; Brantley, R. D.; Krider, E. P.

    1976-01-01

    The principle of operation of a wideband crossed-loop magnetic-field direction finder is studied by comparing the bearing determined from the NS and EW magnetic fields at various times up to 155 microsec after return stroke initiation with the TV-determined lightning channel base direction. For 40 lightning strokes in the 3 to 12 km range, the difference between the bearings found from magnetic fields sampled at times between 1 and 10 microsec and the TV channel-base data has a standard deviation of 3-4 deg. Included in this standard deviation is a 2-3 deg measurement error. For fields sampled at progressively later times, both the mean and the standard deviation of the difference between the direction-finder bearing and the TV bearing increase. Near 150 microsec, means are about 35 deg and standard deviations about 60 deg. The physical reasons for the late-time inaccuracies in the wideband direction finder and the occurrence of these effects in narrow-band VLF direction finders are considered.

  11. Wavelength selection method with standard deviation: application to pulse oximetry.

    PubMed

    Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija

    2011-07-01

    Near-infrared spectroscopy provides useful biological information after the radiation has penetrated through the tissue, within the therapeutic window. One significant shortcoming of current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to his health status, which, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that allows low noise sensitivity. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of standard deviation that we propose as a figure of merit in the presence of the noise introduced by the living subject. Even in the presence of diverse sources of noise, we identify four wavelength domains whose standard deviation is minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.

  12. How random is a random vector?

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2015-12-01

    Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation", the square root of the generalized variance, is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index", a derivative of the Wilks standard deviation, is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams": tangible planar visualizations that answer the question: how random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.
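
    A minimal numerical illustration of the Wilks standard deviation (the square root of the generalized variance). The correlated example shows how correlation shrinks it below the product of the component standard deviations; the data are synthetic.

```python
import numpy as np

def wilks_std(samples: np.ndarray) -> float:
    """Wilks standard deviation of a random vector.

    samples: (n_observations, d) empirical data.  The generalized
    variance is det(Cov); its square root is the Wilks SD.
    """
    cov = np.cov(samples, rowvar=False)
    return float(np.sqrt(np.linalg.det(cov)))

rng = np.random.default_rng(4)
z = rng.normal(size=(10000, 2))                   # independent components
print(wilks_std(z))                               # ~1.0
mixed = z @ np.array([[1.0, 0.9], [0.0, 0.436]])  # correlated pair,
print(wilks_std(mixed))                           # ~0.44 < product of SDs (~1.0)
```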

  13. Estimation of Tooth Size Discrepancies among Different Malocclusion Groups.

    PubMed

    Hasija, Narender; Bala, Madhu; Goyal, Virender

    2014-05-01

    Regards and Tribute: Late Dr Narender Hasija was a mentor and visionary in the light of knowledge and experience. We pay our regards with deepest gratitude to the departed soul; may it rest in peace. Bolton's ratios help in estimating overbite and overjet relationships, the effects of contemplated extractions on posterior occlusion, incisor relationships, and the identification of occlusal misfit produced by tooth size discrepancies. To determine any difference in tooth size discrepancy in the anterior as well as the overall ratio in different malocclusions and to compare with Bolton's study. After measuring the teeth of all 100 patients, Bolton's analysis was performed. Results were compared with Bolton's means and standard deviations. The results were also subjected to statistical analysis. Results show that the means and standard deviations of ideal occlusion cases are comparable with those of Bolton but, when the means and standard deviations of the malocclusion groups are compared with those of Bolton, the values of standard deviation are higher, though the means are comparable. How to cite this article: Hasija N, Bala M, Goyal V. Estimation of Tooth Size Discrepancies among Different Malocclusion Groups. Int J Clin Pediatr Dent 2014;7(2):82-85.
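
    For reference, the two Bolton ratios are simple sums of mesiodistal tooth widths. The tooth ordering and the commonly quoted Bolton norms in the comment are assumptions drawn from the orthodontic literature rather than from this article.

```python
def bolton_ratios(mandibular, maxillary):
    """Overall and anterior Bolton ratios from mesiodistal widths.

    mandibular, maxillary: lists of the 12 tooth widths per arch
    (first molar to first molar), with the 6 anterior teeth in
    positions 0-5.  Commonly quoted Bolton norms (mean +/- SD):
    overall 91.3 +/- 1.91 %, anterior 77.2 +/- 1.65 %.
    """
    overall = 100.0 * sum(mandibular) / sum(maxillary)
    anterior = 100.0 * sum(mandibular[:6]) / sum(maxillary[:6])
    return overall, anterior
```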

  14. Association of auricular pressing and heart rate variability in pre-exam anxiety students.

    PubMed

    Wu, Wocao; Chen, Junqi; Zhen, Erchuan; Huang, Huanlin; Zhang, Pei; Wang, Jiao; Ou, Yingyi; Huang, Yong

    2013-03-25

    A total of 30 students scoring between 12 and 20 on the Test Anxiety Scale who had been exhibiting an anxious state > 24 hours, and 30 normal control students were recruited. Indices of heart rate variability were recorded using an Actiheart electrocardiogram recorder at 10 minutes before auricular pressing, in the first half of stimulation and in the second half of stimulation. The results revealed that the standard deviation of all normal to normal intervals and the root mean square of standard deviation of normal to normal intervals were significantly increased after stimulation. The heart rate variability triangular index, very-low-frequency power, low-frequency power, and the ratio of low-frequency to high-frequency power were increased to different degrees after stimulation. Compared with normal controls, the root mean square of standard deviation of normal to normal intervals was significantly increased in anxious students following auricular pressing. These results indicated that auricular pressing can elevate heart rate variability, especially the root mean square of standard deviation of normal to normal intervals in students with pre-exam anxiety.
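
    A minimal sketch of the two time-domain indices the study leans on. Note that the index the abstract calls "root mean square of standard deviation of normal to normal intervals" is conventionally RMSSD, the root mean square of successive NN differences, which is what is computed here; the interval values are made up.

```python
import numpy as np

def sdnn_rmssd(nn_ms: np.ndarray):
    """Time-domain HRV indices from normal-to-normal intervals (ms).

    SDNN: standard deviation of all NN intervals.
    RMSSD: root mean square of successive NN differences.
    """
    sdnn = nn_ms.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(nn_ms) ** 2))
    return sdnn, rmssd

nn = np.array([812, 798, 805, 821, 790, 808, 815], dtype=float)
print(sdnn_rmssd(nn))
```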

  15. Association of auricular pressing and heart rate variability in pre-exam anxiety students

    PubMed Central

    Wu, Wocao; Chen, Junqi; Zhen, Erchuan; Huang, Huanlin; Zhang, Pei; Wang, Jiao; Ou, Yingyi; Huang, Yong

    2013-01-01

    A total of 30 students scoring between 12 and 20 on the Test Anxiety Scale who had been exhibiting an anxious state > 24 hours, and 30 normal control students were recruited. Indices of heart rate variability were recorded using an Actiheart electrocardiogram recorder at 10 minutes before auricular pressing, in the first half of stimulation and in the second half of stimulation. The results revealed that the standard deviation of all normal to normal intervals and the root mean square of standard deviation of normal to normal intervals were significantly increased after stimulation. The heart rate variability triangular index, very-low-frequency power, low-frequency power, and the ratio of low-frequency to high-frequency power were increased to different degrees after stimulation. Compared with normal controls, the root mean square of standard deviation of normal to normal intervals was significantly increased in anxious students following auricular pressing. These results indicated that auricular pressing can elevate heart rate variability, especially the root mean square of standard deviation of normal to normal intervals in students with pre-exam anxiety. PMID:25206734

  16. Efficacy of the Amsler Grid Test in Evaluating Glaucomatous Central Visual Field Defects.

    PubMed

    Su, Daniel; Greenberg, Andrew; Simonson, Joseph L; Teng, Christopher C; Liebmann, Jeffrey M; Ritch, Robert; Park, Sung Chul

    2016-04-01

    To investigate the efficacy of the Amsler grid test in detecting central visual field (VF) defects in glaucoma. Prospective, cross-sectional study. Patients with glaucoma with reliable Humphrey 10-2 Swedish Interactive Threshold Algorithm standard VF on the date of enrollment or within the previous 3 months. Amsler grid tests were performed for each eye and were considered "abnormal" if there was any perceived scotoma with missing or blurry grid lines within the central 10 degrees ("Amsler grid scotoma"). An abnormal 10-2 VF was defined as ≥3 adjacent points at P < 0.01 with at least 1 point at P < 0.005 in the same hemifield on the pattern deviation plot. Sensitivity, specificity, and positive and negative predictive values of the Amsler grid scotoma area were calculated with the 10-2 VF as the clinical reference standard. Among eyes with an abnormal 10-2 VF, regression analyses were performed between the Amsler grid scotoma area and the 10-2 VF parameters (mean deviation [MD], scotoma extent [number of test points with P < 0.01 in total deviation map] and scotoma mean depth [mean sensitivity of test points with P < 0.01 in total deviation map]). Sensitivity, specificity, and positive and negative predictive values of the Amsler grid scotoma area. A total of 106 eyes (53 patients) were included (mean ± standard deviation age, 24-2 MD, and 10-2 MD were 66 ± 12 years, -9.61 ± 8.64 decibels [dB], and -9.75 ± 9.00 dB, respectively). Sensitivity, specificity, and positive and negative predictive values of the Amsler grid test were 68%, 92%, 97%, and 46%, respectively. Sensitivity was 40% in eyes with 10-2 MD better than -6 dB, 58% in eyes with 10-2 MD between -12 and -6 dB, and 92% in eyes with 10-2 MD worse than -12 dB. The area under the receiver operating characteristic curve of the Amsler grid scotoma area was 0.810 (95% confidence interval, 0.723-0.880; P < 0.001). The Amsler grid scotoma area had the strongest relationship with 10-2 MD (quadratic R² = 0.681), followed by 10-2 scotoma extent (quadratic R² = 0.611) and 10-2 scotoma mean depth (quadratic R² = 0.299) (all P < 0.001). The Amsler grid can be used to screen for moderate to severe central vision loss from glaucoma. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
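
    The four reported screening metrics follow from a 2x2 table against the 10-2 VF reference standard. The counts below are illustrative values chosen only because they reproduce the reported 68/92/97/46 % pattern for 106 eyes; they are not the study's actual table.

```python
def screening_metrics(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity, specificity, PPV, NPV from a 2x2 table
    (Amsler grid result vs. 10-2 VF as reference standard)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

print(screening_metrics(tp=55, fp=2, fn=26, tn=23))  # ~68/92/97/46 %
```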

  17. Comparison of beam position calculation methods for application in digital acquisition systems

    NASA Astrophysics Data System (ADS)

    Reiter, A.; Singh, R.

    2018-05-01

    Different approaches to the data analysis of beam position monitors in hadron accelerators are compared adopting the perspective of an analog-to-digital converter in a sampling acquisition system. Special emphasis is given to position uncertainty and robustness against bias and interference that may be encountered in an accelerator environment. In a time-domain analysis of data in the presence of statistical noise, the position calculation based on the difference-over-sum method with algorithms like signal integral or power can be interpreted as a least-squares analysis of a corresponding fit function. This link to the least-squares method is exploited in the evaluation of analysis properties and in the calculation of position uncertainty. In an analytical model and experimental evaluations the positions derived from a straight line fit or equivalently the standard deviation are found to be the most robust and to offer the least variance. The measured position uncertainty is consistent with the model prediction in our experiment, and the results of tune measurements improve significantly.

  18. Tools for Basic Statistical Analysis

    NASA Technical Reports Server (NTRS)

    Luz, Paul L.

    2005-01-01

    Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from Two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curve-fit data to the linear equation y=f(x) and will do an ANOVA to check its significance.

  19. Uncertainty analysis of absorbed dose calculations from thermoluminescence dosimeters.

    PubMed

    Kirby, T H; Hanson, W F; Johnston, D A

    1992-01-01

    Thermoluminescence dosimeters (TLD) are widely used to verify absorbed doses delivered from radiation therapy beams. Specifically, they are used by the Radiological Physics Center (RPC) for mailed dosimetry for verification of therapy machine output. The effects of the random experimental uncertainties of various factors on dose calculations from TLD signals are examined, including: fading, dose response nonlinearity, and energy response corrections; reproducibility of TL signal measurements; and TLD reader calibration. Individual uncertainties are combined to estimate the total uncertainty due to random fluctuations. The RPC's mail-out TLD system, utilizing throwaway LiF powder to monitor high-energy photon and electron beam outputs, is analyzed in detail. The technique may also be applicable to other TLD systems. It is shown that statements of +/- 2% dose uncertainty and a +/- 5% action criterion for TLD dosimetry are reasonable when related to uncertainties in the dose calculations, provided the standard deviation (s.d.) of TL readings is 1.5% or better.
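
    A sketch of the usual quadrature combination behind such total-uncertainty statements: independent random components (each 1 s.d., in percent) add in quadrature. The individual component values are assumed for illustration and are not the RPC's published budget.

```python
import math

def combined_relative_uncertainty(components_percent):
    """Combine independent random uncertainty components (1 s.d.,
    in percent) in quadrature."""
    return math.sqrt(sum(c * c for c in components_percent))

# Assumed component list: fading, nonlinearity, energy response,
# TL reading reproducibility, reader calibration
print(round(combined_relative_uncertainty([0.8, 0.5, 0.7, 1.5, 0.8]), 2))
# ~2.1 %, of the order of the +/- 2% statement above
```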

  20. Sample size calculation in economic evaluations.

    PubMed

    Al, M J; van Hout, B A; Michel, B C; Rutten, F F

    1998-06-01

    A simulation method is presented for sample size calculation in economic evaluations. As input the method requires: the expected difference and variance of costs and effects, their correlation, the significance level (alpha) and the power of the testing method and the maximum acceptable ratio of incremental effectiveness to incremental costs. The method is illustrated with data from two trials. The first compares primary coronary angioplasty with streptokinase in the treatment of acute myocardial infarction, in the second trial, lansoprazole is compared with omeprazole in the treatment of reflux oesophagitis. These case studies show how the various parameters influence the sample size. Given the large number of parameters that have to be specified in advance, the lack of knowledge about costs and their standard deviation, and the difficulty of specifying the maximum acceptable ratio of incremental effectiveness to incremental costs, the conclusion of the study is that from a technical point of view it is possible to perform a sample size calculation for an economic evaluation, but one should wonder how useful it is.

  1. Measurement System Analyses - Gauge Repeatability and Reproducibility Methods

    NASA Astrophysics Data System (ADS)

    Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej

    2018-02-01

    The submitted article focuses on a detailed explanation of the average and range method (Automotive Industry Action Group, Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and their results were compared on the basis of numerical evaluation. The two methods were additionally compared and their advantages and disadvantages were discussed. One difference between the two methods is the calculation of variation components. The AIAG method calculates the variation components based on standard deviation (so the sum of the variation components does not give 100%), whereas the honest GRR study calculates the variation components based on variance, where the sum of all variation components (part-to-part variation, EV & AV) gives the total variation of 100%. Acceptance of both methods among the professional society, future use, and acceptance by the manufacturing industry were also discussed. Nowadays, the AIAG method is the leading method in the industry.
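
    The difference the article highlights can be shown in a few lines: variance-based shares sum to exactly 100 %, while shares of the total standard deviation do not. Component names and values are illustrative assumptions.

```python
def percent_contributions(ev, av, pv):
    """Contrast std-based (AIAG-style) and variance-based shares.

    ev, av, pv: equipment, appraiser, and part-to-part standard
    deviations.
    """
    total_var = ev**2 + av**2 + pv**2
    total_sd = total_var ** 0.5
    sd_based = [100 * s / total_sd for s in (ev, av, pv)]
    var_based = [100 * s * s / total_var for s in (ev, av, pv)]
    return sd_based, var_based

sd_pct, var_pct = percent_contributions(ev=0.5, av=0.3, pv=1.2)
print(sum(sd_pct), sum(var_pct))  # ~149.9 vs exactly 100.0
```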

  2. Measurement of the mass energy-absorption coefficient of air for x-rays in the range from 3 to 60 keV.

    PubMed

    Buhr, H; Büermann, L; Gerlach, M; Krumrey, M; Rabus, H

    2012-12-21

    For the first time the absolute photon mass energy-absorption coefficient of air in the energy range of 10 to 60 keV has been measured with relative standard uncertainties below 1%, considerably smaller than those of up to 2% assumed for calculated data. For monochromatized synchrotron radiation from the electron storage ring BESSY II, both the radiant power and the fraction of power deposited in dry air were measured using a cryogenic electrical substitution radiometer and a free-air ionization chamber, respectively. The measured absorption coefficients were compared with state-of-the-art calculations and showed an average deviation of 2% from calculations by Seltzer. However, they agree within 1% with data calculated earlier by Hubbell. In the course of this work, an improvement of the data analysis of a previous experimental determination of the mass energy-absorption coefficient of air in the range of 3 to 10 keV was found to be possible, and corrected values of this preceding study are given.

  3. Calculation of Weibull strength parameters and Batdorf flow-density constants for volume- and surface-flaw-induced fracture in ceramics

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Gyekenyesi, John P.

    1988-01-01

    The calculation of shape and scale parameters of the two-parameter Weibull distribution is described using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It also shows how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature, and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.
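
    As a rough modern illustration of the maximum likelihood step (not the SCARE program itself), SciPy can fit the two-parameter Weibull shape and scale with the location fixed at zero and return the implied mean and standard deviation. The strength values are made up.

```python
import numpy as np
from scipy import stats

# Maximum-likelihood estimate of the two-parameter Weibull shape (m)
# and scale (sigma_0) from fracture strengths, location fixed at zero.
strengths = np.array([307., 325., 341., 350., 362., 371., 380.,
                      393., 405., 421.])  # MPa, illustrative only
m, loc, scale = stats.weibull_min.fit(strengths, floc=0)
print(f"shape m = {m:.1f}, scale sigma_0 = {scale:.0f} MPa")

# Weibull mean and standard deviation from the fitted parameters
mean, var = stats.weibull_min.stats(m, loc=0, scale=scale, moments="mv")
print(f"mean = {mean:.0f} MPa, std = {np.sqrt(var):.0f} MPa")
```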

  4. Offshore fatigue design turbulence

    NASA Astrophysics Data System (ADS)

    Larsen, Gunner C.

    2001-07-01

    Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.

  5. Estimating extreme stream temperatures by the standard deviate method

    NASA Astrophysics Data System (ADS)

    Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz

    2006-02-01

    It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to raise stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (standard deviate). Various KE values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE = 1.0 (range 0.5-1.5) and an error in projected high air temperature dTa = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs = 0.8 °C.
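
    The standard deviate estimate itself is one line; a sketch with assumed series statistics (the mean and standard deviation below are not from the paper):

```python
def extreme_stream_temperature(mean_tmax: float, sd_tmax: float,
                               k_e: float = 7.5) -> float:
    """Standard deviate estimate of the extreme stream temperature.

    T_extreme = mean + K_E * sd of the partial maximum temperature
    series; the study argues K_E should lie between 7 and 8.
    """
    return mean_tmax + k_e * sd_tmax

print(extreme_stream_temperature(mean_tmax=24.0, sd_tmax=1.2))  # 33.0 degC
```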

  6. Herschel Extreme Lensing Line Observations: Dynamics of Two Strongly Lensed Star-Forming Galaxies near Redshift z=2*

    NASA Technical Reports Server (NTRS)

    Rhoads, James E.; Rigby, Jane Rebecca; Malhotra, Sangeeta; Allam, Sahar; Carilli, Chris; Combes, Francoise; Finkelstein, Keely; Finkelstein, Steven; Frye, Brenda; Gerin, Maryvonne

    2014-01-01

    We report on two regularly rotating galaxies at redshift z ≈ 2, using high-resolution spectra of the bright [C II] 158 μm emission line from the HIFI instrument on the Herschel Space Observatory. Both SDSS090122.37+181432.3 ("S0901") and SDSSJ120602.09+514229.5 ("the Clone") are strongly lensed and show the double-horned line profile that is typical of rotating gas disks. Using a parametric disk model to fit the emission line profiles, we find that S0901 has a rotation speed of v sin(i) ≈ 120 ± 7 km s⁻¹ and a gas velocity dispersion of σg < 23 km s⁻¹ (1σ). The best-fitting model for the Clone is a rotationally supported disk having v sin(i) ≈ 79 ± 11 km s⁻¹ and σg < 4 km s⁻¹ (1σ). However, the Clone is also consistent with a family of dispersion-dominated models having σg = 92 ± 20 km s⁻¹. Our results showcase the potential of the [C II] line as a kinematic probe of high-redshift galaxy dynamics: [C II] is bright, accessible to heterodyne receivers with exquisite velocity resolution, and traces dense star-forming interstellar gas. Future [C II] line observations with ALMA would offer the further advantage of spatial resolution, allowing a clearer separation between rotation and velocity dispersion.

  7. Developing a phenomenological model of the proton trajectory within a heterogeneous medium required for proton imaging.

    PubMed

    Fekete, Charles-Antoine Collins; Doolan, Paul; Dias, Marta F; Beaulieu, Luc; Seco, Joao

    2015-07-07

    To develop an accurate phenomenological model of the cubic spline path estimate of the proton path, accounting for the initial proton energy and water equivalent thickness (WET) traversed. Monte Carlo (MC) simulations were used to calculate the path of protons crossing various WET (10-30 cm) of different materials (LN300, water and CB2-50% CaCO3) for a range of initial energies (180-330 MeV). For each MC trajectory, cubic spline trajectories (CST) were constructed based on the entrance and exit information of the protons and compared with the MC using the root mean square (RMS) metric. The CST path is dependent on the direction vector magnitudes (|P0,1|). First, |P0,1| is set to the proton path length (with factor Λ(Norm)(0,1) = 1.0). Then, two optimal factors Λ(0,1) are introduced in |P0,1|. The factors are varied to minimize the RMS difference with MC paths for every configuration. A set of Λ(opt)(0,1) factors, expressed as a function of the WET/water equivalent path length (WEPL) ratio, that minimizes the RMS is presented. MTF analysis is then performed on proton radiographs of a line-pair phantom reconstructed using the CST trajectories. Λ(opt)(0,1) was fitted to the WET/WEPL ratio using a quadratic function (Y = A + BX^2, where A = 1.01, 0.99 and B = 0.43, -0.46, respectively, for Λ(opt)(0) and Λ(opt)(1)). The RMS deviation calculated along the path, between the CST and the MC, increases with the WET. The increase is larger when using Λ(Norm)(0,1) than Λ(opt)(0,1) (difference of 5.0% with WET/WEPL = 0.66). For 230/330 MeV protons, the MTF10% was found to increase by 40/16% respectively for a thin phantom (15 cm) when using the Λ(opt)(0,1) model compared to the Λ(Norm)(0,1) model. Calculation times for Λ(opt)(0,1) are scaled down compared to MLP, and RMS deviations are similar within one standard deviation. Based on the results of this study, using CST with the Λ(opt)(0,1) factors reduces the RMS deviation and increases the spatial resolution when reconstructing proton trajectories.
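
    A sketch of how such a trajectory could be assembled from the entrance/exit information and the quadratic Λ fits quoted above (only the A and B coefficients come from the abstract; the function and argument names are illustrative):

        import numpy as np

        def lambda_opt(wet_over_wepl):
            """Quadratic fits reported above: Lambda = A + B * (WET/WEPL)**2,
            with A = 1.01, 0.99 and B = 0.43, -0.46 for Lambda_0, Lambda_1."""
            x = wet_over_wepl
            return 1.01 + 0.43 * x**2, 0.99 - 0.46 * x**2

        def cst_path(p_in, d_in, p_out, d_out, path_len, wet_over_wepl, n=100):
            """Cubic Hermite spline between entry and exit points, with tangent
            magnitudes |P_0|, |P_1| set to Lambda * path length."""
            l0, l1 = lambda_opt(wet_over_wepl)
            p0, p1 = np.asarray(p_in, float), np.asarray(p_out, float)
            m0 = l0 * path_len * np.asarray(d_in, float)   # entry tangent
            m1 = l1 * path_len * np.asarray(d_out, float)  # exit tangent
            t = np.linspace(0.0, 1.0, n)[:, None]
            return ((2*t**3 - 3*t**2 + 1) * p0 + (t**3 - 2*t**2 + t) * m0
                    + (-2*t**3 + 3*t**2) * p1 + (t**3 - t**2) * m1)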

  8. Associations between environmental factors and hospital admissions for sickle cell disease

    PubMed Central

    Piel, Frédéric B.; Tewari, Sanjay; Brousse, Valentine; Analitis, Antonis; Font, Anna; Menzel, Stephan; Chakravorty, Subarna; Thein, Swee Lay; Inusa, Baba; Telfer, Paul; de Montalembert, Mariane; Fuller, Gary W.; Katsouyanni, Klea; Rees, David C.

    2017-01-01

    Sickle cell disease is an increasing global health burden. This inherited disease is characterized by a remarkable phenotypic heterogeneity, which can only partly be explained by genetic factors. Environmental factors are likely to play an important role but studies of their impact on disease severity are limited and their results are often inconsistent. This study investigated associations between a range of environmental factors and hospital admissions of young patients with sickle cell disease in London and in Paris between 2008 and 2012. Specific analyses were conducted for subgroups of patients with different genotypes and for the main reasons for admissions. Generalized additive models and distributed lag non-linear models were used to assess the magnitude of the associations and to calculate relative risks. Some environmental factors significantly influence the numbers of hospital admissions of children with sickle cell disease, although the associations identified are complicated. Our study suggests that meteorological factors are more likely to be associated with hospital admissions for sickle cell disease than air pollutants. It confirms previous reports of risks associated with wind speed (risk ratio: 1.06/standard deviation; 95% confidence interval: 1.00–1.12) and also with rainfall (1.06/standard deviation; 95% confidence interval: 1.01–1.12). Maximum atmospheric pressure was found to be a protective factor (0.93/standard deviation; 95% confidence interval: 0.88–0.99). Weak or no associations were found with temperature. Divergent associations were identified for different genotypes or reasons for admissions, which could partly explain the lack of consistency in earlier studies. Advice to patients with sickle cell disease usually includes avoiding a range of environmental conditions that are believed to trigger acute complications, including extreme temperatures and high altitudes. Scientific evidence to support such advice is limited and sometimes confusing. This study shows that environmental factors do explain some of the variations in rates of admission to hospital with acute symptoms in sickle cell disease, but the associations are complex, and likely to be specific to different environments and the individual’s exposure to them. Furthermore, this study highlights the need for prospective studies with large numbers of patients and standardized protocols across Europe. PMID:27909222
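
    The risk ratios above are expressed per one standard deviation of the exposure, i.e. exp(beta) from a count regression on a standardized covariate. A minimal sketch of that calculation on synthetic data (the study itself used generalized additive and distributed lag non-linear models, which are not reproduced here):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        wind = rng.gamma(2.0, 2.0, 365)               # hypothetical daily wind speed
        z = (wind - wind.mean()) / wind.std()         # standardize: effect per SD
        counts = rng.poisson(np.exp(0.5 + 0.06 * z))  # synthetic daily admissions
        fit = sm.GLM(counts, sm.add_constant(z),
                     family=sm.families.Poisson()).fit()
        rr_per_sd = np.exp(fit.params[1])             # risk ratio per SD, ~1.06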

  9. Associations between environmental factors and hospital admissions for sickle cell disease.

    PubMed

    Piel, Frédéric B; Tewari, Sanjay; Brousse, Valentine; Analitis, Antonis; Font, Anna; Menzel, Stephan; Chakravorty, Subarna; Thein, Swee Lay; Inusa, Baba; Telfer, Paul; de Montalembert, Mariane; Fuller, Gary W; Katsouyanni, Klea; Rees, David C

    2017-04-01

    Sickle cell disease is an increasing global health burden. This inherited disease is characterized by a remarkable phenotypic heterogeneity, which can only partly be explained by genetic factors. Environmental factors are likely to play an important role but studies of their impact on disease severity are limited and their results are often inconsistent. This study investigated associations between a range of environmental factors and hospital admissions of young patients with sickle cell disease in London and in Paris between 2008 and 2012. Specific analyses were conducted for subgroups of patients with different genotypes and for the main reasons for admissions. Generalized additive models and distributed lag non-linear models were used to assess the magnitude of the associations and to calculate relative risks. Some environmental factors significantly influence the numbers of hospital admissions of children with sickle cell disease, although the associations identified are complicated. Our study suggests that meteorological factors are more likely to be associated with hospital admissions for sickle cell disease than air pollutants. It confirms previous reports of risks associated with wind speed (risk ratio: 1.06/standard deviation; 95% confidence interval: 1.00-1.12) and also with rainfall (1.06/standard deviation; 95% confidence interval: 1.01-1.12). Maximum atmospheric pressure was found to be a protective factor (0.93/standard deviation; 95% confidence interval: 0.88-0.99). Weak or no associations were found with temperature. Divergent associations were identified for different genotypes or reasons for admissions, which could partly explain the lack of consistency in earlier studies. Advice to patients with sickle cell disease usually includes avoiding a range of environmental conditions that are believed to trigger acute complications, including extreme temperatures and high altitudes. Scientific evidence to support such advice is limited and sometimes confusing. This study shows that environmental factors do explain some of the variations in rates of admission to hospital with acute symptoms in sickle cell disease, but the associations are complex, and likely to be specific to different environments and the individual's exposure to them. Furthermore, this study highlights the need for prospective studies with large numbers of patients and standardized protocols across Europe. Copyright© Ferrata Storti Foundation.

  10. Inter-laboratory Comparison of Three Earplug Fit-test Systems

    PubMed Central

    Byrne, David C.; Murphy, William J.; Krieg, Edward F.; Ghent, Robert M.; Michael, Kevin L.; Stefanson, Earl W.; Ahroon, William A.

    2017-01-01

    The National Institute for Occupational Safety and Health (NIOSH) sponsored tests of three earplug fit-test systems (NIOSH HPD Well-Fit™, Michael & Associates FitCheck, and Honeywell Safety Products VeriPRO®). Each system was compared to laboratory-based real-ear attenuation at threshold (REAT) measurements in a sound field according to ANSI/ASA S12.6-2008 at the NIOSH, Honeywell Safety Products, and Michael & Associates testing laboratories. An identical study was conducted independently at the U.S. Army Aeromedical Research Laboratory (USAARL), which provided their data for inclusion in this report. The Howard Leight Airsoft premolded earplug was tested with twenty subjects at each of the four participating laboratories. The occluded fit of the earplug was maintained during testing with a soundfield-based laboratory REAT system as well as all three headphone-based fit-test systems. The Michael & Associates lab had the highest average A-weighted attenuations and the smallest standard deviations. The NIOSH lab had the lowest average attenuations and the largest standard deviations. Differences in octave-band attenuations between each fit-test system and the American National Standards Institute (ANSI) sound field method were calculated (Atten_fit-test − Atten_ANSI). A-weighted attenuations measured with FitCheck and HPD Well-Fit systems demonstrated approximately ±2 dB agreement with the ANSI sound field method, but A-weighted attenuations measured with the VeriPRO system underestimated the ANSI laboratory attenuations. For each of the fit-test systems, the average A-weighted attenuation across the four laboratories was not significantly greater than the average of the ANSI sound field method. Standard deviations for residual attenuation differences were about ±2 dB for FitCheck and HPD Well-Fit compared to ±4 dB for VeriPRO. Individual labs exhibited a range of agreement with the ANSI REAT estimates, from less than 1 dB to as much as 9.4 dB. Factors such as the experience of study participants and test administrators, and the fit-test psychometric tasks are suggested as possible contributors to the observed results. PMID:27786602
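
    The comparison metric is the per-subject difference Atten_fit-test − Atten_ANSI, summarized by its mean (bias) and standard deviation (spread). A trivial sketch with hypothetical values:

        import numpy as np

        atten_fit = np.array([28.1, 30.5, 25.9, 27.4, 31.0])   # dB, hypothetical
        atten_ansi = np.array([27.0, 29.8, 27.2, 26.9, 30.1])  # dB, hypothetical
        diff = atten_fit - atten_ansi            # Atten_fit-test - Atten_ANSI
        print(diff.mean(), diff.std(ddof=1))     # bias and spread, in dB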

  11. Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data

    NASA Astrophysics Data System (ADS)

    Shulenin, V. P.

    2016-10-01

    Properties of robust estimations of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimation of the average Gini differences have asymptotically normal distributions and bounded influence functions, are B-robust estimations, and hence, unlike the estimation of the standard deviation, are protected from the presence of outliers in the sample. Results of comparison of estimations of the scale parameter are given for a Gaussian model with contamination. An adaptive variant of the modified estimation of the average Gini differences is considered.
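
    Both estimators are simple to compute; the sketch below contrasts the median of absolute deviations (MAD) and the classical Gini mean difference with the standard deviation on a contaminated sample (the Gaussian-consistency factor 1.4826 for the MAD is standard; the data are made up, and the paper's B-robust modified Gini variant is not reproduced here):

        import numpy as np

        def mad_scale(x):
            """Median of absolute deviations, rescaled (factor 1.4826) to be
            consistent with the standard deviation under a Gaussian model."""
            x = np.asarray(x, float)
            return 1.4826 * np.median(np.abs(x - np.median(x)))

        def gini_mean_difference(x):
            """Average absolute difference over all pairs of observations,
            computed in O(n log n) via the order statistics (classical form;
            the paper studies a modified, B-robust variant)."""
            x = np.sort(np.asarray(x, float))
            n = x.size
            i = np.arange(1, n + 1)
            return 2.0 * np.sum((2 * i - n - 1) * x) / (n * (n - 1))

        sample = [9.8, 10.1, 10.0, 9.9, 10.2, 25.0]  # one outlier
        print(np.std(sample, ddof=1))        # inflated by the outlier
        print(mad_scale(sample))             # essentially unaffected
        print(gini_mean_difference(sample))  # between the two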

  12. 40 CFR 63.7751 - What reports must I submit and when?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... deviations from any emissions limitations (including operating limit), work practice standards, or operation and maintenance requirements, a statement that there were no deviations from the emissions limitations...-of-control during the reporting period. (7) For each deviation from an emissions limitation...

  13. CODATA recommended values of the fundamental constants

    NASA Astrophysics Data System (ADS)

    Mohr, Peter J.; Taylor, Barry N.

    2000-11-01

    A review is given of the latest Committee on Data for Science and Technology (CODATA) adjustment of the values of the fundamental constants. The new set of constants, referred to as the 1998 values, replaces the values recommended for international use by CODATA in 1986. The values of the constants, and particularly the Rydberg constant, are of relevance to the calculation of precise atomic spectra. The standard uncertainty (estimated standard deviation) of the new recommended value of the Rydberg constant, which is based on precision frequency metrology and a detailed analysis of the theory, is approximately 1/160 times the uncertainty of the 1986 value. The new set of recommended values as well as a searchable bibliographic database that gives citations to the relevant literature is available on the World Wide Web at physics.nist.gov/constants and physics.nist.gov/constantsbib, respectively.

  14. Hydrologic response to modeled snowmelt input in alpine catchments in the Southwestern United States

    NASA Astrophysics Data System (ADS)

    Driscoll, J. M.; Molotch, N. P.; Jepsen, S. M.; Meixner, T.; Williams, M. W.; Sickman, J. O.

    2012-12-01

    Snowmelt from high elevation catchments is the primary source of water resources in the Southwestern United States. Timing and duration of snowmelt and resulting catchment response can show the physical and chemical importance of storage at the catchment scale. Storage of waters in subsurface materials provides a physical and chemical buffer to hydrologic input variability. We expect the hydrochemistry of catchments with less storage capacity to more closely reflect input waters than that of a catchment with more storage, which allows more geochemical evolution of waters. Two headwater catchments were compared for this study: Emerald Lake Watershed (ELW) in the southern Sierra Nevada and Green Lake 4 (GL4) in the Colorado Front Range. These sites have geochemically similar granitic terrane, and negligible evaporation and transpiration due to their high-elevation setting. Eleven years of data (1996-2006) from spatially-distributed snowmelt models were spatially and temporally aggregated to generate daily values of snowmelt volume for each catchment area. Daily storage flux was calculated as the difference between snowmelt input and catchment outflow at a daily timestep, normalized to the catchment area. Daily snowmelt values in GL4 are more consistent (the annual standard deviation ranged from 0.19 to 0.76 cm) than the daily snowmelt in ELW (0.60 to 1.04 cm). Outflow follows the same trend, with an even narrower range of standard deviations from GL4 (0.27 to 0.54 cm) compared to the standard deviation of outflow in ELW (0.38 to 0.98 cm). The dampening of the input variability could be due to storage in the catchment; a larger effect would imply a larger storage capacity in the catchment. Calculations of storage flux (the input snowmelt minus the output catchment discharge) show the annual sum of water into storage in ELW ranges from -0.9200 to 1.1124 meters; in GL4 the range is narrower, from -0.655 to 0.0992 meters. Cumulative storage for each year can be negative (more water leaving the system than entering; storage loss) or positive (more water coming into the system than leaving; storage gain). The cumulative storage for all years in GL4 shows a similar positive trend from day of year 60 through 150, followed by a decrease to the end of the snowmelt season. Only two years (1997 and 2005) in GL4 were calculated to cumulatively gain storage water; the other nine years lost stored water to outflow. The annual cumulative storage data in ELW do not show as strong a trend across years. ELW also shows a different distribution of cumulative storage values, with four years showing a cumulative loss and seven years showing a gain in stored water. This could show a depletion of stored water, an underestimate of snowmelt, or a connection to deeper flowpaths. Mass-balance inverse geochemical models will be used to determine the hydrochemical connectivity or lack of connectivity of snowmelt to outflow relative to the physical calculations. Initial hydrochemical results show generally higher concentrations of solutes from GL4 outflow, which may show more contribution from stored waters.
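
    A sketch of the storage-flux bookkeeping described above, on a synthetic daily series (all numbers hypothetical): storage flux = snowmelt input − catchment outflow, with cumulative storage as its running sum.

        import numpy as np

        snowmelt = np.array([0.0, 0.4, 0.9, 1.2, 0.8, 0.3, 0.1])  # cm/day
        outflow = np.array([0.1, 0.2, 0.5, 0.9, 1.0, 0.7, 0.4])   # cm/day
        storage_flux = snowmelt - outflow      # >0: storage gain; <0: loss
        cumulative = np.cumsum(storage_flux)   # running storage through season
        print(cumulative[-1])                  # annual sum: net gain or loss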

  15. Sample size, power calculations, and their implications for the cost of thorough studies of drug induced QT interval prolongation.

    PubMed

    Malik, Marek; Hnatkova, Katerina; Batchvarov, Velislav; Gang, Yi; Smetana, Peter; Camm, A John

    2004-12-01

    Regulatory authorities require new drugs to be investigated using a so-called "thorough QT/QTc study" to identify compounds with a potential of influencing cardiac repolarization in man. Presently drafted regulatory consensus requires these studies to be powered for the statistical detection of QTc interval changes as small as 5 ms. Since this translates into a noticeable drug development burden, strategies need to be identified allowing the size and thus the cost of thorough QT/QTc studies to be minimized. This study investigated the influence of QT and RR interval data quality and the precision of heart rate correction on the sample sizes of thorough QT/QTc studies. In 57 healthy subjects (26 women, age range 19-42 years), a total of 4,195 drug-free digital electrocardiograms (ECG) were obtained (65-84 ECGs per subject). All ECG parameters were measured manually using the most accurate approach with reconciliation of measurement differences between different cardiologists and aligning the measurements of corresponding ECG patterns. From the data derived in this measurement process, seven different levels of QT/RR data quality were obtained, ranging from the simplest approach of measuring 3 beats in one ECG lead to the most exact approach. Each of these QT/RR data sets was processed with eight different heart rate corrections ranging from Bazett and Fridericia corrections to the individual QT/RR regression modelling with optimization of QT/RR curvature. For each combination of data quality and heart rate correction, the standard deviation of individual mean QTc values and the mean of individual standard deviations of QTc values were calculated and used to derive the size of thorough QT/QTc studies with an 80% power to detect 5 ms QTc changes at the significance level of 0.05. Irrespective of data quality and heart rate corrections, the necessary sample sizes of studies based on between-subject comparisons (e.g., parallel studies) are very substantial, requiring >140 subjects per group. However, the required study size may be substantially reduced in investigations based on within-subject comparisons (e.g., crossover studies or studies of several parallel groups each crossing over an active treatment with placebo). While simple measurement approaches with ad hoc heart rate correction still lead to requirements of >150 subjects, the combination of the best data quality with the most accurate individualized heart rate correction decreases the variability of QTc measurements in each individual very substantially. In the data of this study, the average of standard deviations of QTc values calculated separately in each individual was only 5.2 ms. Such a variability in QTc data translates to only 18 subjects per study group (e.g., the size of a complete one-group crossover study) to detect a 5 ms QTc change with an 80% power. Cost calculations show that by involving the most stringent ECG handling and measurement, the cost of a thorough QT/QTc study may be reduced to approximately 25%-30% of the cost imposed by the simple ECG reading (e.g., three complexes in one lead only).
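
    The study-size figures above follow from the standard normal-approximation sample-size formula; a textbook sketch, not the paper's exact procedure. Note that a within-subject difference of two measurements has a standard deviation √2 times the single-measurement SD, which links the 5.2 ms figure to roughly 18 subjects:

        import math
        from scipy.stats import norm

        def n_paired(sd_diff, delta=5.0, alpha=0.05, power=0.80):
            """n = ((z_{1-alpha/2} + z_{power}) * sd_diff / delta)**2 for a
            paired (within-subject) comparison of means."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return math.ceil((z * sd_diff / delta) ** 2)

        # Within-subject SD of single QTc measurements: 5.2 ms (from above);
        # an on-treatment minus placebo difference then has SD ~ 5.2 * sqrt(2).
        print(n_paired(5.2 * math.sqrt(2)))  # -> 17, close to the 18 quoted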

  16. Optimization of a middle atmosphere diagnostic scheme

    NASA Astrophysics Data System (ADS)

    Akmaev, Rashid A.

    1997-06-01

    A new assimilative diagnostic scheme based on the use of a spectral model was recently tested on the CIRA-86 empirical model. It reproduced the observed climatology with an annual global rms temperature deviation of 3.2 K in the 15-110 km layer. The most important new component of the scheme is that the zonal forcing necessary to maintain the observed climatology is diagnosed from empirical data and subsequently substituted into the simulation model at the prognostic stage of the calculation in an annual cycle mode. The simulation results are then quantitatively compared with the empirical model, and the above-mentioned rms temperature deviation provides an objective measure of the 'distance' between the two climatologies. This quantitative criterion makes it possible to apply standard optimization procedures to the whole diagnostic scheme and/or the model itself. The estimates of the zonal drag have been improved in this study by introducing a nudging (Newtonian-cooling) term into the thermodynamic equation at the diagnostic stage. A proper optimal adjustment of the strength of this term makes it possible to further reduce the rms temperature deviation of simulations down to approximately 2.7 K. These results suggest that direct optimization can successfully be applied to atmospheric model parameter identification problems of moderate dimensionality.
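
    A nudging (Newtonian-cooling) term has the usual relaxation form; schematically, in our notation rather than the paper's,

        \frac{\partial T}{\partial t} \;=\; \mathcal{F}(T)\;-\;\frac{T - T_{\mathrm{obs}}}{\tau},

    where F(T) collects the model's other thermodynamic tendencies, T_obs is the empirical (CIRA-86) temperature, and the relaxation time τ sets the strength that is adjusted in the optimization.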

  17. Quantitative assessment of 12-lead ECG synthesis using CAVIAR.

    PubMed

    Scherer, J A; Rubel, P; Fayn, J; Willems, J L

    1992-01-01

    The objective of this study is to assess the performance of patient-specific segment-specific (PSSS) synthesis of QRST complexes using CAVIAR, a new method for the serial comparison of electrocardiograms and vectorcardiograms. A collection of 250 multi-lead recordings from the Common Standards for Quantitative Electrocardiography (CSE) diagnostic pilot study is employed. QRS and ST-T segments are independently synthesized using the PSSS algorithm so that the mean-squared error between the original and estimated waveforms is minimized. CAVIAR compares the recorded and synthesized QRS and ST-T segments and calculates the mean-quadratic deviation as a measure of error. The results of this study indicate that estimated QRS complexes are good representatives of their recorded counterparts, and the integrity of the spatial information is maintained by the PSSS synthesis process. Analysis of the ST-T segments suggests that the deviations between recorded and synthesized waveforms are considerably greater than those associated with the QRS complexes. The poorer performance of the ST-T segments is attributed to magnitude normalization of the spatial loops, low-voltage passages, and noise interference. Using the mean-quadratic deviation and CAVIAR as methods of performance assessment, this study indicates that the PSSS-synthesis algorithm accurately maintains the signal information within the 12-lead electrocardiogram.
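
    The mean-quadratic deviation between a recorded and a synthesized segment is straightforward to compute; a minimal sketch (CAVIAR itself also aligns and compares the spatial loops, which is not reproduced here):

        import numpy as np

        def mean_quadratic_deviation(recorded, synthesized):
            """Root of the mean squared difference between two waveform
            segments sampled at the same instants."""
            r = np.asarray(recorded, float)
            s = np.asarray(synthesized, float)
            return np.sqrt(np.mean((r - s) ** 2))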

  18. Comparison of Accuracy Between a Conventional and Two Digital Intraoral Impression Techniques.

    PubMed

    Malik, Junaid; Rodriguez, Jose; Weisbloom, Michael; Petridis, Haralampos

    To compare the accuracy (ie, precision and trueness) of full-arch impressions fabricated using either a conventional polyvinyl siloxane (PVS) material or one of two intraoral optical scanners. Full-arch impressions of a reference model were obtained using addition silicone impression material (Aquasil Ultra; Dentsply Caulk) and two optical scanners (Trios, 3Shape, and CEREC Omnicam, Sirona). Surface matching software (Geomagic Control, 3D Systems) was used to superimpose the scans within groups to determine the mean deviations in precision and trueness (μm) between the scans, which were calculated for each group and compared statistically using one-way analysis of variance with post hoc Bonferroni (trueness) and Games-Howell (precision) tests (IBM SPSS ver 24, IBM UK). Qualitative analysis was also carried out from three-dimensional maps of differences between scans. Means and standard deviations (SD) of deviations in precision for conventional, Trios, and Omnicam groups were 21.7 (± 5.4), 49.9 (± 18.3), and 36.5 (± 11.12) μm, respectively. Means and SDs for deviations in trueness were 24.3 (± 5.7), 87.1 (± 7.9), and 80.3 (± 12.1) μm, respectively. The conventional impression showed statistically significantly improved mean precision (P < .006) and mean trueness (P < .001) compared to both digital impression procedures. There were no statistically significant differences in precision (P = .153) or trueness (P = .757) between the digital impressions. The qualitative analysis revealed local deviations along the palatal surfaces of the molars and incisal edges of the anterior teeth of < 100 μm. Conventional full-arch PVS impressions exhibited improved mean accuracy compared to two direct optical scanners. No significant differences were found between the two digital impression methods.
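
    The two accuracy components come from different comparisons: trueness from scan-versus-reference deviations, precision from scan-versus-scan deviations within a group. A sketch of that arithmetic on synthetic per-point deviation data (the study used dedicated surface-matching software for the superimposition itself):

        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        scans = [rng.normal(0.0, 0.025, 1000) for _ in range(5)]  # mm, synthetic
        reference = np.zeros(1000)
        trueness = [np.mean(np.abs(s - reference)) for s in scans]
        precision = [np.mean(np.abs(a - b))
                     for a, b in itertools.combinations(scans, 2)]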

  19. Characterization of cardiac quiescence from retrospective cardiac computed tomography using a correlation-based phase-to-phase deviation measure

    PubMed Central

    Wick, Carson A.; McClellan, James H.; Arepalli, Chesnal D.; Auffermann, William F.; Henry, Travis S.; Khosa, Faisal; Coy, Adam M.; Tridandapani, Srini

    2015-01-01

    Purpose: Accurate knowledge of cardiac quiescence is crucial to the performance of many cardiac imaging modalities, including computed tomography coronary angiography (CTCA). To accurately quantify quiescence, a method for detecting the quiescent periods of the heart from retrospective cardiac computed tomography (CT) using a correlation-based, phase-to-phase deviation measure was developed. Methods: Retrospective cardiac CT data were obtained from 20 patients (11 male, 9 female, 33–74 yr) and the left main, left anterior descending, left circumflex, right coronary artery (RCA), and interventricular septum (IVS) were segmented for each phase using a semiautomated technique. Cardiac motion of individual coronary vessels as well as the IVS was calculated using phase-to-phase deviation. As an easily identifiable feature, the IVS was analyzed to assess how well it predicts vessel quiescence. Finally, the diagnostic quality of the reconstructed volumes from the quiescent phases determined using the deviation measure from the vessels in aggregate and the IVS was compared to that from quiescent phases calculated by the CT scanner. Three board-certified radiologists, fellowship-trained in cardiothoracic imaging, graded the diagnostic quality of the reconstructions using a Likert response format: 1 = excellent, 2 = good, 3 = adequate, 4 = nondiagnostic. Results: Systolic and diastolic quiescent periods were identified for each subject from the vessel motion calculated using the phase-to-phase deviation measure. The motion of the IVS was found to be similar to the aggregate vessel (AGG) motion. The diagnostic quality of the coronary vessels for the quiescent phases calculated from the aggregate vessel (P_AGG) and IVS (P_IVS) deviation signals using the proposed methods was comparable to that for the quiescent phases calculated by the CT scanner (P_CT). The one exception was the RCA, which improved for P_AGG for 18 of the 20 subjects when compared to P_CT (P_CT = 2.48; P_AGG = 2.07, p = 0.001). Conclusions: A method for quantifying the motion of specific coronary vessels using a correlation-based, phase-to-phase deviation measure was developed and tested on 20 patients receiving cardiac CT exams. The IVS was found to be a suitable predictor of vessel quiescence. The diagnostic quality of the quiescent phases detected by the proposed methods was comparable to those calculated by the CT scanner. The ability to quantify coronary vessel quiescence from the motion of the IVS can be used to develop new CTCA gating techniques and quantify the resulting potential improvement in CTCA image quality. PMID:25652511
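
    A correlation-based phase-to-phase deviation can be sketched as one minus the correlation of voxel intensities between consecutive phases of a segmented region; minima of this signal indicate quiescence (an illustrative reading of the measure named above, not the authors' exact implementation):

        import numpy as np

        def phase_to_phase_deviation(volumes):
            """deviation_k = 1 - corr(phase_k, phase_{k+1}) over the voxel
            intensities of a segmented region, for consecutive CT phases."""
            devs = [1.0 - np.corrcoef(a.ravel(), b.ravel())[0, 1]
                    for a, b in zip(volumes[:-1], volumes[1:])]
            return np.array(devs)  # minima suggest quiescent phases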

  20. Characterization of cardiac quiescence from retrospective cardiac computed tomography using a correlation-based phase-to-phase deviation measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wick, Carson A.; McClellan, James H.; Arepalli, Chesnal D.

    2015-02-15

    Purpose: Accurate knowledge of cardiac quiescence is crucial to the performance of many cardiac imaging modalities, including computed tomography coronary angiography (CTCA). To accurately quantify quiescence, a method for detecting the quiescent periods of the heart from retrospective cardiac computed tomography (CT) using a correlation-based, phase-to-phase deviation measure was developed. Methods: Retrospective cardiac CT data were obtained from 20 patients (11 male, 9 female, 33–74 yr) and the left main, left anterior descending, left circumflex, right coronary artery (RCA), and interventricular septum (IVS) were segmented for each phase using a semiautomated technique. Cardiac motion of individual coronary vessels as well as the IVS was calculated using phase-to-phase deviation. As an easily identifiable feature, the IVS was analyzed to assess how well it predicts vessel quiescence. Finally, the diagnostic quality of the reconstructed volumes from the quiescent phases determined using the deviation measure from the vessels in aggregate and the IVS was compared to that from quiescent phases calculated by the CT scanner. Three board-certified radiologists, fellowship-trained in cardiothoracic imaging, graded the diagnostic quality of the reconstructions using a Likert response format: 1 = excellent, 2 = good, 3 = adequate, 4 = nondiagnostic. Results: Systolic and diastolic quiescent periods were identified for each subject from the vessel motion calculated using the phase-to-phase deviation measure. The motion of the IVS was found to be similar to the aggregate vessel (AGG) motion. The diagnostic quality of the coronary vessels for the quiescent phases calculated from the aggregate vessel (P_AGG) and IVS (P_IVS) deviation signals using the proposed methods was comparable to that for the quiescent phases calculated by the CT scanner (P_CT). The one exception was the RCA, which improved for P_AGG for 18 of the 20 subjects when compared to P_CT (P_CT = 2.48; P_AGG = 2.07, p = 0.001). Conclusions: A method for quantifying the motion of specific coronary vessels using a correlation-based, phase-to-phase deviation measure was developed and tested on 20 patients receiving cardiac CT exams. The IVS was found to be a suitable predictor of vessel quiescence. The diagnostic quality of the quiescent phases detected by the proposed methods was comparable to those calculated by the CT scanner. The ability to quantify coronary vessel quiescence from the motion of the IVS can be used to develop new CTCA gating techniques and quantify the resulting potential improvement in CTCA image quality.
