Sample records for standard deviation average

  1. N2/O2/H2 Dual-Pump CARS: Validation Experiments

    NASA Technical Reports Server (NTRS)

    O'Byrne, S.; Danehy, P. M.; Cutler, A. D.

    2003-01-01

    The dual-pump coherent anti-Stokes Raman spectroscopy (CARS) method is used to measure temperature and the relative species densities of N2, O2 and H2 in two experiments. Average values and root-mean-square (RMS) deviations are determined. Mean temperature measurements in a furnace containing air between 300 and 1800 K agreed with thermocouple measurements within 26 K on average, while mean mole fractions agreed to within 1.6% of the expected value. The temperature measurement standard deviation averaged 64 K, while the standard deviation of the species mole fractions averaged 7.8% for O2 and 3.8% for N2, based on 200 single-shot measurements. Preliminary measurements have also been performed in a flat-flame burner for fuel-lean and fuel-rich flames. Temperature standard deviations of 77 K were measured, and the ratios of H2 to N2 and O2 to N2 had standard deviations from the mean value of 12.3% and 10% of the measured ratio, respectively.
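
The per-shot statistics quoted above (a mean plus an RMS deviation over 200 single-shot measurements) amount to computing the average and the root-mean-square deviation about that average. A minimal sketch in Python; the function name and sample temperatures are illustrative, not the paper's data:

```python
import statistics

def shot_statistics(shots):
    """Mean and RMS deviation about the mean for repeated single-shot measurements."""
    mean = statistics.fmean(shots)
    # Population (RMS) form: average squared deviation over all recorded shots
    rms_dev = (sum((s - mean) ** 2 for s in shots) / len(shots)) ** 0.5
    return mean, rms_dev

# Hypothetical single-shot temperature readings around 1500 K
temps = [1450.0, 1500.0, 1550.0, 1500.0]
mean_T, sigma_T = shot_statistics(temps)
```

With 200 shots per condition, `sigma_T` corresponds to the "standard deviation averaged 64 K" figure reported above.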

  2. A Visual Model for the Variance and Standard Deviation

    ERIC Educational Resources Information Center

    Orris, J. B.

    2011-01-01

    This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
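
The "average square" picture described in this abstract corresponds directly to the population variance: each squared deviation is the area of a square, the variance is the mean of those areas, and the standard deviation is the side length of the average square. A small sketch (the data values are hypothetical):

```python
def variance_as_average_square(data):
    """Population variance viewed as the average area of the squared-deviation squares."""
    mean = sum(data) / len(data)
    squares = [(x - mean) ** 2 for x in data]   # one square per data point
    return sum(squares) / len(squares)          # the "average square"

data = [2, 4, 4, 4, 5, 5, 7, 9]
var = variance_as_average_square(data)
sd = var ** 0.5   # side length of the average square
```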

  3. Flexner 3.0-Democratization of Medical Knowledge for the 21st Century: Teaching Medical Science Using K-12 General Pathology as a Gateway Course.

    PubMed

    Weinstein, Ronald S; Krupinski, Elizabeth A; Weinstein, John B; Graham, Anna R; Barker, Gail P; Erps, Kristine A; Holtrust, Angelette L; Holcomb, Michael J

    2016-01-01

    A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school (F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the students' expectations. One class voted K-12 general pathology their "elective course-of-the-year."

  4. Flexner 3.0—Democratization of Medical Knowledge for the 21st Century

    PubMed Central

    Krupinski, Elizabeth A.; Weinstein, John B.; Graham, Anna R.; Barker, Gail P.; Erps, Kristine A.; Holtrust, Angelette L.; Holcomb, Michael J.

    2016-01-01

    A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school (F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the students’ expectations. One class voted K-12 general pathology their “elective course-of-the-year.” PMID:28725762

  5. Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data

    NASA Astrophysics Data System (ADS)

    Shulenin, V. P.

    2016-10-01

    Properties of robust estimates of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimate of the average Gini difference have asymptotically normal distributions and bounded influence functions, and are B-robust estimates; hence, unlike the standard deviation, they are protected from the presence of outliers in the sample. Results of a comparison of scale-parameter estimates are given for a Gaussian model with contamination. An adaptive variant of the modified estimate of the average Gini difference is considered.
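
The two robust scale measures discussed here (the median of absolute deviations and the Gini mean difference) can be sketched as follows. The estimators are shown unscaled (no Gaussian consistency constants), and the outlier example only illustrates the bounded-influence point:

```python
import statistics
from itertools import combinations

def mad(data):
    """Median of absolute deviations from the median (unscaled)."""
    m = statistics.median(data)
    return statistics.median(abs(x - m) for x in data)

def gini_mean_difference(data):
    """Average absolute difference over all pairs of observations (unscaled)."""
    n = len(data)
    return sum(abs(a - b) for a, b in combinations(data, 2)) / (n * (n - 1) / 2)

clean = [1.0, 2.0, 3.0, 4.0, 5.0]
contaminated = clean[:-1] + [100.0]   # one gross outlier

# The standard deviation is blown up by the outlier; the MAD is not.
sd_jump = statistics.pstdev(contaminated) > 10 * statistics.pstdev(clean)
```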

  6. Blood pressure variability in man: its relation to high blood pressure, age and baroreflex sensitivity.

    PubMed

    Mancia, G; Ferrari, A; Gregorini, L; Parati, G; Pomidossi, G; Bertinieri, G; Grassi, G; Zanchetti, A

    1980-12-01

    1. Intra-arterial blood pressure and heart rate were recorded for 24 h in ambulant hospitalized patients of variable age who had normal blood pressure or essential hypertension. Mean 24 h values, standard deviations and variation coefficients were obtained as the averages of values separately analysed for 48 consecutive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for heart rate were smaller. 3. In hypertensive subjects standard deviation for mean arterial pressure was greater than in normotensive subjects of similar ages, but this was not the case for variation coefficient, which was slightly smaller in the former than in the latter group. Normotensive and hypertensive subjects showed no difference in standard deviation and variation coefficient for heart rate. 4. In both normotensive and hypertensive subjects standard deviation and even more so variation coefficient were slightly or not related to arterial baroreflex sensitivity as measured by various methods (phenylephrine, neck suction etc.). 5. It is concluded that blood pressure variability increases and heart rate variability decreases with age, but that changes in variability are not so obvious in hypertension. Also, differences in variability among subjects are only marginally explained by differences in baroreflex function.
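
The 24-h summaries described in point 1 (mean, standard deviation, and variation coefficient obtained as averages over 48 consecutive half-hour periods) reduce to a windowed computation. A sketch; the pressure samples and window length are hypothetical toy values, not the study's data:

```python
import statistics

def windowed_cv(samples, window):
    """Per-window mean, SD, and variation coefficient (SD/mean), averaged over windows."""
    stats = []
    for i in range(0, len(samples) - window + 1, window):
        w = samples[i:i + window]
        mean = statistics.fmean(w)
        sd = statistics.pstdev(w)
        stats.append((mean, sd, sd / mean))
    # 24-h summary = averages of the per-window values
    return tuple(statistics.fmean(col) for col in zip(*stats))

# Two toy "half-hour" windows of mean arterial pressure samples (mmHg)
map_samples = [90, 100, 110, 100, 80, 100, 120, 100]
mean_map, sd_map, cv_map = windowed_cv(map_samples, window=4)
```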

  7. Modeling the Zeeman effect in high altitude SSMIS channels for numerical weather prediction profiles: comparing a fast model and a line-by-line model

    NASA Astrophysics Data System (ADS)

    Larsson, R.; Milz, M.; Rayer, P.; Saunders, R.; Bell, W.; Booton, A.; Buehler, S. A.; Eriksson, P.; John, V.

    2015-10-01

    We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. For the same channel, the average difference between the fast model and the sensor measurement is 1.2 K, with a 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. For the same channel, the average difference between the fast model and the sensor measurement is 1.3 K, with a 2.4 K standard deviation. We consider the relatively small model differences a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (below 0.2 K) and smaller standard deviations (below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to a full three-dimensional magnetic field profile, the standard deviation relative to the fast model increases to almost 2 K due to viewing geometry dependencies, causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to the limited altitude range of the numerical weather prediction profiles.
We recommend that numerical weather prediction software using the fast model take the available fast Zeeman scheme into account for data assimilation of the affected sensor channels, to better constrain upper-atmospheric temperatures.

  8. Modeling the Zeeman effect in high-altitude SSMIS channels for numerical weather prediction profiles: comparing a fast model and a line-by-line model

    NASA Astrophysics Data System (ADS)

    Larsson, Richard; Milz, Mathias; Rayer, Peter; Saunders, Roger; Bell, William; Booton, Anna; Buehler, Stefan A.; Eriksson, Patrick; John, Viju O.

    2016-03-01

    We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high-altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. Concerning the same channel, there is 1.2 K on average between the fast model and the sensor measurement, with 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. Regarding the same channel, there is 1.3 K on average between the fast model and the sensor measurement, with 2.4 K standard deviation. We consider the relatively small model differences as a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (at below 0.2 K) and smaller standard deviations (at below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to using a full three-dimensional magnetic field profile, the standard deviation to the fast model is increased to almost 2 K due to viewing geometry dependencies, causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to limited altitude range of the numerical weather prediction profiles. 
We recommend that numerical weather prediction software using the fast model take the available fast Zeeman scheme into account for data assimilation of the affected sensor channels, to better constrain upper-atmospheric temperatures.

  9. Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
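
The power-law dependence of σ(R) on average size reported here is typically estimated as the slope of a log-log regression of the growth-rate standard deviation on the average size. A sketch on synthetic data constructed to follow an exact power law (the exponent −0.15 is illustrative, not the paper's fitted value):

```python
import math

def scaling_exponent(sizes, sigmas):
    """Least-squares slope of log sigma(R) versus log size: sigma ~ size**slope."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(g) for g in sigmas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic data obeying sigma = size**(-0.15) exactly
sizes = [1e2, 1e4, 1e6, 1e8]
sigmas = [s ** -0.15 for s in sizes]
beta = scaling_exponent(sizes, sigmas)
```

A negative slope recovered here corresponds to σ(R) decreasing with size, as for the GDP-like variables; the payroll case above has the opposite sign.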

  10. Size-dependent standard deviation for growth rates: empirical results and theoretical modeling.

    PubMed

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H Eugene; Grosse, I

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β ≈ 0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β ≈ -0.08. Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.

  11. The average direct current offset values for small digital audio recorders in an acoustically consistent environment.

    PubMed

    Koenig, Bruce E; Lacey, Douglas S

    2014-07-01

    In this research project, nine small digital audio recorders were tested using five sets of 30-min recordings at all available recording modes, with consistent audio material, identical source and microphone locations, and identical acoustic environments. The averaged direct current (DC) offset values and standard deviations were measured for 30-sec and 1-, 2-, 3-, 6-, 10-, 15-, and 30-min segments. The research found an inverse association between segment length and the standard deviation values, and that lengths beyond 30 min may not meaningfully reduce the standard deviation values. This research supports previous studies indicating that measured averaged DC offsets should only be used for exclusionary purposes in authenticity analyses, and that they exhibit consistent values only when the general acoustic environment and microphone/recorder configuration are held constant. Measured average DC offset values from exemplar recorders may not be directly comparable to those of submitted digital audio recordings without exactly duplicating the acoustic environment and microphone/recorder configuration. © 2014 American Academy of Forensic Sciences.
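
The segment-averaged DC offset described here is just the mean sample value per fixed-length segment, with the spread taken across segments. A toy sketch; the waveform and segment length are illustrative, not the study's 30-s to 30-min segments:

```python
import statistics

def dc_offset_by_segment(samples, seg_len):
    """Average DC offset per fixed-length segment, plus the spread across segments."""
    offsets = [statistics.fmean(samples[i:i + seg_len])
               for i in range(0, len(samples) - seg_len + 1, seg_len)]
    return statistics.fmean(offsets), statistics.pstdev(offsets)

# Toy waveform: alternating signal around a constant +0.01 DC bias
samples = [0.01 + (0.001 if i % 2 else -0.001) for i in range(1000)]
mean_offset, offset_sd = dc_offset_by_segment(samples, seg_len=100)
```

Longer segments average over more signal, which is why the study sees the across-segment standard deviation shrink as segment length grows.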

  12. Estimating active layer thickness and volumetric water content from ground penetrating radar measurements in Barrow, Alaska

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jafarov, E. E.; Parsekian, A. D.; Schaefer, K.

    Ground penetrating radar (GPR) has emerged as an effective tool for estimating active layer thickness (ALT) and volumetric water content (VWC) within the active layer. In August 2013, we conducted a series of GPR and probing surveys using a 500 MHz antenna and metallic probe around Barrow, Alaska. Here, we collected about 15 km of GPR data and 1.5 km of probing data. We describe the GPR data processing workflow from raw GPR data to the estimated ALT and VWC. We then include the corresponding uncertainties for each measured and estimated parameter. The estimated average GPR-derived ALT was 41 cm, with a standard deviation of 9 cm. The average probed ALT was 40 cm, with a standard deviation of 12 cm. The average GPR-derived VWC was 0.65, with a standard deviation of 0.14.

  13. Estimating active layer thickness and volumetric water content from ground penetrating radar measurements in Barrow, Alaska

    DOE PAGES

    Jafarov, E. E.; Parsekian, A. D.; Schaefer, K.; ...

    2018-01-09

    Ground penetrating radar (GPR) has emerged as an effective tool for estimating active layer thickness (ALT) and volumetric water content (VWC) within the active layer. In August 2013, we conducted a series of GPR and probing surveys using a 500 MHz antenna and metallic probe around Barrow, Alaska. Here, we collected about 15 km of GPR data and 1.5 km of probing data. We describe the GPR data processing workflow from raw GPR data to the estimated ALT and VWC. We then include the corresponding uncertainties for each measured and estimated parameter. The estimated average GPR-derived ALT was 41 cm, with a standard deviation of 9 cm. The average probed ALT was 40 cm, with a standard deviation of 12 cm. The average GPR-derived VWC was 0.65, with a standard deviation of 0.14.

  14. Determining the Equation of State (EoS) Parameters for Ballistic Gelatin

    DTIC Science & Technology

    2015-09-01

    Excerpted values: the specific heat of ballistic gelatin measured at room temperature, reported in Winter (1975), is approximately 1.13 cal/g/°C (4.73 J/(g·K)). Table 3 of the report lists the specific heat capacity, average heat capacity, and standard deviation as a function of temperature.

  15. Characterization of solar cells for space applications. Volume 5: Electrical characteristics of OCLI 225-micron MLAR wraparound cells as a function of intensity, temperature, and irradiation

    NASA Technical Reports Server (NTRS)

    Anspaugh, B. E.; Miyahira, T. F.; Weiss, R. S.

    1979-01-01

    Computed statistical averages and standard deviations with respect to the measured cells for each intensity-temperature measurement condition are presented. Averages and standard deviations of the cell characteristics are displayed in a two-dimensional array format: one dimension represents incoming light intensity, and the other, the cell temperature. Programs for calculating the temperature coefficients of the pertinent cell electrical parameters are presented, and postirradiation data are summarized.

  16. Mass balance, meteorology, area altitude distribution, glacier-surface altitude, ice motion, terminus position, and runoff at Gulkana Glacier, Alaska, 1996 balance year

    USGS Publications Warehouse

    March, Rod S.

    2003-01-01

    The 1996 measured winter snow, maximum winter snow, net, and annual balances in the Gulkana Glacier Basin were evaluated on the basis of meteorological, hydrological, and glaciological data. Averaged over the glacier, the measured winter snow balance was 0.87 meter on April 18, 1996, 1.1 standard deviations below the long-term average; the maximum winter snow balance, 1.06 meters, was reached on May 28, 1996; and the net balance (from August 30, 1995, to August 24, 1996) was -0.53 meter, 0.53 standard deviations below the long-term average. The annual balance (October 1, 1995, to September 30, 1996) was -0.37 meter. Area-averaged balances were reported using both the 1967 and 1993 area altitude distributions (the numbers previously given in this abstract use the 1993 area altitude distribution). Net balance was about 25 percent less negative using the 1993 area altitude distribution than the 1967 distribution. Annual average air temperature was 0.9 degree Celsius warmer than that recorded with the analog sensor used since 1966. Total precipitation catch for the year was 0.78 meter, 0.8 standard deviations below normal. The annual average wind speed was 3.5 meters per second in the first year of measuring wind speed. Annual runoff averaged 1.50 meters over the basin, 1.0 standard deviation below the long-term average. Glacier-surface altitude and ice-motion changes measured at three index sites document seasonal ice-speed and glacier-thickness changes. Both showed a continuation of a slowing and thinning trend present in the 1990s. The glacier terminus and lower ablation area were defined for 1996 with a handheld Global Positioning System survey of 126 locations spread out over about 4 kilometers on the lower glacier margin. From 1949 to 1996, the terminus retreated about 1,650 meters for an average retreat rate of 35 meters per year.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazelaar, Colien, E-mail: c.hazelaar@vumc.nl; Dahele, Max; Mostafavi, Hassan

    Purpose: Spine stereotactic body radiation therapy (SBRT) requires highly accurate positioning. We report our experience with markerless template matching and triangulation of kilovoltage images routinely acquired during spine SBRT, to determine spine position. Methods and Materials: Kilovoltage images, continuously acquired at 7, 11 or 15 frames/s during volumetric modulated spine SBRT of 18 patients, consisting of 93 fluoroscopy datasets (1 dataset/arc), were analyzed off-line. Four patients were immobilized in a head/neck mask, 14 had no immobilization. Two-dimensional (2D) templates were created for each gantry angle from planning computed tomography data and registered to prefiltered kilovoltage images to determine 2D shifts between actual and planned spine position. Registrations were considered valid if the normalized cross correlation score was ≥0.15. Multiple registrations were triangulated to determine 3D position. For each spine position dataset, average positional offset and standard deviation were calculated. To verify the accuracy and precision of the technique, mean positional offset and standard deviation for twenty stationary phantom datasets with different baseline shifts were measured. Results: For the phantom, average standard deviations were 0.18 mm for left-right (LR), 0.17 mm for superior-inferior (SI), and 0.23 mm for the anterior-posterior (AP) direction. Maximum difference in average detected and applied shift was 0.09 mm. For the 93 clinical datasets, the percentage of valid matched frames was, on average, 90.7% (range: 49.9-96.1%) per dataset. Average standard deviations for all datasets were 0.28, 0.19, and 0.28 mm for LR, SI, and AP, respectively. Spine position offsets were, on average, −0.05 (range: −1.58 to 2.18), −0.04 (range: −3.56 to 0.82), and −0.03 mm (range: −1.16 to 1.51), respectively. Average positional deviation was <1 mm in all directions in 92% of the arcs.
Conclusions: Template matching and triangulation using kilovoltage images acquired during irradiation allow spine position detection with submillimeter accuracy at subsecond intervals. Although the majority of patients were not immobilized, most vertebrae were stable at the sub-mm level during spine SBRT delivery.
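
The validity criterion used in this study (a normalized cross-correlation score ≥ 0.15 between a 2D template and the kilovoltage frame) can be sketched in 1D. Real implementations score the template at every image offset; the arrays here are toy values:

```python
import statistics

def ncc(template, patch):
    """Normalized cross-correlation score in [-1, 1] between two equal-size arrays."""
    tm, pm = statistics.fmean(template), statistics.fmean(patch)
    num = sum((t - tm) * (p - pm) for t, p in zip(template, patch))
    den = (sum((t - tm) ** 2 for t in template) *
           sum((p - pm) ** 2 for p in patch)) ** 0.5
    return num / den if den else 0.0

template = [1.0, 2.0, 3.0, 4.0]
patch = [11.0, 12.0, 13.0, 14.0]   # same shape, different brightness
score = ncc(template, patch)
valid = score >= 0.15              # acceptance threshold used in the study
```

Because both signals are demeaned and normalized, a uniform brightness offset between template and frame does not lower the score.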

  18. Comparison of patient-specific instruments with standard surgical instruments in determining glenoid component position: a randomized prospective clinical trial.

    PubMed

    Hendel, Michael D; Bryan, Jason A; Barsoum, Wael K; Rodriguez, Eric J; Brems, John J; Evans, Peter J; Iannotti, Joseph P

    2012-12-05

    Glenoid component malposition for anatomic shoulder replacement may result in complications. The purpose of this study was to define the efficacy of a new surgical method to place the glenoid component. Thirty-one patients were randomized for glenoid component placement with use of either novel three-dimensional computed tomographic scan planning software combined with patient-specific instrumentation (the glenoid positioning system group), or conventional computed tomographic scan, preoperative planning, and surgical technique, utilizing instruments provided by the implant manufacturer (the standard surgical group). The desired position of the component was determined preoperatively. Postoperatively, a computed tomographic scan was used to define and compare the actual implant location with the preoperative plan. In the standard surgical group, the average preoperative glenoid retroversion was -11.3° (range, -39° to 17°). In the glenoid positioning system group, the average glenoid retroversion was -14.8° (range, -27° to 7°). When the standard surgical group was compared with the glenoid positioning system group, patient-specific instrumentation technology significantly decreased (p < 0.05) the average deviation of implant position for inclination and medial-lateral offset. Overall, the average deviation in version was 6.9° in the standard surgical group and 4.3° in the glenoid positioning system group. The average deviation in inclination was 11.6° in the standard surgical group and 2.9° in the glenoid positioning system group. The greatest benefit of patient-specific instrumentation was observed in patients with retroversion in excess of 16°; the average deviation was 10° in the standard surgical group and 1.2° in the glenoid positioning system group (p < 0.001). 
Preoperative planning and patient-specific instrumentation use resulted in a significant improvement in the selection and use of the optimal type of implant and a significant reduction in the frequency of malpositioned glenoid implants. Novel three-dimensional preoperative planning, coupled with patient and implant-specific instrumentation, allows the surgeon to better define the preoperative pathology, select the optimal implant design and location, and then accurately execute the plan at the time of surgery.

  19. Falling Behind: New Evidence on the Black-White Achievement Gap

    ERIC Educational Resources Information Center

    Levitt, Steven D.; Fryer, Roland G.

    2004-01-01

    On average, black students typically score one standard deviation below white students on standardized tests--roughly the difference in performance between the average 4th grader and the average 8th grader. Historically, what has come to be known as the black-white test-score gap has emerged before children enter kindergarten and has tended to…

  20. Decomposition Analyses Applied to a Complex Ultradian Biorhythm: The Oscillating NADH Oxidase Activity of Plasma Membranes Having a Potential Time-Keeping (Clock) Function

    PubMed Central

    Foster, Ken; Anwar, Nasim; Pogue, Rhea; Morré, Dorothy M.; Keenan, T. W.; Morré, D. James

    2003-01-01

    Seasonal decomposition analyses were applied to the statistical evaluation of an oscillating activity for a plasma membrane NADH oxidase activity with a temperature-compensated period of 24 min. The decomposition fits were used to validate the cyclic oscillatory pattern. Three measures were used to evaluate the accuracy of the fits: the mean absolute percentage error (MAPE); the mean absolute deviation (MAD), the average absolute deviation from the fitted values; and the mean squared deviation (MSD), the average squared deviation from the fitted values; together with R-squared and the Henriksson-Merton p value. Decomposition was carried out by fitting a trend line to the data, then detrending the data if necessary by subtracting the trend component. The data, with or without detrending, were then smoothed by subtracting a centered moving average of length equal to the period length determined by Fourier analysis. Finally, the time series were decomposed into cyclic and error components. The findings not only validate the periodic nature of the major oscillations but suggest, as well, that the minor intervening fluctuations also recur within each period with a reproducible pattern of recurrence. PMID:19330112
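
The three decomposition accuracy measures named above (MAPE, MAD, MSD) have standard definitions; a sketch with hypothetical observed and fitted series:

```python
def fit_accuracy(observed, fitted):
    """MAPE (percent), MAD, and MSD between a series and its decomposition fit."""
    n = len(observed)
    errors = [o - f for o, f in zip(observed, fitted)]
    mape = 100.0 * sum(abs(e / o) for e, o in zip(errors, observed)) / n
    mad = sum(abs(e) for e in errors) / n
    msd = sum(e ** 2 for e in errors) / n
    return mape, mad, msd

observed = [10.0, 20.0, 30.0, 40.0]
fitted = [11.0, 19.0, 30.0, 42.0]
mape, mad, msd = fit_accuracy(observed, fitted)
```

Smaller values of all three indicate a closer fit; MSD weights large residuals more heavily than MAD.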

  1. Associations between Changes in City and Address Specific Temperature and QT Interval - The VA Normative Aging Study

    PubMed Central

    Mehta, Amar J.; Kloog, Itai; Zanobetti, Antonella; Coull, Brent A.; Sparrow, David; Vokonas, Pantel; Schwartz, Joel

    2014-01-01

    Background The underlying mechanisms of the association between ambient temperature and cardiovascular morbidity and mortality are not well understood, particularly for daily temperature variability. We evaluated whether daily mean temperature and the standard deviation of temperature were associated with heart rate-corrected QT interval (QTc) duration, a marker of ventricular repolarization, in a prospective cohort of older men. Methods This longitudinal analysis included 487 older men participating in the VA Normative Aging Study with up to three visits between 2000 and 2008 (n = 743). We analyzed associations between QTc and moving averages (1–7, 14, 21, and 28 days) of the 24-hour mean and standard deviation of temperature as measured from a local weather monitor, and the 24-hour mean temperature estimated from a spatiotemporal prediction model, in time-varying linear mixed-effect regression. Effect modification by season, diabetes, coronary heart disease, obesity, and age was also evaluated. Results Higher mean temperature, as measured from the local monitor and estimated from the prediction model, was associated with longer QTc at moving averages of 21 and 28 days. Increased 24-hr standard deviation of temperature was associated with longer QTc at moving averages from 4 up to 28 days; a 1.9°C interquartile range increase in the 4-day moving average standard deviation of temperature was associated with a 2.8 msec (95%CI: 0.4, 5.2) longer QTc. Associations between the 24-hr standard deviation of temperature and QTc were stronger in colder months and in participants with diabetes and coronary heart disease. Conclusion/Significance In this sample of older men, elevated mean temperature was associated with longer QTc, and increased variability of temperature was associated with longer QTc, particularly during colder months and among individuals with diabetes and coronary heart disease.
These findings may offer insight into an important underlying mechanism of temperature-related cardiovascular morbidity and mortality in an older population. PMID:25238150

  2. Statistical characteristics of cloud variability. Part 1: Retrieved cloud liquid water path at three ARM sites

    NASA Astrophysics Data System (ADS)

    Huang, Dong; Campos, Edwin; Liu, Yangang

    2014-09-01

    Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the lognormal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.
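
The scale-dependent statistics this record describes — standard deviation, relative dispersion (standard deviation over mean), and skewness computed within averaging windows of increasing length — can be sketched as below. The lognormal series and window sizes are illustrative assumptions standing in for the LWP retrievals, not the ARM data.

```python
import numpy as np

def window_stats(series, window):
    """Per-window std, relative dispersion (std/mean), and skewness,
    averaged over non-overlapping windows of the given length."""
    n = len(series) // window
    stats = []
    for i in range(n):
        w = series[i * window:(i + 1) * window]
        mean, std = w.mean(), w.std()
        stats.append((std, std / mean, np.mean(((w - mean) / std) ** 3)))
    return np.mean(stats, axis=0)

rng = np.random.default_rng(0)
lwp = rng.lognormal(mean=4.0, sigma=0.8, size=12_000)  # synthetic LWP-like series
for window in (10, 100, 1000):
    std, disp, skew = window_stats(lwp, window)
    print(f"window={window:4d}  std={std:6.1f}  dispersion={disp:.2f}  skewness={skew:.2f}")
```

Even for this uncorrelated synthetic series, all three statistics grow with window size, because short windows undersample the heavy tail — the same qualitative behavior the abstract reports for small averaging windows.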

  3. Statistical characteristics of cloud variability. Part 1: Retrieved cloud liquid water path at three ARM sites: Observed cloud variability at ARM sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Dong; Campos, Edwin; Liu, Yangang

    2014-09-17

Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy’s Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the log normal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.

  4. Variability of pesticide detections and concentrations in field replicate water samples collected for the National Water-Quality Assessment Program, 1992-97

    USGS Publications Warehouse

    Martin, Jeffrey D.

    2002-01-01

Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability for pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
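
A pooled relative standard deviation of the kind summarized above can be computed from replicate sets by pooling the squared RSDs weighted by degrees of freedom; this is one common pooling formula, and the replicate pairs below are hypothetical, not NAWQA data.

```python
import math

def pooled_rsd(replicate_sets):
    """Pool relative standard deviations across replicate sets: average
    the squared per-set RSDs, weighted by each set's degrees of freedom."""
    num, den = 0.0, 0
    for reps in replicate_sets:
        n = len(reps)
        mean = sum(reps) / n
        var = sum((x - mean) ** 2 for x in reps) / (n - 1)
        rsd = math.sqrt(var) / mean          # relative standard deviation
        num += (n - 1) * rsd ** 2
        den += n - 1
    return math.sqrt(num / den)

# hypothetical field-replicate pairs (concentrations in µg/L)
sets = [[0.010, 0.012], [0.105, 0.095], [0.98, 1.02]]
print(f"pooled RSD = {100 * pooled_rsd(sets):.1f}%")
```

Note how the lowest-concentration pair contributes the largest RSD, mirroring the report's finding that relative variability is highest below 0.01 µg/L.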

  5. Fidelity deviation in quantum teleportation

    NASA Astrophysics Data System (ADS)

    Bang, Jeongho; Ryu, Junghee; Kaszlikowski, Dagomir

    2018-04-01

    We analyze the performance of quantum teleportation in terms of average fidelity and fidelity deviation. The average fidelity is defined as the average value of the fidelities over all possible input states and the fidelity deviation is their standard deviation, which is referred to as a concept of fluctuation or universality. In the analysis, we find the condition to optimize both measures under a noisy quantum channel—we here consider the so-called Werner channel. To characterize our results, we introduce a 2D space defined by the aforementioned measures, in which the performance of the teleportation is represented as a point with the channel noise parameter. Through further analysis, we specify some regions drawn for different channel conditions, establishing the connection to the dissimilar contributions of the entanglement to the teleportation and the Bell inequality violation.

  6. Family structure and childhood anthropometry in Saint Paul, Minnesota in 1918

    PubMed Central

    Warren, John Robert

    2017-01-01

Concern with childhood nutrition prompted numerous surveys of children’s growth in the United States after 1870. The Children’s Bureau’s 1918 “Weighing and Measuring Test” measured two million children to produce the first official American growth norms. Individual data for 14,000 children survive from the Saint Paul, Minnesota survey, whose children closely approximated national stature norms. As well as anthropometry, the survey recorded exact ages, street address and full name. These variables allow linkage to the 1920 census to obtain demographic and socioeconomic information. We matched 72% of children to census families creating a sample of nearly 10,000 children. Children in the entire survey (linked set) averaged 0.74 (0.72) standard deviations below modern WHO height-for-age standards, and 0.48 (0.46) standard deviations below modern weight-for-age norms. Sibship size strongly influenced height-for-age, and had weaker influence on weight-for-age. Each additional child aged six or under reduced height-for-age scores by 0.07 standard deviations (95% CI: −0.03, 0.11). Teenage siblings had little effect on height-for-age. Social class effects were substantial. Children of laborers averaged half a standard deviation shorter than children of professionals. Family structure and socio-economic status had compounding impacts on children’s stature. PMID:28943749

  7. The Effect of Paid Leave on Maternal Mental Health.

    PubMed

    Mandal, Bidisha

    2018-06-07

Objectives I examined the relationship between paid maternity leave and maternal mental health among women returning to work within 12 weeks of childbirth, after 12 weeks, and those returning specifically to full-time work within 12 weeks of giving birth. Methods I used data from 3850 women who worked full-time before childbirth from the Early Childhood Longitudinal Study-Birth Cohort. I utilized propensity score matching techniques to address selection bias. Mental health was measured using the Center for Epidemiologic Studies Depression (CESD) scale, with high scores indicating greater depressive symptoms. Results Returning to work after giving birth provided psychological benefits to women who used to work full-time before childbirth. The average CESD score of women who returned to work was 0.15 standard deviation (p < 0.01) lower than the average CESD score of all women who worked full-time before giving birth. Shorter leave, on the other hand, was associated with adverse effects on mental health. The average CESD score of women who returned within 12 weeks of giving birth was 0.13 standard deviation higher (p < 0.05) than the average CESD score of all women who rejoined the labor market within 9 months of giving birth. However, receipt of paid leave was associated with an improved mental health outcome. Among all women who returned to work within 12 weeks of childbirth, those women who received some paid leave had a 0.17 standard deviation (p < 0.05) lower CESD score than the average CESD score. The result was stronger for women who returned to full-time work within 12 weeks of giving birth, with a 0.32 standard deviation (p < 0.01) lower CESD score than the average CESD score. Conclusions The study revealed that the negative psychological effect of early return to work after giving birth was alleviated when women received paid leave.

  8. Ambulatory blood pressure monitoring-derived short-term blood pressure variability in primary hyperparathyroidism.

    PubMed

    Concistrè, A; Grillo, A; La Torre, G; Carretta, R; Fabris, B; Petramala, L; Marinelli, C; Rebellato, A; Fallo, F; Letizia, C

    2018-04-01

Primary hyperparathyroidism is associated with a cluster of cardiovascular manifestations, including hypertension, leading to increased cardiovascular risk. The aim of our study was to investigate the ambulatory blood pressure monitoring-derived short-term blood pressure variability in patients with primary hyperparathyroidism, in comparison with patients with essential hypertension and normotensive controls. Twenty-five patients with primary hyperparathyroidism (7 normotensive, 18 hypertensive) underwent ambulatory blood pressure monitoring at diagnosis, and fifteen of them were re-evaluated after parathyroidectomy. Short-term blood pressure variability was derived from ambulatory blood pressure monitoring and calculated as follows: 1) standard deviation of 24-h, day-time and night-time BP; 2) the average of day-time and night-time standard deviation, weighted for the duration of the day and night periods (24-h "weighted" standard deviation of BP); 3) average real variability, i.e., the average of the absolute differences between all consecutive BP measurements. Baseline data of normotensive and essential hypertension patients were matched for age, sex, BMI and 24-h ambulatory blood pressure monitoring values with normotensive and hypertensive-primary hyperparathyroidism patients, respectively. Normotensive-primary hyperparathyroidism patients showed a 24-h weighted standard deviation (P < 0.01) and average real variability (P < 0.05) of systolic blood pressure higher than those of 12 normotensive controls. 24-h average real variability of systolic BP, as well as serum calcium and parathyroid hormone levels, were reduced in operated patients (P < 0.001). A positive correlation of serum calcium and parathyroid hormone with 24-h average real variability of systolic BP was observed in the entire primary hyperparathyroidism patients group (P = 0.04, P = 0.02; respectively). 
Systolic blood pressure variability is increased in normotensive patients with primary hyperparathyroidism and is reduced by parathyroidectomy, and may potentially represent an additional cardiovascular risk factor in this disease.
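
The second and third variability measures defined above (the duration-weighted 24-h SD and the average real variability) can be sketched directly; the readings and the 16 h/8 h day-night split below are hypothetical illustrations, not study data.

```python
import numpy as np

def weighted_sd(day_bp, night_bp, day_hours=16, night_hours=8):
    """24-h 'weighted' SD: the day and night SDs averaged with weights
    proportional to the duration of each period, which keeps the
    day-night BP fall from inflating the 24-h SD."""
    return (np.std(day_bp) * day_hours + np.std(night_bp) * night_hours) / (day_hours + night_hours)

def average_real_variability(bp):
    """ARV: mean absolute difference between consecutive readings."""
    bp = np.asarray(bp, dtype=float)
    return float(np.mean(np.abs(np.diff(bp))))

# hypothetical systolic readings (mmHg)
day = [128, 135, 131, 140, 126, 133]
night = [112, 108, 115, 110]
print(f"24-h weighted SD = {weighted_sd(day, night):.1f} mmHg")
print(f"ARV (day) = {average_real_variability(day):.1f} mmHg")
```

Unlike the plain 24-h SD, ARV depends on the order of the readings, so it captures reading-to-reading fluctuation rather than overall spread.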

  9. Global Summary MGS TES Data and Mars-Gram Validation

    NASA Technical Reports Server (NTRS)

    Justus, C.; Johnson, D.; Parker, Nelson C. (Technical Monitor)

    2002-01-01

Mars Global Reference Atmospheric Model (Mars-GRAM 2001) is an engineering-level Mars atmosphere model widely used for many Mars mission applications. From 0-80 km, it is based on NASA Ames Mars General Circulation Model (MGCM), while above 80 km it is based on University of Arizona Mars Thermospheric General Circulation Model. Mars-GRAM 2001 and MGCM use surface topography from Mars Global Surveyor Mars Orbiting Laser Altimeter (MOLA). Validation studies are described comparing Mars-GRAM with a global summary data set of Mars Global Surveyor Thermal Emission Spectrometer (TES) data. TES averages and standard deviations were assembled from binned TES data which covered surface to approx. 40 km, over more than a full Mars year (February, 1999 - June, 2001, just before start of a Mars global dust storm). TES data were binned in 10-by-10 degree latitude-longitude bins (i.e. 36 longitude bins by 19 latitude bins), 12 seasonal bins (based on 30 degree increments of Ls angle). Bin averages and standard deviations were assembled at 23 data levels (temperature at 21 pressure levels, plus surface temperature and surface pressure). Two time-of-day bins were used: local time near 2 hours or 14 hours. Two dust optical depth bins were used: infrared optical depth either less than or greater than 0.25 (which corresponds to visible optical depth either less than or greater than about 0.5). For interests in aerocapture and precision entry and landing, comparisons focused on atmospheric density. TES densities versus height were computed from TES temperature versus pressure, using assumptions of perfect gas law and hydrostatics. Mars-GRAM validation studies used density ratio (TES/Mars-GRAM) evaluated at data bin center points in space and time. Observed average TES/Mars-GRAM density ratios were generally 1+/-0.05, except at high altitudes (15-30 km, depending on season) and high latitudes (> 45 deg N), or at most altitudes in the southern hemisphere at Ls approx. 
90 and 180 deg. Compared to TES averages for a given latitude and season, TES data had average density standard deviation about the mean of approx. 6.5-10.5% (varying with height) for all data, or approx. 5-12%, depending on time of day and dust optical depth. Average standard deviation of TES/Mars-GRAM density ratio was 8.9% for local time 2 hours and 7.1% for local time 14 hours. Thus standard deviation of observed TES/Mars-GRAM density ratio, evaluated at matching positions and times, is about the same as the standard deviation of TES data about the TES mean value at a given position and season.
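
The conversion described here — TES densities versus height computed from temperature versus pressure under the perfect gas law and hydrostatics — can be sketched as below. The gas constant, gravity value, and profile are rough illustrative assumptions, not the constants or data of the study.

```python
import math

R_CO2 = 191.8   # J/(kg K), approximate gas constant for Mars' CO2-dominated air
G_MARS = 3.71   # m/s^2, approximate Mars surface gravity

def profile_to_density_height(pressures_pa, temps_k):
    """Turn a temperature-vs-pressure profile into (height, density) levels:
    density from the perfect gas law, height by integrating hydrostatic
    balance, dz = -(R*T/g) * d(ln p), layer by layer."""
    rho = [p / (R_CO2 * t) for p, t in zip(pressures_pa, temps_k)]
    z = [0.0]
    for i in range(1, len(pressures_pa)):
        t_mean = 0.5 * (temps_k[i - 1] + temps_k[i])   # layer-mean temperature
        z.append(z[-1] - (R_CO2 * t_mean / G_MARS)
                 * math.log(pressures_pa[i] / pressures_pa[i - 1]))
    return z, rho

p = [600, 400, 200, 100]   # Pa, decreasing upward (hypothetical levels)
t = [210, 205, 195, 185]   # K
z, rho = profile_to_density_height(p, t)
for zi, ri in zip(z, rho):
    print(f"z = {zi:8.0f} m   rho = {ri:.5f} kg/m^3")
```

Because pressure decreases upward, the log-pressure steps are negative and the heights come out monotonically increasing, with density falling with height.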

  10. A Robust Interpretation of Teaching Evaluation Ratings

    ERIC Educational Resources Information Center

    Bi, Henry H.

    2018-01-01

    There are no absolute standards regarding what teaching evaluation ratings are satisfactory. It is also problematic to compare teaching evaluation ratings with the average or with a cutoff number to determine whether they are adequate. In this paper, we use average and standard deviation charts (X[overbar]-S charts), which are based on the theory…
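
An X̄-S chart of the kind this paper applies sets a center line and control limits from subgroup means and subgroup standard deviations. A minimal sketch follows; the rating subgroups are hypothetical, and the constant c4 is the standard bias correction for the sample standard deviation.

```python
import math

def c4(n):
    """Bias-correction constant c4 for the sample standard deviation."""
    return math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

def xbar_s_limits(subgroups):
    """Center line and 3-sigma limits for the X-bar chart, built from
    subgroup means and the average subgroup standard deviation."""
    n = len(subgroups[0])
    means = [sum(g) / n for g in subgroups]
    sds = []
    for g in subgroups:
        m = sum(g) / n
        sds.append(math.sqrt(sum((x - m) ** 2 for x in g) / (n - 1)))
    grand_mean = sum(means) / len(means)
    s_bar = sum(sds) / len(sds)
    a3 = 3 / (c4(n) * math.sqrt(n))   # the tabulated A3 constant, computed directly
    return grand_mean - a3 * s_bar, grand_mean, grand_mean + a3 * s_bar

# hypothetical teaching-evaluation ratings: three subgroups of five ratings
ratings = [[4.2, 4.5, 3.9, 4.1, 4.4], [3.8, 4.0, 4.3, 4.1, 3.9], [4.6, 4.4, 4.2, 4.5, 4.3]]
lcl, center, ucl = xbar_s_limits(ratings)
print(f"X-bar chart: LCL={lcl:.2f}  CL={center:.2f}  UCL={ucl:.2f}")
```

A subgroup mean outside the limits then signals a rating that is unusual relative to the instructor's own process variation, rather than relative to an arbitrary cutoff.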

  11. High-Throughput RNA Interference Screening: Tricks of the Trade

    PubMed Central

    Nebane, N. Miranda; Coric, Tatjana; Whig, Kanupriya; McKellip, Sara; Woods, LaKeisha; Sosa, Melinda; Sheppard, Russell; Rasmussen, Lynn; Bjornsti, Mary-Ann; White, E. Lucile

    2016-01-01

    The process of validating an assay for high-throughput screening (HTS) involves identifying sources of variability and developing procedures that minimize the variability at each step in the protocol. The goal is to produce a robust and reproducible assay with good metrics. In all good cell-based assays, this means coefficient of variation (CV) values of less than 10% and a signal window of fivefold or greater. HTS assays are usually evaluated using Z′ factor, which incorporates both standard deviation and signal window. A Z′ factor value of 0.5 or higher is acceptable for HTS. We used a standard HTS validation procedure in developing small interfering RNA (siRNA) screening technology at the HTS center at Southern Research. Initially, our assay performance was similar to published screens, with CV values greater than 10% and Z′ factor values of 0.51 ± 0.16 (average ± standard deviation). After optimizing the siRNA assay, we got CV values averaging 7.2% and a robust Z′ factor value of 0.78 ± 0.06 (average ± standard deviation). We present an overview of the problems encountered in developing this whole-genome siRNA screening program at Southern Research and how equipment optimization led to improved data quality. PMID:23616418
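
The Z′ factor cited above combines the control standard deviations with the signal window as Z′ = 1 − 3(σpos + σneg)/|μpos − μneg|. A sketch with hypothetical control-well signals (not the Southern Research data):

```python
import statistics

def z_prime(pos, neg):
    """Z' factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values of 0.5 or higher are generally acceptable for HTS."""
    sp, sn = statistics.stdev(pos), statistics.stdev(neg)
    return 1 - 3 * (sp + sn) / abs(statistics.mean(pos) - statistics.mean(neg))

def cv_percent(values):
    """Coefficient of variation as a percentage of the mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# hypothetical positive- and negative-control signals from one plate
pos = [980, 1010, 995, 1005, 990]
neg = [102, 98, 105, 95, 100]
print(f"Z' = {z_prime(pos, neg):.2f}, CV(pos) = {cv_percent(pos):.1f}%")
```

Because Z′ penalizes both control variability and a narrow signal window, tightening the CVs (as the authors did through equipment optimization) raises Z′ even when the fold-change is unchanged.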

  12. How accurate is accident data in road safety research? An application of vehicle black box data regarding pedestrian-to-taxi accidents in Korea.

    PubMed

    Chung, Younshik; Chang, IlJoon

    2015-11-01

Recently, the introduction of vehicle black box systems or in-vehicle video event data recorders enables the driver to use the system to collect more accurate crash information such as location, time, and situation at the pre-crash and crash moment, which can be analyzed to find the crash causal factors more accurately. This study presents the vehicle black box system in brief and its application status in Korea. Based on the crash data obtained from the vehicle black box system, this study analyzes the accuracy of the crash data collected from the existing road crash data recording method, which has been recorded by police officers based on accident parties' statements or eyewitness accounts. The analysis results show that the crash data observed by the existing method have an average spatial difference of 84.48 m (standard deviation 157.75 m) and an average temporal error of 29.05 min (standard deviation 19.24 min). Additionally, the average and standard deviation of crash speed errors were found to be 9.03 km/h and 7.21 km/h, respectively. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Historical Precision of an Ozone Correction Procedure for AM0 Solar Cell Calibration

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Jenkins, Phillip; Scheiman, David

    2005-01-01

In an effort to improve the accuracy of the high altitude aircraft method for calibration of high band-gap solar cells, the ozone correction procedure has been revisited. The new procedure adjusts the measured short circuit current, Isc, according to satellite-based ozone measurements and a model of the atmospheric ozone profile, and then extrapolates the measurements to air mass zero, AM0. The purpose of this paper is to assess the precision of the revised procedure by applying it to historical data sets. The average Isc of a silicon cell for a flying season increased 0.5% and the standard deviation improved from 0.5% to 0.3%. The 12 year average Isc of a GaAs cell increased 1% and the standard deviation improved from 0.8% to 0.5%. The slight increase in measured Isc and improvement in standard deviation suggests that the accuracy of the aircraft method may improve from 1% to nearly 0.5%.

  14. Signal averaging limitations in heterodyne- and direct-detection laser remote sensing measurements

    NASA Technical Reports Server (NTRS)

    Menyuk, N.; Killinger, D. K.; Menyuk, C. R.

    1983-01-01

    The improvement in measurement uncertainty brought about by the averaging of increasing numbers of pulse return signals in both heterodyne- and direct-detection lidar systems is investigated. A theoretical analysis is presented which shows the standard deviation of the mean measurement to decrease as the inverse square root of the number of measurements, except in the presence of temporal correlation. Experimental measurements based on a dual-hybrid-TEA CO2 laser differential absorption lidar system are reported which demonstrate that the actual reduction in the standard deviation of the mean in both heterodyne- and direct-detection systems is much slower than the inverse square-root dependence predicted for uncorrelated signals, but is in agreement with predictions in the event of temporal correlation. Results thus favor the use of direct detection at relatively short range where the lower limit of the standard deviation of the mean is about 2 percent, but advantages of heterodyne detection at longer ranges are noted.
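
The averaging behavior described here — the standard deviation of the mean falling as 1/√N for uncorrelated pulses but much more slowly under temporal correlation — can be demonstrated with a simulation. The AR(1) process below is an illustrative stand-in for temporally correlated lidar returns, not the paper's signal model.

```python
import numpy as np

rng = np.random.default_rng(42)

def sd_of_mean(n_pulses, rho=0.0, trials=2000):
    """Standard deviation of the N-pulse average for AR(1) returns with
    lag-one correlation rho (rho=0 gives uncorrelated, 'white' returns)."""
    means = []
    for _ in range(trials):
        e = rng.standard_normal(n_pulses)
        x = np.empty(n_pulses)
        x[0] = e[0]
        for i in range(1, n_pulses):
            # unit-variance AR(1) step with lag-one correlation rho
            x[i] = rho * x[i - 1] + np.sqrt(1 - rho ** 2) * e[i]
        means.append(x.mean())
    return float(np.std(means))

for n in (1, 16, 64):
    print(f"N={n:3d}  uncorrelated: {sd_of_mean(n):.3f}  "
          f"correlated (rho=0.9): {sd_of_mean(n, rho=0.9):.3f}  "
          f"1/sqrt(N) = {1 / np.sqrt(n):.3f}")
```

For the uncorrelated case the simulated values track 1/√N, while the correlated case flattens out well above it, as the lidar measurements did.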

  15. Mars Global Reference Atmospheric Model (Mars-GRAM) and Database for Mission Design

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Duvall, Aleta; Johnson, D. L.

    2003-01-01

    Mars Global Reference Atmospheric Model (Mars-GRAM 2001) is an engineering-level Mars atmosphere model widely used for many Mars mission applications. From 0-80 km, it is based on NASA Ames Mars General Circulation Model, while above 80 km it is based on Mars Thermospheric General Circulation Model. Mars-GRAM 2001 and MGCM use surface topography from Mars Global Surveyor Mars Orbiting Laser Altimeter. Validation studies are described comparing Mars-GRAM with Mars Global Surveyor Radio Science and Thermal Emission Spectrometer data. RS data from 2480 profiles were used, covering latitudes 75 deg S to 72 deg N, surface to approximately 40 km, for seasons ranging from areocentric longitude of Sun (Ls) = 70-160 deg and 265-310 deg. RS data spanned a range of local times, mostly 0-9 hours and 18-24 hours. For interests in aerocapture and precision landing, comparisons concentrated on atmospheric density. At a fixed height of 20 km, RS density varied by about a factor of 2.5 over ranges of latitudes and Ls values observed. Evaluated at matching positions and times, these figures show average RSMars-GRAM density ratios were generally 1+/-)0.05, except at heights above approximately 25 km and latitudes above approximately 50 deg N. Average standard deviation of RSMars-GRAM density ratio was 6%. TES data were used covering surface to approximately 40 km, over more than a full Mars year (February, 1999 - June, 2001, just before start of a Mars global dust storm). Depending on season, TES data covered latitudes 85 deg S to 85 deg N. Most TES data were concentrated near local times 2 hours and 14 hours. Observed average TES/Mars-GRAM density ratios were generally 1+/-0.05, except at high altitudes (15-30 km, depending on season) and high latitudes (greater than 45 deg N), or at most altitudes in the southern hemisphere at Ls approximately 90 and 180 deg. 
Compared to TES averages for a given latitude and season, TES data had average density standard deviation about the mean of approximately 2.5% for all data, or approximately 1-4%, depending on time of day and dust optical depth. Average standard deviation of TES/Mars-GRAM density ratio was 8.9% for local time 2 hours and 7.1% for local time 14 hours. Thus standard deviation of observed TES/Mars-GRAM density ratio, evaluated at matching positions and times, is about three times the standard deviation of TES data about the TES mean value at a given position and season.

  16. Lack of sensitivity of staffing for 8-hour sessions to standard deviation in daily actual hours of operating room time used for surgeons with long queues.

    PubMed

    Pandit, Jaideep J; Dexter, Franklin

    2009-06-01

    At multiple facilities including some in the United Kingdom's National Health Service, the following are features of many surgical-anesthetic teams: i) there is sufficient workload for each operating room (OR) list to almost always be fully scheduled; ii) the workdays are organized such that a single surgeon is assigned to each block of time (usually 8 h); iii) one team is assigned per block; and iv) hardly ever would a team "split" to do cases in more than one OR simultaneously. We used Monte-Carlo simulation using normal and Weibull distributions to estimate the times to complete lists of cases scheduled into such 8 h sessions. For each combination of mean and standard deviation, inefficiencies of use of OR time were determined for 10 h versus 8 h of staffing. When the mean actual hours of OR time used averages < or = 8 h 25 min, 8 h of staffing has higher OR efficiency than 10 h for all combinations of standard deviation and relative cost of over-run to under-run. When mean > or = 8 h 50 min, 10 h staffing has higher OR efficiency. For 8 h 25 min < mean < 8 h 50 min, the economic break-even point depends on conditions. For example, break-even is: (a) 8 h 27 min for Weibull, standard deviation of 60 min and relative cost of over-run to under-run of 2.0 versus (b) 8 h 48 min for normal, standard deviation of 0 min and relative cost ratio of 1.50. Although the simplest decision rule would be to staff for 8 h if the mean workload is < or = 8 h 40 min and to staff for 10 h otherwise, performance was poor. For example, for the Weibull distribution with mean 8 h 40 min, standard deviation 60 min, and relative cost ratio of 2.00, the inefficiency of use of OR time would be 34% larger if staffing were planned for 8 h instead of 10 h. For surgical teams with 8 h sessions, use the following decision rule for anesthesiology and OR nurse staffing. If actual hours of OR time used averages < or = 8 h 25 min, plan 8 h staffing. 
If average > or = 8 h 50 min, plan 10 h staffing. For averages in between, perform the full analysis of McIntosh et al. (Anesth Analg 2006;103:1499-516).
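
The staffing comparison described above — inefficiency as under-utilized hours plus over-utilized hours weighted by the relative cost of over-run to under-run, expected over the distribution of actual list durations — can be simulated. This sketch uses only a normal duration distribution with illustrative parameters, not the paper's full Monte-Carlo design (which also used Weibull distributions).

```python
import numpy as np

rng = np.random.default_rng(7)

def inefficiency(staffed_h, mean_h, sd_h, cost_ratio=2.0, trials=200_000):
    """Expected inefficiency of use of OR time: under-utilized hours plus
    over-utilized hours weighted by the relative cost of over-run."""
    actual = np.clip(rng.normal(mean_h, sd_h, trials), 0.0, None)
    under = np.clip(staffed_h - actual, 0.0, None)   # staff idle at end of list
    over = np.clip(actual - staffed_h, 0.0, None)    # list over-runs staffing
    return float(np.mean(under + cost_ratio * over))

mean_h, sd_h = 8 + 40 / 60, 1.0   # mean 8 h 40 min, SD 60 min
for staffed in (8, 10):
    print(f"staffing {staffed:2d} h: inefficiency = {inefficiency(staffed, mean_h, sd_h):.2f} h")
```

With these parameters the 10-h staffing plan comes out ahead, consistent with the paper's finding that the naive 8 h 40 min cutoff performs poorly near the break-even region.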

  17. 7 CFR 31.400 - Samples for wool and wool top grades; method of obtaining.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... average and standard deviation of fiber diameter of the bulk sample are within the limits corresponding to... MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS PURCHASE OF WOOL AND WOOL TOP SAMPLES § 31.400 Samples for wool...

  18. [Determination of acetochlor and oxyfluorfen by capillary gas chromatography].

    PubMed

    Xiang, Wen-Sheng; Wang, Xiang-Jing; Wang, Jing; Wang, Qing

    2002-09-01

    A method is described for the determination of acetochlor and oxyfluorfen by capillary gas chromatography with FID and an SE-30 capillary column (60 m x 0.53 mm i. d., 1.5 microm), using dibutyl phthalate as the internal standard. The standard deviations for acetochlor and oxyfluorfen concentration(mass fraction) were 0.44% and 0.47% respectively. The relative standard deviations for acetochlor and oxyfluorfen were 0.79% and 0.88% and the average recoveries for acetochlor and oxyfluorfen were 99.3% and 101.1% respectively. The method is simple, rapid and accurate.

  19. Multi-focus image fusion based on area-based standard deviation in dual tree contourlet transform domain

    NASA Astrophysics Data System (ADS)

    Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin

    2018-04-01

Multiresolution-based methods, such as wavelet and Contourlet transforms, are commonly used for image fusion. This work presents a new image fusion framework utilizing area-based standard deviation in the dual tree Contourlet transform domain. Firstly, the pre-registered source images are decomposed with the dual tree Contourlet transform; low-pass and high-pass coefficients are obtained. Then, the low-pass bands are fused with a weighted average based on area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and it performs better in both subjective and objective evaluation.
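
The low-pass fusion rule described — a weighted average whose weights come from each band's area (local-window) standard deviation instead of plain averaging — can be sketched as below. The window radius, the tiny stabilizer, and the toy bands are illustrative assumptions; the full method also requires the dual tree Contourlet decomposition, which is omitted here.

```python
import numpy as np

def local_std(img, radius=1):
    """Standard deviation over a (2r+1)x(2r+1) neighborhood of each pixel,
    with edge padding at the borders."""
    padded = np.pad(img, radius, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1].std()
    return out

def fuse_lowpass(a, b, radius=1):
    """Fuse two low-pass bands: per-pixel weights proportional to each
    band's area standard deviation, so locally busier detail dominates."""
    sa, sb = local_std(a, radius), local_std(b, radius)
    w = sa / (sa + sb + 1e-12)   # stabilizer avoids 0/0 in flat regions
    return w * a + (1 - w) * b

# toy low-pass bands: one with a sharp edge, one uniformly blurred
a = np.array([[0., 0., 10.], [0., 0., 10.], [0., 0., 10.]])
b = np.full((3, 3), 3.0)
print(fuse_lowpass(a, b))
```

Near the edge in `a` the weight goes to `a` (high local variance); in flat regions the output falls back to `b`, which is the behavior that distinguishes this rule from simple averaging.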

  20. Design and preliminary assessment of Vanderbilt hand exoskeleton.

    PubMed

    Gasser, Benjamin W; Bennett, Daniel A; Durrough, Christina M; Goldfarb, Michael

    2017-07-01

This paper presents the design of a hand exoskeleton intended to enable or facilitate bimanual activities of daily living (ADLs) for individuals with chronic upper extremity hemiparesis resulting from stroke. The paper describes design of the battery-powered, self-contained exoskeleton and presents the results of initial testing with a single subject with hemiparesis from stroke. Specifically, an experiment was conducted requiring the subject to repeatedly remove the lid from a water bottle both with and without the hand exoskeleton. The time required to remove the lid was considerably lower when using the exoskeleton. Specifically, the average amount of time required to grasp the bottle with the paretic hand without the exoskeleton was 25.9 s, with a standard deviation of 33.5 s, while the corresponding average amount of time required to grasp the bottle with the exoskeleton was 5.1 s, with a standard deviation of 1.9 s. Thus, the task time involving the paretic hand was reduced by a factor of five, while the standard deviation was reduced by a factor of 16.

  1. Statistical models for estimating daily streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. 
The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
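
A composite estimate of the kind described — a weighted average of a forward TFN forecast and an ARIMA backcast of the reverse-ordered series across a gap of missing record — can be sketched with linearly varying weights. The linear weighting is an assumed scheme for illustration; the report's actual weights may differ.

```python
import numpy as np

def composite_estimate(forecast, backcast):
    """Blend a forward forecast with a backcast across a gap: the forecast
    dominates near the start of the gap, the backcast near the end."""
    forecast = np.asarray(forecast, dtype=float)
    backcast = np.asarray(backcast, dtype=float)
    w = np.linspace(1.0, 0.0, len(forecast))   # linearly varying weights (assumed)
    return w * forecast + (1.0 - w) * backcast

# hypothetical log-flow estimates across a 4-day gap
est = composite_estimate([10, 11, 12, 13], [9, 10, 12, 14])
print(est)
```

At the gap boundaries the composite matches the respective forecast exactly, giving the gradual transition between estimated and measured flows noted above; the largest disagreement between the two sources, and hence the largest composite error, tends to sit near the center of the gap.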

  2. 22nd Annual National Test and Evaluation Conference

    DTIC Science & Technology

    2006-03-09


  3. Mars-Gram Validation with Mars Global Surveyor Data

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Johnson, D.; Parker, Nelson C. (Technical Monitor)

    2002-01-01

    Mars Global Reference Atmospheric Model (Mars-GRAM 2001) is an engineering-level Mars atmosphere model widely used for many Mars mission applications. From 0-80 km, it is based on NASA Ames Mars General Circulation Model (MGCM), while above 80 km it is based on University of Arizona Mars Thermospheric General Circulation Model. Mars-GRAM 2001 and MGCM use surface topography from Mars Global Surveyor Mars Orbiting Laser Altimeter (MOLA). Validation studies are described comparing Mars-GRAM with Mars Global Surveyor Radio Science (RS) and Thermal Emission Spectrometer (TES) data. RS data from 2480 profiles were used, covering latitudes 75deg S to 72deg N, surface to approx. 40 km, for seasons ranging from areocentric longitude of Sun (Ls) = 70-160deg and 265-310deg. RS data spanned a range of local times, mostly 0-9 hours and 18-24 hours. For interests in aerocapture and precision landing, comparisons concentrated on atmospheric density. At a fixed height of 20 km, measured RS density varied by about a factor of 2.5 over the range of latitudes and Ls values observed. Evaluated at matching positions and times, average RS/Mars-GRAM density ratios were generally 1+/-0.05, except at heights above approx. 25 km and latitudes above approx. 50deg N. Average standard deviation of RS/Mars-GRAM density ratio was 6%. TES data were used covering surface to approx. 40 km, over more than a full Mars year (February, 1999 - June, 2001, just before start of Mars global dust storm). Depending on season, TES data covered latitudes 85deg S to 85deg N. Most TES data were concentrated near local times 2 hours and 14 hours. Observed average TES/Mars-GRAM density ratios were generally 1+/-0.05, except at high altitudes (15-30 km, depending on season) and high latitudes (> 45deg N), or at most altitudes in the southern hemisphere at Ls approx. 90 and 180deg. Compared to TES averages for a given latitude and season, TES data had average density standard deviation about the mean of approx. 
6.5-10.5% (varying with height) for all data, or approx. 5-12%, depending on time of day and dust optical depth. Average standard deviation of TES/Mars-GRAM density ratio was 8.9% for local time 2 hours and 7.1% for local time 14 hours. Thus standard deviation of observed TES/Mars-GRAM density ratio, evaluated at matching positions and times, is about the same as the standard deviation of TES data about the TES mean value at a given position and season.

  4. Evaluation of a Test Article in the Salmonella typhimurium/Escherichia coli Plate Incorporation Mutation Assay in the Presence and Absence of Induced Rat Liver S-9. Test Article: N,N,N’,N’-tetramethyl ethanediamine (TMEDA)

    DTIC Science & Technology

    2008-06-12

    [Fragmentary vehicle-control tables: average revertant counts with standard deviation, minimum value, maximum value, and N for S. typhimurium and E. coli with DMSO, corn oil, acetone, and saline vehicles; the full tables are not recoverable from this extraction.]

  5. Aerosol Measurements in the Mid-Atlantic: Trends and Uncertainty

    NASA Astrophysics Data System (ADS)

    Hains, J. C.; Chen, L. A.; Taubman, B. F.; Dickerson, R. R.

    2006-05-01

    Elevated levels of PM2.5 are associated with cardiovascular and respiratory problems and even increased mortality rates. In 2002 we ran two commonly used PM2.5 speciation samplers (an IMPROVE sampler and an EPA sampler) in parallel at Fort Meade, Maryland (a suburban site located in the Baltimore- Washington urban corridor). The filters were analyzed at different labs. This experiment allowed us to calculate the 'real world' uncertainties associated with these instruments. The EPA method retrieved a January average PM2.5 mass of 9.3 μg/m3 with a standard deviation of 2.8 μg/m3, while the IMPROVE method retrieved an average mass of 7.3 μg/m3 with a standard deviation of 2.1 μg/m3. The EPA method retrieved a July average PM2.5 mass of 26.4 μg/m3 with a standard deviation of 14.6 μg/m3, while the IMPROVE method retrieved an average mass of 23.3 μg/m3 with a standard deviation of 13.0 μg/m3. We calculated a 5% uncertainty associated with the EPA and IMPROVE methods that accounts for uncertainties in flow control strategies and laboratory analysis. The RMS difference between the two methods in January was 2.1 μg/m3, which is about 25% of the monthly average mass and greater than the uncertainty we calculated. In July the RMS difference between the two methods was 5.2 μg/m3, about 20% of the monthly average mass, and greater than the uncertainty we calculated. The EPA methods retrieve consistently higher concentrations of PM2.5 than the IMPROVE methods on a daily basis in January and July. This suggests a systematic bias possibly resulting from contamination of either of the sampling methods. We reconstructed the mass and found that both samplers have good correlation between reconstructed and gravimetric mass, though the IMPROVE method has slightly better correlation than the EPA method. In January, organic carbon is the largest contributor to PM2.5 mass, and in July both sulfate and organic matter contribute substantially to PM2.5. 
Source apportionment models suggest that regional and local power plants are the major sources of sulfate, while mobile and vegetative burning factors are the major sources of organic carbon.
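The sampler comparison above can be reproduced with a short sketch (hypothetical paired daily values, not the Fort Meade data):

```python
import numpy as np

# Hypothetical collocated daily PM2.5 masses (ug/m^3) from the two samplers.
epa = np.array([8.1, 10.2, 12.5, 7.4, 9.9])
improve = np.array([6.5, 8.8, 10.1, 6.0, 8.2])

# RMS of paired daily differences, and its size relative to the mean mass.
rms_diff = np.sqrt(np.mean((epa - improve) ** 2))
rel_rms = rms_diff / epa.mean()
```

Comparing `rel_rms` to the instruments' quoted uncertainty is what flags a systematic bias: if the RMS difference exceeds what flow-control and laboratory uncertainties can explain, the discrepancy is not random noise.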

  6. Effects of insertion speed and trocar stiffness on the accuracy of needle position for brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGill, Carl S.; Schwartz, Jonathon A.; Moore, Jason Z.

    2012-04-15

    Purpose: In prostate brachytherapy, accurate positioning of the needle tip to place radioactive seeds at the target site is critical for successful radiation treatment. During the procedure, needle deflection leads to seed misplacement and a suboptimal radiation dose to cancerous cells. In practice, radiation oncologists commonly use high-speed hand needle insertion to minimize displacement of the prostate as well as needle deflection. The effects of needle insertion speed and of the stiffness of the trocar (a solid rod inside the hollow cannula) on needle deflection are studied. Methods: Needle insertion experiments into phantom were performed using a 2^2 factorial design (2 parameters at 2 levels), with each condition having replicates. Analysis of the deflection data included calculating the average, standard deviation, and analysis of variance (ANOVA) to find significant single and two-way interaction factors. Results: The stiffer tungsten carbide trocar is effective in reducing the average and standard deviation of needle deflection. The fast insertion speed together with the stiffer trocar generated the smallest average and standard deviation of needle deflection in almost all cases. Conclusions: The combination of a stiff tungsten carbide trocar and fast needle insertion speed is important for decreasing needle deflection. The knowledge gained from this study can be used to improve the accuracy of needle insertion during brachytherapy procedures.
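A 2^2 factorial analysis of this kind can be sketched as follows (hypothetical deflection data, not the study's measurements; main effects computed as differences of level averages):

```python
import numpy as np

# Hypothetical needle-deflection data (mm), 3 replicates per condition.
# Factors: insertion speed (slow/fast) and trocar (standard/stiff).
data = {
    ("slow", "standard"): [2.9, 3.1, 3.0],
    ("slow", "stiff"):    [2.1, 2.0, 2.2],
    ("fast", "standard"): [2.4, 2.5, 2.3],
    ("fast", "stiff"):    [1.4, 1.5, 1.6],
}

cell_means = {k: np.mean(v) for k, v in data.items()}

# Main effect of each factor: average at the high level minus average
# at the low level, pooled over the other factor.
speed_eff = (np.mean([cell_means[("fast", t)] for t in ("standard", "stiff")])
             - np.mean([cell_means[("slow", t)] for t in ("standard", "stiff")]))
trocar_eff = (np.mean([cell_means[(s, "stiff")] for s in ("slow", "fast")])
              - np.mean([cell_means[(s, "standard")] for s in ("slow", "fast")]))
```

With these made-up numbers both effects are negative, i.e., fast insertion and the stiff trocar each reduce mean deflection, mirroring the direction of the reported findings.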

  7. Role of the standard deviation in the estimation of benchmark doses with continuous data.

    PubMed

    Gaylor, David W; Slikker, William

    2004-12-01

    For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
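The bias described above can be illustrated with a small simulation (illustrative parameter values; variable names are mine). The overall standard deviation of animal averages is inflated by measurement error, and a method-of-moments correction recovers the between-animal component:

```python
import numpy as np

rng = np.random.default_rng(0)
n_animals, n_reps = 200, 3
s_a, s_m = 1.0, 0.6   # between-animal SD and within-animal measurement SD

# Simulate repeated measurements on each animal.
true_vals = rng.normal(10.0, s_a, n_animals)
measured = true_vals[:, None] + rng.normal(0.0, s_m, (n_animals, n_reps))
animal_means = measured.mean(axis=1)

# Naive SD of animal averages: inflated to ~sqrt(s_a^2 + s_m^2 / n_reps).
sd_overall = animal_means.std(ddof=1)

# Pooled within-animal SD estimates s_m from the replicates.
s_m_hat = np.sqrt(measured.var(axis=1, ddof=1).mean())

# Method-of-moments correction recovering the between-animal SD s_a.
s_a_hat = np.sqrt(max(animal_means.var(ddof=1) - s_m_hat**2 / n_reps, 0.0))
```

Using `sd_overall` in place of `s_a_hat` overstates the spread among animals, which in turn overestimates the benchmark dose and understates risk, exactly the bias the article warns about.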

  8. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...

  9. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...

  10. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...

  11. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...

  12. Minimizing the Standard Deviation of Spatially Averaged Surface Cross-Sectional Data from the Dual-Frequency Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Kim, Hyokyung

    2016-01-01

    For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from the measurements of the normalized surface cross section, sigma 0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the Global Precipitation Measurement satellite, the nominal table consists of the statistics of the rain-free sigma 0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step the cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
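A minimal sketch of the stepwise cell-expansion idea (hypothetical grid and function names; the operational LUT construction is more involved):

```python
import numpy as np

def expand_region(grid, start, min_samples):
    """Greedy sketch: starting from one cell, repeatedly annex the
    neighboring cell whose addition gives the lowest pooled variance,
    until the minimum sample count is met. `grid` maps (row, col) -> samples."""
    region = {start}
    samples = list(grid[start])
    while len(samples) < min_samples:
        # Candidate cells adjacent to the current region (4-neighborhood).
        cands = {(r + dr, c + dc) for r, c in region
                 for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))} - region
        cands = [c for c in cands if c in grid]
        if not cands:
            break
        best = min(cands, key=lambda c: np.var(samples + list(grid[c])))
        region.add(best)
        samples.extend(grid[best])
    return region, float(np.std(samples))

# Toy sigma-0 samples per 0.25-deg cell (made-up values): the greedy step
# annexes the similar neighbor (0, 1) rather than the dissimilar (1, 0).
grid = {(0, 0): [1.0, 1.1], (0, 1): [1.05], (1, 0): [5.0, 5.2]}
region, sd = expand_region(grid, (0, 0), min_samples=3)
```

The abstract's open question is whether this greedy path reaches the globally minimal variance; the exhaustive variable-averaging grid answers that by checking all spatial configurations.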

  13. Posttraumatic stress disorder and dementia in Holocaust survivors.

    PubMed

    Sperling, Wolfgang; Kreil, Sebastian Konstantin; Biermann, Teresa

    2011-03-01

    The incidence of mental and somatic sequelae has been shown to be very high among people harmed by the Holocaust. Within the context of internal research, 93 Holocaust survivors suffering from posttraumatic stress disorder were examined. Patients suffered on average from 4.5 (standard deviation ± 1.8) somatic diagnoses as well as 1.8 (standard deviation ± 0.5) psychiatric diagnoses. A diagnosis of dementia was ascertained according to ICD-10 criteria in 14%. Vascular dementia (66%) dominated over Alzheimer's dementia (23%) and other subtypes (11%).

  14. Remote auditing of radiotherapy facilities using optically stimulated luminescence dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lye, Jessica, E-mail: jessica.lye@arpansa.gov.au; Dunn, Leon; Kenny, John

    Purpose: On 1 July 2012, the Australian Clinical Dosimetry Service (ACDS) released its Optically Stimulated Luminescent Dosimeter (OSLD) Level I audit, replacing the previous TLD based audit. The aim of this work is to present the results from this new service and the complete uncertainty analysis on which the audit tolerances are based. Methods: The audit release was preceded by a rigorous evaluation of the InLight® nanoDot OSLD system from Landauer (Landauer, Inc., Glenwood, IL). Energy dependence, signal fading from multiple irradiations, batch variation, reader variation, and dose response factors were identified and quantified for each individual OSLD. The detectors are mailed to the facility in small PMMA blocks, based on the design of the existing Radiological Physics Centre audit. Modeling and measurement were used to determine a factor that could convert the dose measured in the PMMA block to dose in water for the facility's reference conditions. This factor is dependent on the beam spectrum. The TPR{sub 20,10} was used as the beam quality index to determine the specific block factor for a beam being audited. The audit tolerance was defined using a rigorous uncertainty calculation. The audit outcome is then determined using a scientifically based two-tiered action-level approach. Audit outcomes within two standard deviations were defined as Pass (Optimal Level), within three standard deviations as Pass (Action Level), and outside of three standard deviations the outcome is Fail (Out of Tolerance). Results: To date, the ACDS has audited 108 photon beams with TLD and 162 photon beams with OSLD. The TLD audit results had an average deviation from ACDS of 0.0% and a standard deviation of 1.8%. The OSLD audit results had an average deviation of −0.2% and a standard deviation of 1.4%. The relative combined standard uncertainty was calculated to be 1.3% (1σ). 
Pass (Optimal Level) was reduced to ≤2.6% (2σ), and Fail (Out of Tolerance) was reduced to >3.9% (3σ) for the new OSLD audit. Previously, with the TLD audit, the Pass (Optimal Level) and Fail (Out of Tolerance) levels were set at ≤4.0% (2σ) and >6.0% (3σ). Conclusions: The calculated standard uncertainty of 1.3% at one standard deviation is consistent with the measured standard deviation of 1.4% from the audits, confirming the suitability of the audit tolerances derived from the uncertainty budget. The OSLD audit shows greater accuracy than the previous TLD audit, justifying the reduction in audit tolerances. In the TLD audit, all outcomes were Pass (Optimal Level), suggesting that the tolerances were too conservative. In the OSLD audit, 94% of the audits have resulted in Pass (Optimal Level) and 6% in Pass (Action Level). All Pass (Action Level) results have been resolved with a repeat OSLD audit or an on-site ion chamber measurement.
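The two-tiered action-level logic can be sketched as follows, assuming the quoted 1.3% combined standard uncertainty (the function name is mine):

```python
def audit_outcome(deviation_pct, sigma=1.3):
    """Two-tiered action-level sketch: deviations within 2 sigma pass
    outright, within 3 sigma pass at the action level, and beyond 3 sigma
    fail. sigma=1.3(%) is the combined standard uncertainty quoted for
    the OSLD audit."""
    d = abs(deviation_pct)
    if d <= 2 * sigma:
        return "Pass (Optimal Level)"
    if d <= 3 * sigma:
        return "Pass (Action Level)"
    return "Fail (Out of Tolerance)"
```

With sigma = 1.3%, the thresholds fall at 2.6% and 3.9%, matching the tolerances stated for the OSLD audit.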

  15. [A new kinematics method of determining the elbow rotation axis and evaluation of its feasibility].

    PubMed

    Han, W; Song, J; Wang, G Z; Ding, H; Li, G S; Gong, M Q; Jiang, X Y; Wang, M Y

    2016-04-18

    To study a new method for positioning the rotation axis of elbow external fixation, and to evaluate its feasibility. Four normal adult volunteers and six Sawbone elbow models were included in the experiment. Kinematic data of five elbow flexion movements were collected with an optical positioning system. The rotation axes of the elbow joints were fitted by the least-squares method, and the kinematic data and fitting results were displayed visually. From the fitting results, the average moving planes and rotation axes were calculated, yielding the rotation axes of the new kinematic method. Using standard clinical methods, the entrance and exit points of the rotation axes of the six Sawbone elbow models were located under X-ray, and Kirschner wires were placed to represent the rotation axes determined by the traditional positioning method. The entrance point deviation, exit point deviation, and angle deviation of the two located rotation axes were then compared. For the four volunteers, the indicators representing the circularity and coplanarity of each volunteer's elbow flexion movement trajectory were both about 1 mm. All distance deviations of the moving axes from the average moving rotation axes of the five volunteers were less than 3 mm, and all angle deviations were less than 5°. For the six Sawbone models, the average entrance point deviation, average exit point deviation, and average angle deviation between the rotation axes determined by the two methods were 1.6972 mm, 1.8383 mm, and 1.3217°, respectively. All deviations were small and within a range acceptable in clinical practice. The values representing the circularity and coplanarity of the volunteers' single-curvature elbow movement trajectories are very small. 
The results show that single-curvature elbow movement can be regarded as approximately a fixed-axis movement. The new method matches the traditional method in accuracy and can make up for the deficiencies of the traditional fixed-axis method.

  16. A study of respiration-correlated cone-beam CT scans to correct target positioning errors in radiotherapy of thoracic cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santoro, J. P.; McNamara, J.; Yorke, E.

    2012-10-15

    Purpose: There is increasingly widespread usage of cone-beam CT (CBCT) for guiding radiation treatment in advanced-stage lung tumors, but difficulties associated with daily CBCT in conventionally fractionated treatments include imaging dose to the patient, increased workload and longer treatment times. Respiration-correlated cone-beam CT (RC-CBCT) can improve localization accuracy in mobile lung tumors, but further increases the time and workload for conventionally fractionated treatments. This study investigates whether RC-CBCT-guided correction of systematic tumor deviations in standard fractionated lung tumor radiation treatments is more effective than 2D image-based correction of skeletal deviations alone. A second study goal compares respiration-correlated vs respiration-averaged images for determining tumor deviations. Methods: Eleven stage II-IV nonsmall cell lung cancer patients are enrolled in an IRB-approved prospective off-line protocol using RC-CBCT guidance to correct for systematic errors in GTV position. Patients receive a respiration-correlated planning CT (RCCT) at simulation, daily kilovoltage RC-CBCT scans during the first week of treatment and weekly scans thereafter. Four types of correction methods are compared: (1) systematic error in gross tumor volume (GTV) position, (2) systematic error in skeletal anatomy, (3) daily skeletal corrections, and (4) weekly skeletal corrections. The comparison is in terms of a weighted average of the residual GTV deviations measured from the RC-CBCT scans, representing the estimated residual deviation over the treatment course. In the second study goal, GTV deviations computed from matching RCCT and RC-CBCT are compared to deviations computed from matching respiration-averaged images consisting of a CBCT reconstructed using all projections and an average-intensity-projection CT computed from the RCCT. 
Results: Of the eleven patients in the GTV-based systematic correction protocol, two required no correction, seven required a single correction, one required two corrections, and one required three corrections. Mean residual GTV deviation (3D distance) following GTV-based systematic correction (mean {+-} 1 standard deviation 4.8 {+-} 1.5 mm) is significantly lower than for systematic skeletal-based (6.5 {+-} 2.9 mm, p = 0.015) and weekly skeletal-based correction (7.2 {+-} 3.0 mm, p = 0.001), but is not significantly lower than daily skeletal-based correction (5.4 {+-} 2.6 mm, p = 0.34). In two cases, first-day CBCT images reveal tumor changes (one showing tumor growth, the other a large tumor displacement) that are not readily observed in radiographs. Differences in computed GTV deviations between respiration-correlated and respiration-averaged images are 0.2 {+-} 1.8 mm in the superior-inferior direction and are of similar magnitude in the other directions. Conclusions: An off-line protocol to correct GTV-based systematic error in locally advanced lung tumor cases can be effective at reducing tumor deviations, although the findings need confirmation with larger patient statistics. In some cases, a single cone-beam CT can be useful for assessing tumor changes early in treatment, if more than a few days elapse between simulation and the start of treatment. Tumor deviations measured with respiration-averaged CT and CBCT images are consistent with those measured with respiration-correlated images; the respiration-averaged method is more easily implemented in the clinic.

  17. Persistence of depressive symptoms and gait speed recovery in older adults after hip fracture.

    PubMed

    Rathbun, Alan M; Shardell, Michelle D; Stuart, Elizabeth A; Gruber-Baldini, Ann L; Orwig, Denise; Ostir, Glenn V; Hicks, Gregory E; Hochberg, Marc C; Magaziner, Jay

    2018-07-01

    Depression after hip fracture in older adults is associated with worse physical performance; however, depressive symptoms are dynamic, fluctuating during the recovery period. The study aim was to determine how the persistence of depressive symptoms over time cumulatively affects the recovery of physical performance. Marginal structural models estimated the cumulative effect of persistence of depressive symptoms on gait speed during hip fracture recovery among older adults (n = 284) enrolled in the Baltimore Hip Studies 7th cohort. Depressive symptoms at baseline and at 2-month and 6-month postadmission for hip fracture were evaluated by using the Center for Epidemiological Studies Depression Scale, and persistence of symptoms was assessed as a time-averaged severity lagged to standardized 3 m gait speed at 2, 6, and 12 months. A 1-unit increase in time-averaged Center for Epidemiological Studies Depression score was associated with a mean difference in gait speed of -0.0076 standard deviations (95% confidence interval [CI]: -0.0184, 0.0032; P = .166). The association was largest in magnitude from baseline to 6 months: -0.0144 standard deviations (95% CI: -0.0303, 0.0015; P = 0.076). Associations for the other time intervals were smaller: -0.0028 standard deviations (95% CI: -0.0138, 0.0083; P = .621) at 2 months and -0.0121 standard deviations (95% CI: -0.0324, 0.0082; P = .238) at 12 months. Although not statistically significant, the magnitude of the numerical estimates suggests that expressing more depressive symptoms during the first 6 months after hip fracture has a meaningful impact on functional recovery. Copyright © 2018 John Wiley & Sons, Ltd.

  18. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... donation rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...

  19. Non-Contact Determination of Antisymmetric Plate Wave Velocity in Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Kautz, Harold E.

    1996-01-01

    A 13 mJ Nd:YAG 1064 nm, 4 ns laser pulse was employed to produce ultrasonic plate waves in 20 percent porous SiC/SiC composite tensile specimens of three different architectures. An air-coupled 0.5 MHz transducer was used to detect and collect the waveforms, which contained first antisymmetric plate wave pulses for determining the shear wave velocity (VS). These results were compared to VS values determined on the same specimens with 0.5 MHz ultrasonic transducers with contact coupling. Averages of four noncontact determinations on each of 18 specimens were compared to averages of four contact values. The noncontact VS's fall in the same range as the contact values. The standard deviations for the noncontact VS's averaged 2.8 percent; those for the contact measurements averaged 2.3 percent, indicating similar reproducibility. Repeated laser pulsing at the same location always led to deterioration of the ultrasonic signal. The signal would recover in about 24 hr in air, however, indicating that no permanent damage was produced.

  20. Stability Analysis of Receiver ISB for BDS/GPS

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Hao, J. M.; Tian, Y. G.; Yu, H. L.; Zhou, Y. L.

    2017-07-01

    Stability analysis of receiver ISB (Inter-System Bias) is essential for understanding the feature of ISB as well as the ISB modeling and prediction. In order to analyze the long-term stability of ISB, the data from MGEX (Multi-GNSS Experiment) covering 3 weeks, which are from 2014, 2015 and 2016 respectively, are processed with the precise satellite clock and orbit products provided by Wuhan University and GeoForschungsZentrum (GFZ). Using the ISB calculated by BDS (BeiDou Navigation Satellite System)/GPS (Global Positioning System) combined PPP (Precise Point Positioning), the daily stability and weekly stability of ISB are investigated. The experimental results show that the diurnal variation of ISB is stable, and the average of daily standard deviation is about 0.5 ns. The weekly averages and standard deviations of ISB vary greatly in different years. The weekly averages of ISB are relevant to receiver types. There is a system bias between ISB calculated from the precise products provided by Wuhan University and GFZ. In addition, the system bias of the weekly average ISB of different stations is consistent with each other.

  1. Distribution Development for STORM Ingestion Input Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulton, John

    The Sandia-developed Transport of Radioactive Materials (STORM) code suite is used as part of the Radioisotope Power System Launch Safety (RPSLS) program to perform statistical modeling of the consequences due to release of radioactive material given a launch accident. As part of this modeling, STORM samples input parameters from probability distributions, with some parameters treated as constants. This report describes the work done to convert four of these constant inputs (Consumption Rate, Average Crop Yield, Cropland to Landuse Database Ratio, and Crop Uptake Factor) to sampled values. Consumption Rate changed from a constant value of 557.68 kg/yr to a normal distribution with a mean of 102.96 kg/yr and a standard deviation of 2.65 kg/yr. Average Crop Yield changed from a constant value of 3.783 kg edible/m^2 to a normal distribution with a mean of 3.23 kg edible/m^2 and a standard deviation of 0.442 kg edible/m^2. The Cropland to Landuse Database Ratio changed from a constant value of 0.0996 (9.96%) to a normal distribution with a mean of 0.0312 (3.12%) and a standard deviation of 0.00292 (0.29%). Finally, the Crop Uptake Factor changed from a constant value of 6.37e-4 (Bq crop/kg)/(Bq soil/kg) to a lognormal distribution with a geometric mean of 3.38e-4 (Bq crop/kg)/(Bq soil/kg) and a geometric standard deviation of 3.33.
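The conversion from constants to sampled inputs can be sketched with NumPy (interpreting the reported 3.33 for the lognormal as a geometric standard deviation is an assumption; the report's sampler is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000   # number of Monte Carlo draws

# Normal inputs, using the means and standard deviations given above.
consumption = rng.normal(102.96, 2.65, n)        # kg/yr
crop_yield = rng.normal(3.23, 0.442, n)          # kg edible/m^2
cropland_ratio = rng.normal(0.0312, 0.00292, n)  # dimensionless

# Lognormal input: numpy parameterizes by the mean and SD of the
# underlying normal, i.e., log(geometric mean) and log(geometric SD).
uptake = rng.lognormal(np.log(3.38e-4), np.log(3.33), n)
```

The median of the lognormal draws recovers the geometric mean, which is a quick sanity check on the parameterization.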

  2. TU-G-BRD-04: A Round Robin Dosimetry Intercomparison of Gamma Stereotactic Radiosurgery Calibration Protocols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drzymala, R; Alvarez, P; Bednarz, G

    2015-06-15

    Purpose: The purpose of this multi-institutional study was to compare two new gamma stereotactic radiosurgery (GSRS) dosimetry protocols to existing calibration methods. The ultimate goal was to guide AAPM Task Group 178 in recommending a standard GSRS dosimetry protocol. Methods: Nine centers (ten GSRS units) participated in the study. Each institution made eight sets of dose rate measurements: six with two different ionization chambers in three different 160mm-diameter spherical phantoms (ABS plastic, Solid Water and liquid water), and two using the same ionization chambers with a custom in-air positioning jig. Absolute dose rates were calculated using a newly proposed formalism by the IAEA working group for small and non-standard radiation fields and with a new air-kerma based protocol. The new IAEA protocol requires an in-water ionization chamber calibration and uses previously reported Monte-Carlo generated factors to account for the material composition of the phantom, the type of ionization chamber, and the unique GSRS beam configuration. Results obtained with the new dose calibration protocols were compared to dose rates determined by the AAPM TG-21 and TG-51 protocols, with TG-21 considered as the standard. Results: Averaged over all institutions, ionization chambers and phantoms, the mean dose rate determined with the new IAEA protocol relative to that determined with TG-21 in the ABS phantom was 1.000 with a standard deviation of 0.008. For TG-51, the average ratio was 0.991 with a standard deviation of 0.013, and for the new in-air formalism it was 1.008 with a standard deviation of 0.012. Conclusion: Average results with both of the new protocols agreed with TG-21 to within one standard deviation. TG-51, which does not take into account the unique GSRS beam configuration or phantom material, was not expected to perform as well as the new protocols. The new IAEA protocol showed remarkably good agreement with TG-21. 
Conflict of Interests: Paula Petti, Josef Novotny, Gennady Neyman and Steve Goetsch are consultants for Elekta Instrument A/B; Elekta Instrument AB, PTW Freiburg GmbH, Standard Imaging, Inc., and The Phantom Laboratory, Inc. loaned equipment for use in these experiments; The University of Wisconsin Accredited Dosimetry Calibration Laboratory provided calibration services.

  3. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    PubMed

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating the mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions, and the corresponding average relative error (ARE) approaches zero as sample size increases. For data generated from the normal distribution, our ABC method also performs well, although the Wan et al. method is best for estimating the standard deviation in that case. In the estimation of the mean, our ABC method is best regardless of the assumed distribution. ABC is thus a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can also be applied using other reported summary statistics, such as the posterior mean and 95% credible interval when Bayesian analysis has been employed.
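
    As a rough illustration of the idea (not the authors' implementation), a minimal rejection-ABC sketch in Python can recover a mean and standard deviation from a reported {minimum, median, maximum} and sample size; the uniform priors and nearest-fraction acceptance rule here are assumptions chosen for brevity:

```python
import random
import statistics

def abc_estimate(obs_min, obs_med, obs_max, n,
                 n_sims=20000, keep_frac=0.01, seed=0):
    """Estimate (mean, sd) from {min, median, max} by rejection ABC.

    Draws (mu, sd) from broad uniform priors, simulates a normal sample
    of size n, and keeps the fraction of draws whose simulated summary
    statistics are closest to the observed ones.
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(n_sims):
        mu = rng.uniform(obs_min, obs_max)          # prior on the mean
        sd = rng.uniform(1e-6, obs_max - obs_min)   # prior on the sd
        sample = sorted(rng.gauss(mu, sd) for _ in range(n))
        sim = (sample[0], statistics.median(sample), sample[-1])
        dist = sum((a - b) ** 2
                   for a, b in zip(sim, (obs_min, obs_med, obs_max)))
        trials.append((dist, mu, sd))
    trials.sort()                                   # best-matching draws first
    keep = trials[:max(1, int(n_sims * keep_frac))]
    mu_hat = statistics.mean(m for _, m, _ in keep)
    sd_hat = statistics.mean(s for _, _, s in keep)
    return mu_hat, sd_hat
```

    A full ABC implementation would also simulate from the assumed parent family (e.g., skewed or heavy-tailed distributions), which is what gives the method its advantage over closed-form estimators.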

  4. Shear-stress fluctuations and relaxation in polymer glasses

    NASA Astrophysics Data System (ADS)

    Kriuchevskyi, I.; Wittmer, J. P.; Meyer, H.; Benzerara, O.; Baschnagel, J.

    2018-01-01

    We investigate by means of molecular dynamics simulation a coarse-grained polymer glass model, focusing on (quasistatic and dynamical) shear-stress fluctuations as a function of temperature T and sampling time Δt. The linear response is characterized using (ensemble-averaged) expectation values of the contributions (time averaged for each shear plane) to the stress-fluctuation relation μsf for the shear modulus and the shear-stress relaxation modulus G(t). Using 100 independent configurations, we pay attention to the respective standard deviations. While the ensemble-averaged modulus μsf(T) decreases continuously with increasing T for all Δt sampled, its standard deviation δμsf(T) is nonmonotonic with a striking peak at the glass transition. The question of whether the shear modulus is continuous or has a jump singularity at the glass transition is thus ill posed. Confirming the effective time-translational invariance of our systems, the Δt dependence of μsf and related quantities can be understood using a weighted integral over G(t).

  5. The production of calibration specimens for impact testing of subsize Charpy specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, D.J.; Corwin, W.R.; Owings, T.D.

    1994-09-01

    Calibration specimens have been manufactured for checking the performance of a pendulum impact testing machine that has been configured for testing subsize specimens, both half-size (5.0 × 5.0 × 25.4 mm) and third-size (3.33 × 3.33 × 25.4 mm). Specimens were fabricated from quenched-and-tempered 4340 steel heat treated to produce different microstructures that would result in either high or low absorbed energy levels on testing. A large group of both half- and third-size specimens were tested at −40°C. The results of the tests were analyzed for average value and standard deviation, and these values were used to establish calibration limits for the Charpy impact machine when testing subsize specimens. These average values plus or minus two standard deviations were set as the acceptable limits for the average of five tests for calibration of the impact testing machine.
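
    The acceptance rule described above (average of five verification tests within the reference mean ± 2 standard deviations) is easy to sketch; the energy values below are hypothetical, not from the report:

```python
import statistics

# Hypothetical absorbed-energy results (J) from reference specimens at -40 °C
reference = [8.1, 7.6, 8.4, 7.9, 8.0, 8.3, 7.7, 8.2]

mean = statistics.mean(reference)
sd = statistics.stdev(reference)
lower, upper = mean - 2 * sd, mean + 2 * sd   # calibration limits

def machine_in_calibration(five_tests):
    """True if the average of five verification tests falls within limits."""
    return lower <= statistics.mean(five_tests) <= upper
```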

  6. Strategies to Prevent MRSA Transmission in Community-Based Nursing Homes: A Cost Analysis.

    PubMed

    Roghmann, Mary-Claire; Lydecker, Alison; Mody, Lona; Mullins, C Daniel; Onukwugha, Eberechukwu

    2016-08-01

    OBJECTIVE To estimate the costs of 3 MRSA transmission prevention scenarios compared with standard precautions in community-based nursing homes. DESIGN Cost analysis of data collected from a prospective, observational study. SETTING AND PARTICIPANTS Care activity data from 401 residents from 13 nursing homes in 2 states. METHODS Cost components included the quantities of gowns and gloves, time to don and doff gown and gloves, and unit costs. Unit costs were combined with information regarding the type and frequency of care provided over a 28-day observation period. For each scenario, the estimated costs associated with each type of care were summed across all residents to calculate an average cost and standard deviation for the full sample and for subgroups. RESULTS The average cost for standard precautions was $100 (standard deviation [SD], $77) per resident over a 28-day period. If gown and glove use for high-risk care was restricted to those with MRSA colonization or chronic skin breakdown, average costs increased to $137 (SD, $120) and $125 (SD, $109), respectively. If gowns and gloves were used for high-risk care for all residents in addition to standard precautions, the average cost per resident increased substantially to $223 (SD, $127). CONCLUSIONS The use of gowns and gloves for high-risk activities with all residents increased the estimated cost by 123% compared with standard precautions. This increase was ameliorated if specific subsets (eg, those with MRSA colonization or chronic skin breakdown) were targeted for gown and glove use for high-risk activities. Infect Control Hosp Epidemiol 2016;37:962-966.

  7. The Perceptions of Standardized Tests, Academic Self-Efficacy, and Academic Performance of African American Graduate Students: a Correlational and Comparative Analysis

    ERIC Educational Resources Information Center

    Marrah, Arleezah K.

    2012-01-01

    The academic performance of African American students continues to be a concern for educators, researchers, and most importantly their community. This issue is particularly prevalent in the standardized test scores of African American students where they score on average one or more standard deviations below their Caucasian and Asian American…

  8. Combustion characteristics of paper and sewage sludge in a pilot-scale fluidized bed.

    PubMed

    Yu, Yong-Ho; Chung, Jinwook

    2015-01-01

    This study characterizes the combustion of paper and sewage sludge in a pilot-scale fluidized bed. The highest temperature during combustion within the system was found at the surface of the fluidized bed. Paper sludge containing roughly 59.8% water was burned without auxiliary fuel, but auxiliary fuel was required to incinerate the sewage sludge, which contained about 79.3% water. The stability of operation was monitored based on the average pressure and the standard deviation of pressure fluctuations. The average pressure at the surface of the fluidized bed decreased as the sludge feed rate increased, whereas the standard deviation of pressure fluctuations increased with the feed rate. Finally, carbon monoxide (CO) emissions decreased as oxygen content in the flue gas increased, and nitrogen oxide (NOx) emissions were likewise correlated with the oxygen content.

  9. Acoustic analysis of speech variables during depression and after improvement.

    PubMed

    Nilsonne, A

    1987-09-01

    Speech recordings were made of 16 depressed patients during depression and after clinical improvement. The recordings were analyzed using a computer program which extracts acoustic parameters from the fundamental frequency contour of the voice. The percent pause time, the standard deviation of the voice fundamental frequency distribution, the standard deviation of the rate of change of the voice fundamental frequency and the average speed of voice change were found to correlate to the clinical state of the patient. The mean fundamental frequency, the total reading time and the average rate of change of the voice fundamental frequency did not differ between the depressed and the improved group. The acoustic measures were more strongly correlated to the clinical state of the patient as measured by global depression scores than to single depressive symptoms such as retardation or agitation.

  10. A new algorithm to reduce noise in microscopy images implemented with a simple program in python.

    PubMed

    Papini, Alessio

    2012-03-01

    All microscopical images contain noise, which increases as the instrument (e.g., a transmission electron microscope or light microscope) approaches its resolution limit. Many methods are available to reduce noise; one of the most commonly used is image averaging. We propose here to use the mode of pixel values instead. Simple Python programs process a given number of images recorded consecutively from the same subject. The programs calculate the mode of the pixel values in a given position (a, b). The result is a new image containing in (a, b) the mode of the values; the final pixel value therefore corresponds to one read in at least two of the images at position (a, b). Applying the program to a set of images corrupted with salt-and-pepper noise and GIMP hurl noise at 10-90% standard deviation showed that the mode performs better than averaging with three to eight images. The data suggest that the mode is more efficient (in the sense of requiring fewer recorded images to reduce noise below a given limit) when the total number of noisy pixels is low and the standard deviation is high (as with impulse noise and salt-and-pepper noise), while averaging is more efficient when the number of varying pixels is high and the standard deviation is low, as in many images affected by Gaussian noise. The two methods may be used serially. Copyright © 2011 Wiley Periodicals, Inc.
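
    The per-pixel mode described above can be sketched in a few lines of Python with NumPy; this is a simplified stand-in for 8-bit image stacks, not the authors' program:

```python
import numpy as np

def mode_denoise(stack):
    """Per-pixel mode of a stack of consecutively recorded images.

    stack: uint8 array of shape (n_images, height, width).
    Returns an image whose pixel (a, b) is the most frequent value
    observed at (a, b) across the stack.
    """
    n, h, w = stack.shape
    flat = stack.reshape(n, -1)
    out = np.empty(h * w, dtype=stack.dtype)
    for i in range(h * w):  # plain loop kept for clarity, not speed
        out[i] = np.bincount(flat[:, i]).argmax()
    return out.reshape(h, w)
```

    With impulse noise, a pixel corrupted in only one of the frames is restored exactly, which matches the observation that the mode needs few frames when few pixels are noisy.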

  11. Comparison of Low Cost Photogrammetric Survey with Tls and Leica Pegasus Backpack 3d Modelss

    NASA Astrophysics Data System (ADS)

    Masiero, A.; Fissore, F.; Guarnieri, A.; Piragnolo, M.; Vettore, A.

    2017-11-01

    This paper considers Leica backpack and photogrammetric surveys of a mediaeval bastion in Padua, Italy. Furthermore, a terrestrial laser scanning (TLS) survey is considered in order to provide a state-of-the-art reconstruction of the bastion. Although control points are typically used to avoid deformations in photogrammetric surveys and to ensure correct scaling of the reconstruction, in this paper a different approach is considered: this work is part of a project aiming at the development of a system exploiting ultra-wide band (UWB) devices to provide correct scaling of the reconstruction. In particular, low-cost Pozyx UWB devices are used to estimate camera positions during image acquisitions. Then, in order to obtain a metric reconstruction, the scale factor in the photogrammetric survey is estimated by comparing camera positions obtained from UWB measurements with those obtained from the photogrammetric reconstruction. Compared with the TLS survey, the considered photogrammetric model of the bastion results in an RMSE of 21.9 cm, average error of 13.4 cm, and standard deviation of 13.5 cm. Excluding the final part of the bastion's left wing, where the presence of several poles makes reconstruction more difficult, the RMSE fitting error is 17.3 cm, average error 11.5 cm, and standard deviation 9.5 cm. In contrast, comparison of the Leica backpack and TLS surveys leads to an average error of 4.7 cm and standard deviation of 0.6 cm (4.2 cm and 0.3 cm, respectively, when excluding the final part of the left wing).
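
    For residual statistics of this kind, RMSE, mean and standard deviation satisfy RMSE² = mean² + variance whenever all three are computed from the same values; a small NumPy helper makes the convention explicit (the residuals below are illustrative, not the survey data):

```python
import numpy as np

def error_stats(residuals):
    """RMSE, mean absolute error and population standard deviation of
    cloud-to-cloud residuals (e.g., photogrammetric model vs. TLS)."""
    d = np.abs(np.asarray(residuals, dtype=float))
    rmse = float(np.sqrt(np.mean(d ** 2)))
    return rmse, float(d.mean()), float(d.std())
```

    The triplets quoted in the abstract do not satisfy this identity exactly, which hints that the reported mean and standard deviation follow a different convention (for instance, signed residuals or a sample-variance denominator).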

  12. Photometric Selection of a Massive Galaxy Catalog with z ≥ 0.55

    NASA Astrophysics Data System (ADS)

    Núñez, Carolina; Spergel, David N.; Ho, Shirley

    2017-02-01

    We present the development of a photometrically selected massive galaxy catalog, targeting Luminous Red Galaxies (LRGs) and massive blue galaxies at redshifts z ≥ 0.55. Massive galaxy candidates are selected using infrared/optical color-color cuts, with optical data from the Sloan Digital Sky Survey (SDSS) and infrared data from “unWISE” forced photometry derived from the Wide-field Infrared Survey Explorer (WISE). The selection method is based on previously developed techniques to select LRGs with z > 0.5, and is optimized using receiver operating characteristic curves. The catalog contains 16,191,145 objects, selected over the full SDSS DR10 footprint. The redshift distribution of the resulting catalog is estimated using spectroscopic redshifts from the DEEP2 Galaxy Redshift Survey and photometric redshifts from COSMOS. Rest-frame U-B colors from DEEP2 are used to estimate LRG selection efficiency. Using DEEP2, the resulting catalog has an average redshift of z = 0.65 with a standard deviation of σ = 2.0, and an average rest-frame color of U-B = 1.0 with a standard deviation of σ = 0.27. Using COSMOS, the resulting catalog has an average redshift of z = 0.60 with a standard deviation of σ = 1.8. We estimate 34% of the catalog to be blue galaxies with z ≥ 0.55. An estimated 9.6% of selected objects are blue sources with redshift z < 0.55. Stellar contamination is estimated to be 1.8%.

  13. The Cost of Uncertain Life Span*

    PubMed Central

    Edwards, Ryan D.

    2012-01-01

    A considerable amount of uncertainty surrounds the length of human life. The standard deviation in adult life span is about 15 years in the U.S., and theory and evidence suggest it is costly. I calibrate a utility-theoretic model of preferences over length of life and show that one fewer year in standard deviation is worth about half a mean life year. Differences in the standard deviation exacerbate cross-sectional differences in life expectancy between the U.S. and other industrialized countries, between rich and poor countries, and among poor countries. Accounting for the cost of life-span variance also appears to amplify recently discovered patterns of convergence in world average human well-being. This is partly for methodological reasons and partly because unconditional variance in human length of life, primarily the component due to infant mortality, has exhibited even more convergence than life expectancy. PMID:22368324

  14. Lead-lag relationships between stock and market risk within linear response theory

    NASA Astrophysics Data System (ADS)

    Borysov, Stanislav; Balatsky, Alexander

    2015-03-01

    We study historical correlations and lead-lag relationships between individual stock risks (standard deviation of daily stock returns) and market risk (standard deviation of daily returns of a market-representative portfolio) in the US stock market. We consider the cross-correlation functions averaged over stocks, using historical stock prices from the Standard & Poor's 500 index for 1994-2013. The observed historical dynamics suggests that the dependence between the risks was almost linear during the US stock market downturn of 2002 and after the US housing bubble in 2007, remaining at that level until 2013. Moreover, the averaged cross-correlation function often had an asymmetric shape with respect to zero lag in the periods of high correlation. We develop the analysis by the application of the linear response formalism to study underlying causal relations. The calculated response functions suggest the presence of characteristic regimes near financial crashes, when individual stock risks affect market risk and vice versa. This work was supported by VR 621-2012-2983.
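
    A lead-lag cross-correlation of the kind averaged over stocks here can be sketched with a generic estimator (this is not the authors' code):

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag].

    A positive lag probes whether x leads y; a negative lag, the reverse.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return float(np.corrcoef(x, y)[0, 1])
```

    An asymmetric shape of this function around zero lag, once averaged over stocks, is what signals a lead-lag relationship between individual stock risk and market risk.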

  15. Standard Deviation of Spatially-Averaged Surface Cross Section Data from the TRMM Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Jones, Jeffrey A.

    2010-01-01

    We investigate the spatial variability of the normalized radar cross section of the surface (NRCS or Sigma(sup 0)) derived from measurements of the TRMM Precipitation Radar (PR) for the period from 1998 to 2009. The purpose of the study is to understand the way in which the sample standard deviation of the Sigma(sup 0) data changes as a function of spatial resolution, incidence angle, and surface type (land/ocean). The results have implications regarding the accuracy by which the path integrated attenuation from precipitation can be inferred by the use of surface scattering properties.

  16. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  17. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  18. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  19. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  20. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  1. Development and verification of a novel device for dental intra-oral 3D scanning using chromatic confocal technology

    NASA Astrophysics Data System (ADS)

    Zint, M.; Stock, K.; Graser, R.; Ertl, T.; Brauer, E.; Heyninck, J.; Vanbiervliet, J.; Dhondt, S.; De Ceuninck, P.; Hibst, R.

    2015-03-01

    The presented work describes the development and verification of a novel optical, powder-free intra-oral scanner based on chromatic confocal technology combined with a multifocal approach. The proof of concept for a chromatic confocal area scanner for intra-oral scanning is given. Several prototype scanners passed a verification process showing an average accuracy (distance deviation on flat surfaces) of less than 31 μm ± 21 μm and a reproducibility of less than 4 μm ± 3 μm. Compared to a tactile measurement on a full-jaw model fitted with 4 mm ceramic spheres, the measured average distance deviation between the spheres was 49 μm ± 12 μm for scans of up to 8 teeth (3-unit bridge, single quadrant) and 104 μm ± 82 μm for larger scans and full jaws. The average deviation of the measured sphere diameter compared to the tactile measurement was 27 μm ± 14 μm. Compared to μCT scans of plaster models equipped with human teeth, the average standard deviation on up to 3 units was less than 55 μm ± 49 μm, whereas the reproducibility of the scans was better than 22 μm ± 10 μm.

  2. Simultaneous Determination of Total Vitamins B1, B2, B3, and B6 in Infant Formula and Related Nutritionals by Enzymatic Digestion and LC-MS/MS: Single-Laboratory Validation, First Action 2015.14.

    PubMed

    Salvati, Louis M; McClure, Sean C; Reddy, Todime M; Cellar, Nicholas A

    2016-05-01

    This method provides simultaneous determination of total vitamins B1, B2, B3, and B6 in infant formula and related nutritionals (adult and infant). The method was given First Action for vitamins B1, B2, and B6, but not B3, during the AOAC Annual Meeting in September 2015. The method uses acid phosphatase to dephosphorylate the phosphorylated vitamin forms. It then measures thiamine (vitamin B1); riboflavin (vitamin B2); nicotinamide and nicotinic acid (vitamin B3); and pyridoxine, pyridoxal, and pyridoxamine (vitamin B6) from digested sample extract by liquid chromatography-tandem mass spectrometry. A single-laboratory validation was performed on 14 matrixes provided by the AOAC Stakeholder Panel on Infant Formula and Adult Nutritionals (SPIFAN) to demonstrate method effectiveness. The method met requirements of the AOAC SPIFAN Standard Method Performance Requirement for each of the three vitamins, including average over-spike recovery of 99.6 ± 3.5%, average repeatability of 1.5 ± 0.8% relative standard deviation, and average intermediate precision of 3.9 ± 1.3% relative standard deviation.

  3. Scanning laser polarimetry in eyes with exfoliation syndrome.

    PubMed

    Dimopoulos, Antonios T; Katsanos, Andreas; Mikropoulos, Dimitrios G; Giannopoulos, Theodoros; Empeslidis, Theodoros; Teus, Miguel A; Holló, Gábor; Konstas, Anastasios G P

    2013-01-01

    To compare retinal nerve fiber layer thickness (RNFLT) of normotensive eyes with exfoliation syndrome (XFS) and healthy eyes.
 Sixty-four consecutive individuals with XFS and normal office-time intraocular pressure (IOP) and 72 consecutive healthy controls were prospectively enrolled for a cross-sectional analysis in this hospital-based observational study. The GDx-VCC parameters (temporal-superior-nasal-inferior-temporal [TSNIT] average, superior average, inferior average, TSNIT standard deviation (SD), and nerve fiber indicator [NFI]) were compared between groups. Correlation between various clinical parameters and RNFLT parameters was investigated with Spearman coefficient. 
 The NFI, although within normal limits for both groups, was significantly greater in the XFS group compared to controls: the respective median and interquartile range (IQR) values were 25.1 (22.0-29.0) vs 15.0 (12.0-20.0), p<0.001. In the XFS group, all RNFLT values were significantly lower compared to controls (p<0.001). However, they were all within the normal clinical ranges for both groups: TSNIT average median (IQR): 52.8 (49.7-55.7) vs 56.0 (53.0-59.3) µm; superior average mean (SD): 62.3 (6.7) vs 68.8 (8.2) µm; inferior average mean (SD): 58.0 (7.2) vs 64.8 (7.7) µm, respectively. TSNIT SD was significantly lower in the XFS group, median (IQR): 18.1 (15.4-20.4) vs 21.0 (18.4-23.8), p<0.001. There was no systematic relationship between RNFLT and visual acuity, cup-to-disc ratio, IOP, central corneal thickness, Humphrey mean deviation, and pattern standard deviation in either group. 
 Compared to control eyes, polarimetry-determined RNFLT was lower in XFS eyes with normal IOP. Therefore, close monitoring of RNFLT may facilitate early identification of those XFS eyes that convert to exfoliative glaucoma.

  4. Quantitative Analysis of Ca, Mg, and K in the Roots of Angelica pubescens f. biserrata by Laser-Induced Breakdown Spectroscopy Combined with Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Wang, J.; Shi, M.; Zheng, P.; Xue, Sh.; Peng, R.

    2018-03-01

    Laser-induced breakdown spectroscopy has been applied to the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens Maxim. f. biserrata Shan et Yuan, used in traditional Chinese medicine. The Ca II 317.993 nm, Mg I 517.268 nm, and K I 769.896 nm spectral lines were chosen to set up calibration models for the analysis using the external standard and artificial neural network methods. The linear correlation coefficients of the predicted versus standard concentrations of six samples determined by the artificial neural network method are 0.9896, 0.9945, and 0.9911 for Ca, Mg, and K, respectively, which are better than those for the external standard method. The artificial neural network method also outperforms the external standard method in the average and maximum relative errors, the average relative standard deviations, and most maximum relative standard deviations of the predicted concentrations of Ca, Mg, and K in the six samples. It is thus shown that the artificial neural network method gives better performance than the external standard method for the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens.

  5. Fluorescein thiocarbamyl amino acids as internal standards for migration time correction in capillary sieving electrophoresis

    PubMed Central

    Pugsley, Haley R.; Swearingen, Kristian E.; Dovichi, Norman J.

    2009-01-01

    A number of algorithms have been developed to correct for migration time drift in capillary electrophoresis. Those algorithms require identification of common components in each run. However, not all components may be present or resolved in separations of complex samples, which can confound attempts for alignment. This paper reports the use of fluorescein thiocarbamyl derivatives of amino acids as internal standards for alignment of 3-(2-furoyl)quinoline-2-carboxaldehyde (FQ)-labeled proteins in capillary sieving electrophoresis. The fluorescein thiocarbamyl derivative of aspartic acid migrates before FQ-labeled proteins and the fluorescein thiocarbamyl derivative of arginine migrates after the FQ-labeled proteins. These compounds were used as internal standards to correct for variations in migration time over a two-week period in the separation of a cellular homogenate. The experimental conditions were deliberately manipulated by varying electric field and sample preparation conditions. Three components of the homogenate were used to evaluate the alignment efficiency. Before alignment, the average relative standard deviation in migration time for these components was 13.3%. After alignment, the average relative standard deviation in migration time for these components was reduced to 0.5%. PMID:19249052
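
    The alignment underlying this result is, in essence, a two-point linear rescaling: each run's time axis is mapped so that its early and late internal standards land on reference positions. A minimal sketch (variable names are assumptions, not from the paper):

```python
def align_time(t, std_early, std_late, ref_early, ref_late):
    """Map a migration time t so that the run's early and late internal
    standards (e.g., the aspartic acid and arginine derivatives, which
    bracket the analytes) coincide with reference times from a chosen run."""
    scale = (ref_late - ref_early) / (std_late - std_early)
    return ref_early + (t - std_early) * scale
```

    Because the two standards migrate before and after all analytes, every analyte time is interpolated rather than extrapolated, which is what makes the correction robust to drift.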

  6. A standardized model for predicting flap failure using indocyanine green dye

    NASA Astrophysics Data System (ADS)

    Zimmermann, Terence M.; Moore, Lindsay S.; Warram, Jason M.; Greene, Benjamin J.; Nakhmani, Arie; Korb, Melissa L.; Rosenthal, Eben L.

    2016-03-01

    Techniques that provide a non-invasive method for evaluation of intraoperative skin flap perfusion are currently available but underutilized. We hypothesize that intraoperative vascular imaging can be used to reliably assess skin flap perfusion and elucidate areas of future necrosis by means of a standardized critical perfusion threshold. Five animal groups (negative controls, n=4; positive controls, n=5; chemotherapy group, n=5; radiation group, n=5; chemoradiation group, n=5) underwent pre-flap treatments two weeks prior to undergoing random-pattern dorsal fasciocutaneous flaps with a length-to-width ratio of 2:1 (3 × 1.5 cm). Flap perfusion was assessed via laser-assisted indocyanine green dye angiography and compared to standard clinical assessment for predictive accuracy of flap necrosis. For estimating flap failure, clinical prediction achieved a sensitivity of 79.3% and a specificity of 90.5%. When average flap perfusion was more than three standard deviations below the average flap perfusion for the negative control group at the time of the flap procedure (144.3 ± 17.05 absolute perfusion units), laser-assisted indocyanine green dye angiography achieved a sensitivity of 81.1% and a specificity of 97.3%. When absolute perfusion units were seven standard deviations below the average flap perfusion for the negative control group, the specificity of necrosis prediction was 100%. Quantitative absolute perfusion units can improve specificity for intraoperative prediction of viable tissue. Using this strategy, a positive predictive threshold of flap failure can be standardized for clinical use.
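
    The perfusion cutoff used here is a plain z-score threshold against the negative-control distribution; with the control-group numbers reported in the abstract (144.3 ± 17.05 absolute perfusion units):

```python
def failure_threshold(control_mean, control_sd, n_sd=3.0):
    """Perfusion below this value (absolute perfusion units) flags
    predicted necrosis, n_sd standard deviations below the controls."""
    return control_mean - n_sd * control_sd

t3 = failure_threshold(144.3, 17.05)        # 3-sigma cutoff (81.1% sens., 97.3% spec.)
t7 = failure_threshold(144.3, 17.05, 7.0)   # 7-sigma cutoff (100% spec.)
```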

  7. Phase Transition in Protocols Minimizing Work Fluctuations

    NASA Astrophysics Data System (ADS)

    Solon, Alexandre P.; Horowitz, Jordan M.

    2018-05-01

    For two canonical examples of driven mesoscopic systems—a harmonically trapped Brownian particle and a quantum dot—we numerically determine the finite-time protocols that optimize the compromise between the standard deviation and the mean of the dissipated work. In the case of the oscillator, we observe a collection of protocols that smoothly trade off between average work and its fluctuations. However, for the quantum dot, we find that as we shift the weight of our optimization objective from average work to work standard deviation, there is an analog of a first-order phase transition in protocol space: two distinct protocols exchange global optimality with mixed protocols akin to phase coexistence. As a result, the two types of protocols possess qualitatively different properties and remain distinct even in the infinite duration limit: optimal-work-fluctuation protocols never coalesce with the minimal-work protocols, which therefore never become quasistatic.

  8. Interplanetary medium data book, appendix

    NASA Technical Reports Server (NTRS)

    King, J. H.

    1977-01-01

    Computer generated listings of hourly average interplanetary plasma and magnetic field parameters are given. Parameters include proton temperature, proton density, bulk speed, an identifier of the source of the plasma data for the hour, average magnetic field magnitude and cartesian components of the magnetic field. Also included are longitude and latitude angles of the vector made up of the average field components, a vector standard deviation, and an identifier of the source of magnetic field data.

  9. Does Television Rot Your Brain? New Evidence from the Coleman Study. NBER Working Paper No. 12021

    ERIC Educational Resources Information Center

    Gentzkow, Matthew; Shapiro, Jesse M.

    2006-01-01

    We use heterogeneity in the timing of television's introduction to different local markets to identify the effect of preschool television exposure on standardized test scores later in life. Our preferred point estimate indicates that an additional year of preschool television exposure raises average test scores by about .02 standard deviations. We…

  10. Offshore fatigue design turbulence

    NASA Astrophysics Data System (ADS)

    Larsen, Gunner C.

    2001-07-01

    Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.

  11. [5-year course of dyslexia – Persistence, sex effects, performance in reading and spelling, and school-related success].

    PubMed

    Wyschkon, Anne; Schulz, Franziska; Gallit, Finja Sunnyi; Poltz, Nadine; Kohn, Juliane; Moraske, Svenja; Bondü, Rebecca; von Aster, Michael; Esser, Günter

    2018-03-01

    The study examines the 5-year course of children with dyslexia with regard to their sex. Furthermore, it investigates the impact of dyslexia on performance in reading and spelling and on school-related success. A group of 995 6- to 16-year-olds was examined at the initial assessment. Part of the initial sample was then re-examined after 43 and 63 months. The diagnosis of dyslexia was based on the double discrepancy criterion using a standard deviation of 1.5: though they had no intellectual deficits, the children showed a considerable discrepancy between their reading or writing abilities and (1) their nonverbal intelligence and (2) the mean of their grade norm. Nearly 70 % of those examined had a persisting diagnosis of dyslexia over the period of 63 months. The 5-year course was not influenced by sex. Despite average intelligence, the performance in reading and spelling of children suffering from dyslexia was one standard deviation below that of a control group of average intelligence without dyslexia, and 0.5 standard deviations below that of a group of children suffering from intellectual deficits. Furthermore, the school-related success of the dyslexics was significantly lower than that of children with average intelligence; dyslexics showed school-related success rates similar to those of children suffering from intellectual deficits. Dyslexia represents a considerable developmental risk. The adverse impact of dyslexia on school-related success supports the importance of early diagnostics and intervention. It also underlines the need for reliable and generally accepted diagnostic criteria. It is important to define such criteria in light of the prevalence rates.
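    The double discrepancy criterion mentioned above can be sketched in a few lines; the 1.5 standard deviation cutoff comes from the abstract, while the z-score inputs and the helper name are hypothetical (a sketch, not the study's actual scoring procedure):

```python
def double_discrepancy(reading_z: float, iq_z: float, grade_norm_z: float,
                       cutoff: float = 1.5) -> bool:
    """Flag dyslexia only when reading/spelling performance lies at least
    `cutoff` standard deviations below BOTH the level expected from
    nonverbal intelligence and the mean of the grade norm (all z-scores)."""
    return (iq_z - reading_z >= cutoff) and (grade_norm_z - reading_z >= cutoff)

# A child reading 1.6 SD below both expectations meets the criterion:
print(double_discrepancy(reading_z=-1.6, iq_z=0.0, grade_norm_z=0.0))  # True
```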

  12. The joint use of the tangential electric field and surface Laplacian in EEG classification.

    PubMed

    Carvalhaes, C G; de Barros, J Acacio; Perreau-Guimaraes, M; Suppes, P

    2014-01-01

    We investigate the joint use of the tangential electric field (EF) and the surface Laplacian (SL) derivation as a method to improve the classification of EEG signals. We considered five classification tasks to test the validity of such an approach. In all five tasks, the joint use of the components of the EF and the SL outperformed the scalar potential. The smallest effect occurred in the classification of a mental task, where the average classification rate was improved by 0.5 standard deviations. The largest effect was obtained in the classification of visual stimuli and corresponded to an improvement of 2.1 standard deviations.

  13. CLUSFAVOR 5.0: hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles

    PubMed Central

    Peterson, Leif E

    2002-01-01

    CLUSFAVOR (CLUSter and Factor Analysis with Varimax Orthogonal Rotation) 5.0 is a Windows-based computer program for hierarchical cluster and principal-component analysis of microarray-based transcriptional profiles. CLUSFAVOR 5.0 standardizes input data; sorts data according to gene-specific coefficient of variation, standard deviation, average and total expression, and Shannon entropy; performs hierarchical cluster analysis using nearest-neighbor, unweighted pair-group method using arithmetic averages (UPGMA), or furthest-neighbor joining methods, and Euclidean, correlation, or jack-knife distances; and performs principal-component analysis. PMID:12184816

  14. The geometry of proliferating dicot cells.

    PubMed

    Korn, R W

    2001-02-01

    The distributions of cell size and cell cycle duration were studied in two-dimensional expanding plant tissues. Plastic imprints of the leaf epidermis of three dicot plants, jade (Crassula argentae), impatiens (Impatiens wallerana), and the common begonia (Begonia semperflorens), were made and cell outlines analysed. The average, standard deviation and coefficient of variation (CV = 100 x standard deviation/average) of cell size were determined; the CV of mother cells was less than that of daughter cells, and both were less than that of all cells. An equation was devised as a simple description of the probability distribution of sizes for all cells of a tissue. Cell cycle durations, measured in arbitrary time units, were determined by reconstructing the initial and final sizes of cells, and collectively they give the expected asymmetric bell-shaped probability distribution. Given the features of unequal cell division (an average difference of 11.6% in the size of daughter cells) and the size variation of dividing cells, it appears that the range of cell size is more critically regulated than the size of a cell at any particular time.
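    The coefficient of variation defined above is a one-line computation; a minimal sketch with made-up cell areas (the population standard deviation is assumed, since the abstract does not specify sample vs. population):

```python
from statistics import mean, pstdev

def cv_percent(values):
    """CV = 100 x standard deviation / average, as defined in the abstract."""
    return 100 * pstdev(values) / mean(values)

cell_areas = [410, 455, 390, 520, 480]  # hypothetical cell areas
print(round(cv_percent(cell_areas), 1))  # → 10.4
```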

  15. Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques.

    PubMed

    Aquino, Arturo; Gegundez-Arias, Manuel Emilio; Marin, Diego

    2010-11-01

    Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD from digital retinal images. This methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation. It requires a pixel located within the OD as initial information. For this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99% of cases, taking an average computational time of 1.67 s with a standard deviation of 0.14 s. The segmentation algorithm, in turn, rendered an average common-area overlap between automated segmentations and true OD regions of 86%. The average computational time was 5.69 s with a standard deviation of 0.54 s. Moreover, a discussion of the advantages and disadvantages of the models most generally used for OD segmentation is also presented in this paper.

  16. Active laser ranging with frequency transfer using frequency comb

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hongyuan; Wei, Haoyun; Yang, Honglei

    2016-05-02

    A comb-based active laser ranging scheme is proposed for enhanced distance resolution and a common time standard for the entire system. Three frequency combs with different repetition rates are used as light sources at the two ends between which the distance is measured. Pulse positions are determined through asynchronous optical sampling and type II second harmonic generation. Results show that the system achieves a maximum residual of 379.6 nm and a standard deviation of 92.9 nm with 2000 averages over 23.6 m. Moreover, for the frequency transfer, an atomic clock and an adjustable signal generator synchronized to it are used as time standards for the two ends to appraise the frequency deviation introduced by the proposed system. The system achieves a residual fractional deviation of 1.3 × 10^-16 at 1 s, allowing precise frequency transfer between the two clocks at the two ends.

  17. Investigation of the Statistics of Pure Tone Sound Power Injection from Low Frequency, Finite Sized Sources in a Reverberant Room

    NASA Technical Reports Server (NTRS)

    Smith, Wayne Farrior

    1973-01-01

    The effect of finite source size on the power statistics in a reverberant room for pure tone excitation was investigated. Theoretical results indicate that the standard deviation of low frequency, pure tone finite sources is always less than that predicted by point source theory, and considerably less when the source dimension approaches one-half an acoustic wavelength or greater. A supporting experimental study was conducted utilizing an eight-inch loudspeaker and a 30-inch loudspeaker at eleven source positions. The resulting standard deviation of sound power output of the smaller speaker is in excellent agreement with both the derived finite source theory and existing point source theory, if the theoretical data are adjusted to account for incomplete experimental spatial averaging. However, the standard deviation of sound power output of the larger speaker is measurably lower than point source theory indicates, but is in good agreement with the finite source theory.

  18. Methods of analysis by the U.S. Geological Survey National Water Quality Laboratory; determination of chromium in water by graphite furnace atomic absorption spectrophotometry

    USGS Publications Warehouse

    McLain, B.J.

    1993-01-01

    Graphite furnace atomic absorption spectrophotometry is a sensitive, precise, and accurate method for the determination of chromium in natural water samples. The detection limit for this analytical method is 0.4 microg/L with a working linear limit of 25.0 microg/L. The precision at the detection limit ranges from 20 to 57 percent relative standard deviation (RSD), improving to 4.6 percent RSD for concentrations above 3 microg/L. The accuracy of this method was determined for a variety of reference standards that were representative of the analytical range; the results were within the established standard deviations. Samples were spiked with known concentrations of chromium, with recoveries ranging from 84 to 122 percent. In addition, a comparison of data between graphite furnace atomic absorption spectrophotometry and direct-current plasma atomic emission spectrometry showed suitable agreement between the two methods, with an average deviation of +/- 2.0 microg/L throughout the analytical range.

  19. Virtual reality technology prevents accidents in extreme situations

    NASA Astrophysics Data System (ADS)

    Badihi, Y.; Reiff, M. N.; Beychok, S.

    2012-03-01

    This research is aimed at examining the added value of using Virtual Reality (VR) in a driving simulator to prevent road accidents, specifically by improving drivers' skills when confronted with extreme situations. In an experiment, subjects completed a driving scenario using two platforms: a 3-D Virtual Reality display system using an HMD (Head-Mounted Display), and a standard computerized display system based on a standard computer monitor. The results show that the average rate of errors (deviating from the driving path) in the VR environment is significantly lower than in the standard one. In addition, there was no trade-off between speed and accuracy in completing the driving mission; on the contrary, the average speed was even slightly faster in the VR simulation than in the standard environment. Thus, the lower rate of deviation in the VR setting is not achieved by driving more slowly. When the subjects were asked about their personal experiences of the training session, most responded that, among other things, the VR session gave them a higher sense of commitment to the task and their performance. Some even stated that the VR session gave them a real sensation of driving.

  20. Heavy Ozone Enrichments from ATMOS Infrared Solar Spectra

    NASA Technical Reports Server (NTRS)

    Irion, F. W.; Gunson, M. R.; Rinsland, C. P.; Yung, Y. L.; Abrams, M. C.; Chang, A. Y.; Goldman, A.

    1996-01-01

    Vertical enrichment profiles of stratospheric O-16O-16O-18 and O-16O-18O-16 (hereafter referred to as (668)O3 and (686)O3, respectively) have been derived from space-based solar occultation spectra recorded at 0.01 cm(exp -1) resolution by the ATMOS (Atmospheric Trace MOlecule Spectroscopy) Fourier transform infrared (FTIR) spectrometer. The observations, made during the Spacelab 3 and ATLAS-1, -2, and -3 shuttle missions, cover polar, mid-latitude and tropical regions between 26 and 2.6 mb inclusive (approximately 25 to 41 km). Average enrichments, weighted by molecular (48)O3 density, of (15 +/- 6)% were found for (668)O3 and (10 +/- 7)% for (686)O3. Defining the mixing ratio of (50)O3 as the sum of those for (668)O3 and (686)O3, an enrichment of (13 +/- 5)% was found for (50)O3 (1 sigma standard deviation). No latitudinal or vertical gradients were found outside this standard deviation. From a series of ground-based measurements by the ATMOS instrument at Table Mountain, California (34.4 deg N), an average total column (668)O3 enrichment of (17 +/- 4)% (1 sigma standard deviation) was determined, with no significant seasonal variation discernible. Possible biases in the spectral intensities that affect the determination of absolute enrichments are discussed.
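    The isotopologue enrichments reported above are conventionally expressed as the percent excess of a measured isotope ratio over a standard abundance ratio; a sketch of that definition (the convention is assumed here, and the ratios below are made up, not ATMOS values):

```python
def enrichment_percent(r_measured, r_standard):
    """Enrichment = 100 * (R_measured / R_standard - 1): the percent excess
    of the heavy isotopologue relative to the standard abundance ratio."""
    return 100.0 * (r_measured / r_standard - 1.0)

# A measured ratio 15% above the standard ratio gives ~15% enrichment,
# comparable to the (15 +/- 6)% reported for (668)O3:
print(enrichment_percent(1.15e-3, 1.0e-3))
```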

  1. An adaptive beamforming method for ultrasound imaging based on the mean-to-standard-deviation factor.

    PubMed

    Wang, Yuanguo; Zheng, Chichao; Peng, Hu; Chen, Qiang

    2018-06-12

    Beamforming performance has a large impact on image quality in ultrasound imaging. Previously, several adaptive weighting factors, including the coherence factor (CF) and the generalized coherence factor (GCF), have been proposed to improve image resolution and contrast. In this paper, we propose a new adaptive weighting factor for ultrasound imaging, called the signal mean-to-standard-deviation factor (SMSF). SMSF is defined as the mean-to-standard-deviation ratio of the aperture data and is used to weight the output of the delay-and-sum (DAS) beamformer before image formation. Moreover, we develop a robust SMSF (RSMSF) by extending the SMSF to the spatial frequency domain using an altered spectrum of the aperture data. In addition, a square neighborhood average is applied to the RSMSF to obtain a smoother square neighborhood RSMSF (SN-RSMSF) value. We compared our methods with DAS, CF, and GCF using simulated and experimental synthetic aperture data sets. The quantitative results show that SMSF yields an 82% lower full width at half-maximum (FWHM) but a 12% lower contrast ratio (CR) compared with CF. Moreover, the SN-RSMSF leads to 15% and 10% improvements, on average, in FWHM and CR compared with GCF while maintaining the speckle quality. This demonstrates that the proposed methods can effectively improve image resolution and contrast. Copyright © 2018 Elsevier B.V. All rights reserved.
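    The SMSF weighting described above can be illustrated with a toy sketch; the |mean|/std form, the epsilon guard, and the sample aperture data are assumptions for illustration, not the paper's exact normalization:

```python
from statistics import mean, pstdev

def smsf_weighted_das(aperture_samples, eps=1e-12):
    """Delay-and-sum output for one image point, scaled by the
    mean-to-standard-deviation factor of the delayed aperture data.
    Coherent echoes (similar samples across channels) keep a large weight;
    incoherent clutter (large spread around zero) is suppressed."""
    das = sum(aperture_samples)  # conventional DAS beam sum
    factor = abs(mean(aperture_samples)) / (pstdev(aperture_samples) + eps)
    return das * factor

coherent = [1.0, 0.9, 1.1, 1.0]    # hypothetical aligned echo
clutter = [1.0, -0.8, 0.9, -1.1]   # hypothetical off-axis clutter
print(smsf_weighted_das(coherent) > abs(smsf_weighted_das(clutter)))  # True
```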

  2. Differential standard deviation of log-scale intensity based optical coherence tomography angiography.

    PubMed

    Shi, Weisong; Gao, Wanrong; Chen, Chaoliang; Yang, Victor X D

    2017-12-01

    In this paper, a differential standard deviation of log-scale intensity (DSDLI) based optical coherence tomography angiography (OCTA) method is presented for calculating microvascular images of human skin. The DSDLI algorithm contrasts blood flow by calculating the variance of the difference between two consecutive log-scale intensity structural images from the same position along the depth direction. The en face microvascular images were then generated by calculating the standard deviation of the differential log-scale intensities within a specific depth range, resulting in improved spatial resolution and SNR in microvascular images compared to speckle variance OCT and the power intensity differential method. The performance of DSDLI was demonstrated in both phantom and in vivo experiments. In the in vivo experiments, a self-adaptive sub-pixel image registration algorithm was performed to remove bulk motion noise, in which a 2D Fourier transform was utilized to generate new images with a spatial interval equal to half the distance between two pixels in both the fast-scanning and depth directions. The SNRs of the signals of flowing particles were improved by 7.3 dB and 6.8 dB on average in the phantom and in vivo experiments, respectively, while the average spatial resolution of the in vivo blood vessel images was increased by 21%. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. A statistical analysis of energy and power demand for the tractive purposes of an electric vehicle in urban traffic - an analysis of a short and long observation period

    NASA Astrophysics Data System (ADS)

    Slaski, G.; Ohde, B.

    2016-09-01

    The article presents the results of a statistical dispersion analysis of the energy and power demand for tractive purposes of a battery electric vehicle. The authors compare data distributions for different values of average speed in two approaches, namely a short and a long period of observation. The short period of observation (generally around several hundred meters) follows from a previously proposed macroscopic energy consumption model based on average speed per road section. This approach yielded high values of the standard deviation and of the coefficient of variation (the ratio of the standard deviation to the mean), around 0.7-1.2. The long period of observation (several kilometers) is similar in length to the standardized speed cycles used in testing vehicle energy consumption and available range. The data were analysed to determine the impact of observation length on the variation in energy and power demand. The analysis was based on a simulation of electric power and energy consumption performed with speed profile data recorded in the Poznan agglomeration.

  4. Validation of a quantitative NMR method for suspected counterfeit products exemplified on determination of benzethonium chloride in grapefruit seed extracts.

    PubMed

    Bekiroglu, Somer; Myrberg, Olle; Ostman, Kristina; Ek, Marianne; Arvidsson, Torbjörn; Rundlöf, Torgny; Hakkarainen, Birgit

    2008-08-05

    A 1H-nuclear magnetic resonance (NMR) spectroscopy method for the quantitative determination of benzethonium chloride (BTC) as a constituent of grapefruit seed extract was developed. The method was validated, assessing its specificity, linearity, range, and precision, as well as accuracy, limit of quantification and robustness. The method quantifies against an internal reference standard, 1,3,5-trimethoxybenzene, and is regarded as simple, rapid, and easy to implement. A commercial grapefruit seed extract was studied and the experiments were performed on spectrometers operating at two different fields, 300 and 600 MHz proton frequency, the former with a broad band (BB) probe and the latter equipped with both a BB probe and a CryoProbe. The average concentration for the product sample was 78.0, 77.8 and 78.4 mg/ml using the 300 MHz BB probe, the 600 MHz BB probe and the CryoProbe, respectively. The standard deviations and relative standard deviations (R.S.D., in parentheses) for these average concentrations were 0.2 (0.3%), 0.3 (0.4%) and 0.3 mg/ml (0.4%), respectively.
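    Internal-standard quantification as used above rests on the usual qNMR relation that equal integrals per proton correspond to equal molar amounts; a sketch of that relation, with every numeric input a placeholder rather than a value from the paper:

```python
def qnmr_conc_mg_per_ml(I_analyte, I_std, N_analyte, N_std,
                        M_analyte, M_std, m_std_mg, V_ml):
    """Analyte concentration (mg/ml) from the integral ratio against an
    internal standard: c = (I_a/I_s) * (N_s/N_a) * (M_a/M_s) * (m_s/V),
    where I = signal integral, N = protons per signal, M = molar mass,
    m_s = weighed-in mass of the standard, V = sample volume."""
    return (I_analyte / I_std) * (N_std / N_analyte) \
        * (M_analyte / M_std) * (m_std_mg / V_ml)

# Placeholder numbers only (not BTC/trimethoxybenzene values):
print(qnmr_conc_mg_per_ml(2.0, 1.0, 2, 4, 300.0, 150.0, 10.0, 1.0))  # → 80.0
```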

  5. Monitoring surgical and medical outcomes: the Bernoulli cumulative SUM chart. A novel application to assess clinical interventions

    PubMed Central

    Leandro, G; Rolando, N; Gallus, G; Rolles, K; Burroughs, A

    2005-01-01

    Background: Monitoring clinical interventions is an increasing requirement in current clinical practice. Standard CUSUM (cumulative sum) charts are used for this purpose. However, they make it difficult to identify the point at which outcomes begin to fall outside recommended limits. Objective: To assess the Bernoulli CUSUM chart, which permits not only a 100% inspection rate but also the setting of average expected outcomes, maximum deviations from these, and false positive rates for the alarm signal to trigger. Methods: As a working example this study used 674 consecutive first liver transplant recipients. The expected one-year mortality was set at 24%, from the European Liver Transplant Registry average. A standard CUSUM was compared with the Bernoulli CUSUM: the control value mortality was therefore 24%, the maximum accepted mortality 30%, and the average number of observations to signal 500, that is, the likelihood of a false positive alarm was 1:500. Results: The standard CUSUM showed an initial descending curve (nadir at patient 215), then progressively ascended, indicating better performance. The Bernoulli CUSUM gave three alarm signals initially, with easily recognised breaks in the curve. There were no alarm signals after patient 143, indicating satisfactory performance within the criteria set. Conclusions: The Bernoulli CUSUM is more easily interpretable graphically and is more suitable for monitoring outcomes than the standard CUSUM chart. It requires only three parameters to be set to monitor any clinical intervention: the average expected outcome, the maximum deviation from this, and the rate of false positive alarm triggers. PMID:16210461
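    A minimal one-sided Bernoulli CUSUM update can be sketched as follows; the control mortality (24%) and maximum accepted mortality (30%) come from the abstract, while the alarm threshold h is a placeholder that would in practice be tuned to give the desired average number of observations to signal (500 in the study):

```python
from math import log

def bernoulli_cusum(outcomes, p0=0.24, p1=0.30, h=4.5):
    """One-sided Bernoulli CUSUM over binary outcomes (1 = death).
    Each observation adds the log-likelihood ratio of H1 (rate p1)
    versus H0 (rate p0), clipped at zero; an alarm fires when the
    statistic reaches h, after which the chart restarts."""
    w_up = log(p1 / p0)                # increment for an adverse outcome
    w_down = log((1 - p1) / (1 - p0))  # negative increment for a success
    s, alarms = 0.0, []
    for t, x in enumerate(outcomes, start=1):
        s = max(0.0, s + (w_up if x else w_down))
        if s >= h:
            alarms.append(t)
            s = 0.0
    return alarms

print(bernoulli_cusum([1] * 30))  # a run of deaths triggers an early alarm
print(bernoulli_cusum([0] * 30))  # → []: a run of survivors never does
```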

  6. Detection of severe storm signatures in loblolly pine using seven-year periodic standardized averages and standard deviations

    Treesearch

    Stevenson Douglas; Thomas Hennessey; Thomas Lynch; Giulia Caterina; Rodolfo Mota; Robert Heineman; Randal Holeman; Dennis Wilson; Keith Anderson

    2016-01-01

    A loblolly pine plantation near Eagletown, Oklahoma was used to test standardized tree ring widths in detecting snow and ice storms. Widths of the two rings immediately following suspected storms were standardized against widths of seven rings following the storm (Stan1 and Stan2). Values of Stan1 less than -0.900 predict a severe (usually ice) storm when Stan2 is less...

  7. Dosimetric verification of lung cancer treatment using the CBCTs estimated from limited-angle on-board projections.

    PubMed

    Zhang, You; Yin, Fang-Fang; Ren, Lei

    2015-08-01

    Lung cancer treatment is susceptible to treatment errors caused by interfractional anatomical and respirational variations of the patient. On-board treatment dose verification is especially critical for the lung stereotactic body radiation therapy due to its high fractional dose. This study investigates the feasibility of using cone-beam (CB)CT images estimated by a motion modeling and free-form deformation (MM-FD) technique for on-board dose verification. Both digital and physical phantom studies were performed. Various interfractional variations featuring patient motion pattern change, tumor size change, and tumor average position change were simulated from planning CT to on-board images. The doses calculated on the planning CT (planned doses), the on-board CBCT estimated by MM-FD (MM-FD doses), and the on-board CBCT reconstructed by the conventional Feldkamp-Davis-Kress (FDK) algorithm (FDK doses) were compared to the on-board dose calculated on the "gold-standard" on-board images (gold-standard doses). The absolute deviations of minimum dose (ΔDmin), maximum dose (ΔDmax), and mean dose (ΔDmean), and the absolute deviations of prescription dose coverage (ΔV100%) were evaluated for the planning target volume (PTV). In addition, 4D on-board treatment dose accumulations were performed using 4D-CBCT images estimated by MM-FD in the physical phantom study. The accumulated doses were compared to those measured using optically stimulated luminescence (OSL) detectors and radiochromic films. Compared with the planned doses and the FDK doses, the MM-FD doses matched much better with the gold-standard doses. For the digital phantom study, the average (± standard deviation) ΔDmin, ΔDmax, ΔDmean, and ΔV100% (values normalized by the prescription dose or the total PTV) between the planned and the gold-standard PTV doses were 32.9% (±28.6%), 3.0% (±2.9%), 3.8% (±4.0%), and 15.4% (±12.4%), respectively. 
The corresponding values of FDK PTV doses were 1.6% (±1.9%), 1.2% (±0.6%), 2.2% (±0.8%), and 17.4% (±15.3%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.3% (±0.2%), 0.9% (±0.6%), 0.6% (±0.4%), and 1.0% (±0.8%), respectively. Similarly, for the physical phantom study, the average ΔDmin, ΔDmax, ΔDmean, and ΔV100% of planned PTV doses were 38.1% (±30.8%), 3.5% (±5.1%), 3.0% (±2.6%), and 8.8% (±8.0%), respectively. The corresponding values of FDK PTV doses were 5.8% (±4.5%), 1.6% (±1.6%), 2.0% (±0.9%), and 9.3% (±10.5%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.4% (±0.8%), 0.8% (±1.0%), 0.5% (±0.4%), and 0.8% (±0.8%), respectively. For the 4D dose accumulation study, the average (± standard deviation) absolute dose deviation (normalized by local doses) between the accumulated doses and the OSL measured doses was 3.3% (±2.7%). The average gamma index (3%/3 mm) between the accumulated doses and the radiochromic film measured doses was 94.5% (±2.5%). MM-FD estimated 4D-CBCT enables accurate on-board dose calculation and accumulation for lung radiation therapy. It can potentially be valuable for treatment quality assessment and adaptive radiation therapy.

  8. Template CoMFA Generates Single 3D-QSAR Models that, for Twelve of Twelve Biological Targets, Predict All ChEMBL-Tabulated Affinities

    PubMed Central

    Cramer, Richard D.

    2015-01-01

    The possible applicability of the new template CoMFA methodology to the prediction of unknown biological affinities was explored. For twelve selected targets, all ChEMBL binding affinities were used as training and/or prediction sets, making these 3D-QSAR models the most structurally diverse and among the largest ever. For six of the targets, X-ray crystallographic structures provided the aligned templates required as input (BACE, cdk1, chk2, carbonic anhydrase-II, factor Xa, PTP1B). For all targets including the other six (hERG, cyp3A4 binding, endocrine receptor, COX2, D2, and GABAa), six modeling protocols applied to only three familiar ligands provided six alternate sets of aligned templates. The statistical qualities of the six or seven models thus resulting for each individual target were remarkably similar. Also, perhaps unexpectedly, the standard deviations of the errors of cross-validation predictions accompanying model derivations were indistinguishable from the standard deviations of the errors of truly prospective predictions. These standard deviations of prediction ranged from 0.70 to 1.14 log units and averaged 0.89 (8x in concentration units) over the twelve targets, representing an average reduction of almost 50% in uncertainty, compared to the null hypothesis of “predicting” an unknown affinity to be the average of known affinities. These errors of prediction are similar to those from Tanimoto coefficients of fragment occurrence frequencies, the predominant approach to side effect prediction, which template CoMFA can augment by identifying additional active structural classes, by improving Tanimoto-only predictions, by yielding quantitative predictions of potency, and by providing interpretable guidance for avoiding or enhancing any specific target response. PMID:26065424

  9. Are greenhouse gas emissions and cognitive skills related? Cross-country evidence.

    PubMed

    Omanbayev, Bekhzod; Salahodjaev, Raufhon; Lynn, Richard

    2018-01-01

    Are greenhouse gas emissions (GHG) and cognitive skills (CS) related? We attempt to answer this question by exploring this relationship, using cross-country data for 150 countries, for the period 1997-2012. After controlling for the level of economic development, quality of political regimes, population size and a number of other controls, we document that CS robustly predict GHG. In particular, when CS at a national level increase by one standard deviation, the average annual rate of air pollution changes by nearly 1.7% (slightly less than one half of a standard deviation). This significance holds for a number of robustness checks. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Traffic-Related Air Pollution, Blood Pressure, and Adaptive Response of Mitochondrial Abundance.

    PubMed

    Zhong, Jia; Cayir, Akin; Trevisi, Letizia; Sanchez-Guerra, Marco; Lin, Xinyi; Peng, Cheng; Bind, Marie-Abèle; Prada, Diddier; Laue, Hannah; Brennan, Kasey J M; Dereix, Alexandra; Sparrow, David; Vokonas, Pantel; Schwartz, Joel; Baccarelli, Andrea A

    2016-01-26

    Exposure to black carbon (BC), a tracer of vehicular-traffic pollution, is associated with increased blood pressure (BP). Identifying biological factors that attenuate BC effects on BP can inform prevention. We evaluated the role of mitochondrial abundance, an adaptive mechanism compensating for cellular-redox imbalance, in the BC-BP relationship. At ≥ 1 visits among 675 older men from the Normative Aging Study (observations=1252), we assessed daily BP and ambient BC levels from a stationary monitor. To determine blood mitochondrial abundance, we used whole blood to analyze mitochondrial-to-nuclear DNA ratio (mtDNA/nDNA) using quantitative polymerase chain reaction. Every standard deviation increase in the 28-day BC moving average was associated with 1.97 mm Hg (95% confidence interval [CI], 1.23-2.72; P<0.0001) and 3.46 mm Hg (95% CI, 2.06-4.87; P<0.0001) higher diastolic and systolic BP, respectively. Positive BC-BP associations existed throughout all time windows. BC moving averages (5-day to 28-day) were associated with increased mtDNA/nDNA; every standard deviation increase in 28-day BC moving average was associated with 0.12 standard deviation (95% CI, 0.03-0.20; P=0.007) higher mtDNA/nDNA. High mtDNA/nDNA significantly attenuated the BC-systolic BP association throughout all time windows. The estimated effect of 28-day BC moving average on systolic BP was 1.95-fold larger for individuals at the lowest mtDNA/nDNA quartile midpoint (4.68 mm Hg; 95% CI, 3.03-6.33; P<0.0001), in comparison with the top quartile midpoint (2.40 mm Hg; 95% CI, 0.81-3.99; P=0.003). In older adults, short-term to moderate-term ambient BC levels were associated with increased BP and blood mitochondrial abundance. Our findings indicate that increased blood mitochondrial abundance is a compensatory response and attenuates the cardiac effects of BC. © 2015 American Heart Association, Inc.

  11. Reduction of Averaging Time for Evaluation of Human Exposure to Radiofrequency Electromagnetic Fields from Cellular Base Stations

    NASA Astrophysics Data System (ADS)

    Kim, Byung Chan; Park, Seong-Ook

    In order to determine exposure compliance with the electromagnetic fields from a base station's antenna in the far-field region, we must calculate the spatially averaged field value in a defined space. This value is calculated from values measured at several points within the restricted space. According to the ICNIRP guidelines, at each point in the space, the reference levels are averaged over any 6 min (from 100 kHz to 10 GHz) for the general public. Therefore, the more points we use, the longer the measurement takes, and for practical application it is very advantageous to spend less time on measurement. In this paper, we analyzed the difference between average values over 6 min and over shorter periods and compared it with the standard uncertainty for measurement drift. Based on the standard deviation from the 6 min averaging value, the proposed minimum averaging time is 1 min.

  12. Evaluation of an attenuation correction method for PET/MR imaging of the head based on substitute CT images.

    PubMed

    Larsson, Anne; Johansson, Adam; Axelsson, Jan; Nyholm, Tufve; Asklund, Thomas; Riklund, Katrine; Karlsson, Mikael

    2013-02-01

    The aim of this study was to evaluate MR-based attenuation correction of PET emission data of the head, based on a previously described technique that calculates substitute CT (sCT) images from a set of MR images. Images from eight patients, examined with (18)F-FLT PET/CT and MRI, were included. sCT images were calculated, co-registered to the corresponding CT images, and transferred to the PET/CT scanner for reconstruction. The new reconstructions were then compared with the originals. The effect of replacing bone with soft tissue in the sCT images was also evaluated. The average relative difference between the sCT-corrected PET images and the CT-corrected PET images was 1.6% for the head and 1.9% for the brain. The average standard deviation of the relative differences within the head was relatively high, at 13.2%, primarily because of large differences in the nasal septa region. For the brain, the average standard deviation was lower, 4.1%. The global average difference in the head when replacing bone with soft tissue was 11%. The method presented here is highly accurate, but high-precision quantitative imaging of the nasal septa region is not possible at the moment.

  13. Application of Allan Deviation to Assessing Uncertainties of Continuous-measurement Instruments, and Optimizing Calibration Schemes

    NASA Astrophysics Data System (ADS)

    Jacobson, Gloria; Rella, Chris; Farinas, Alejandro

    2014-05-01

    Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contributions of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique, developed for atomic clock stability assessment by David W. Allan [1], can be effectively and gainfully applied to continuous-measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on, and translate, prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the time-series to Allan deviation plot translation for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the use of the Allan deviation to optimize and predict the performance of different calibration schemes will be presented. Even though this presentation uses the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb. 1966. [2] P. Werle, R. Mücke, F. Slemr, "The Limits of Signal Averaging in Atmospheric Trace-Gas Monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS)," Applied Physics B, vol. 57, pp. 131-139, April 1993.
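The non-overlapping Allan deviation underlying this kind of analysis can be sketched in a few lines. This is a generic illustration on synthetic white noise, not code from the presentation:

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of series y for averaging factor m
    (m samples per bin, i.e. averaging time m * tau0)."""
    n = (len(y) // m) * m
    bins = y[:n].reshape(-1, m).mean(axis=1)   # consecutive bin averages
    diffs = np.diff(bins)                      # differences of successive averages
    return np.sqrt(0.5 * np.mean(diffs ** 2))

rng = np.random.default_rng(1)
white = rng.standard_normal(100_000)           # white measurement noise

# For white noise the Allan deviation falls as 1/sqrt(averaging time),
# so quadrupling m should halve it; instrument drift would instead make
# the curve rise again at long averaging times.
for m in (1, 4, 16, 64):
    print(m, allan_deviation(white, m))
```

The averaging time at which the curve stops falling is what makes this diagnostic useful for choosing calibration intervals.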

  14. Discrete distributed strain sensing of intelligent structures

    NASA Technical Reports Server (NTRS)

    Anderson, Mark S.; Crawley, Edward F.

    1992-01-01

    Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First, the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.
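A minimal sketch of the simulation idea described here, assuming a tip-loaded cantilever, five point sensors, and 1% gage-factor scatter (all illustrative values, not the study's):

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule (kept local to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

rng = np.random.default_rng(2)

L = 1.0                                  # beam length, m (assumed)
x = np.linspace(0.0, L, 5)               # five point-sensor stations
kappa = 0.01 * (L - x) / L               # curvature of a tip-loaded cantilever, 1/m

# Tip displacement from curvature: delta = integral of (L - x) * kappa(x) dx
exact = trapz((L - x) * kappa, x)        # noise-free estimate from the same stations

# Monte Carlo: 1% gage-factor scatter on each sensor reading
trials = np.array([
    trapz((L - x) * kappa * (1 + 0.01 * rng.standard_normal(x.size)), x)
    for _ in range(10_000)
])

print("mean tip-displacement estimate:", trials.mean())
print("standard deviation of the error:", trials.std())
```

The standard deviation of the trials plays the role of the simulated error bound against which the abstract's experimental errors are compared.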

  15. Thermal Vacuum Facility for Testing Thermal Protection Systems

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Knutson, Jeffrey R.; Sikora, Joseph G.

    2002-01-01

    A thermal vacuum facility for testing launch vehicle thermal protection systems by subjecting them to transient thermal conditions simulating re-entry aerodynamic heating is described. Re-entry heating is simulated by controlling the test specimen surface temperature and the environmental pressure in the chamber. Design requirements for simulating re-entry conditions are briefly described. A description of the thermal vacuum facility, the quartz lamp array and the control system is provided. The facility was evaluated by subjecting an 18 by 36 in. Inconel honeycomb panel to a typical re-entry pressure and surface temperature profile. For most of the test duration, the average difference between the measured and desired pressures was 1.6% of reading with a standard deviation of +/- 7.4%, while the average difference between measured and desired temperatures was 7.6% of reading with a standard deviation of +/- 6.5%. The temperature non-uniformity across the panel was 12% during the initial heating phase (t less than 500 sec.), and less than 2% during the remainder of the test.

  16. Magneto-acupuncture stimuli effects on ultraweak photon emission from hands of healthy persons.

    PubMed

    Park, Sang-Hyun; Kim, Jungdae; Koo, Tae-Hoi

    2009-03-01

    We investigated ultraweak photon emission from the hands of 45 healthy persons before and after magneto-acupuncture stimuli. Photon emissions were measured using two photomultiplier tubes in the UV and visible spectral ranges. Several statistical quantities, such as the average intensity, the standard deviation, the delta-value, and the degree of asymmetry, were calculated from the measurements of photon emissions before and after the magneto-acupuncture stimuli. The distributions of the quantities from the measurements with the magneto-acupuncture stimuli were more differentiable than those of the groups without any stimuli and with sham magnets. We also analyzed the magneto-acupuncture stimuli effects on photon emissions through a year-long measurement for two subjects. The individualities of the subjects increased the differences in photon emissions before and after magnetic stimuli, compared with the group study above. The changes in the ultraweak photon emission rates of the hands for the magnet group were detected conclusively in the averages and standard deviations.

  17. Inter-laboratory Comparison of Three Earplug Fit-test Systems

    PubMed Central

    Byrne, David C.; Murphy, William J.; Krieg, Edward F.; Ghent, Robert M.; Michael, Kevin L.; Stefanson, Earl W.; Ahroon, William A.

    2017-01-01

    The National Institute for Occupational Safety and Health (NIOSH) sponsored tests of three earplug fit-test systems (NIOSH HPD Well-Fit™, Michael & Associates FitCheck, and Honeywell Safety Products VeriPRO®). Each system was compared to laboratory-based real-ear attenuation at threshold (REAT) measurements in a sound field according to ANSI/ASA S12.6-2008 at the NIOSH, Honeywell Safety Products, and Michael & Associates testing laboratories. An identical study was conducted independently at the U.S. Army Aeromedical Research Laboratory (USAARL), which provided their data for inclusion in this report. The Howard Leight Airsoft premolded earplug was tested with twenty subjects at each of the four participating laboratories. The occluded fit of the earplug was maintained during testing with a soundfield-based laboratory REAT system as well as all three headphone-based fit-test systems. The Michael & Associates lab had the highest average A-weighted attenuations and the smallest standard deviations. The NIOSH lab had the lowest average attenuations and the largest standard deviations. Differences in octave-band attenuations between each fit-test system and the American National Standards Institute (ANSI) sound field method were calculated (Attenfit-test - AttenANSI). A-weighted attenuations measured with the FitCheck and HPD Well-Fit systems demonstrated approximately ±2 dB agreement with the ANSI sound field method, but A-weighted attenuations measured with the VeriPRO system underestimated the ANSI laboratory attenuations. For each of the fit-test systems, the average A-weighted attenuation across the four laboratories was not significantly greater than the average of the ANSI sound field method. Standard deviations for residual attenuation differences were about ±2 dB for FitCheck and HPD Well-Fit, compared to ±4 dB for VeriPRO. Individual labs exhibited a range of agreement with the ANSI REAT estimates, from less than a decibel to as much as a 9.4 dB difference.
Factors such as the experience of study participants and test administrators, and the fit-test psychometric tasks are suggested as possible contributors to the observed results. PMID:27786602

  18. Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels

    USGS Publications Warehouse

    Laenen, Antonius; Curtis, R. E.

    1989-01-01

    Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the relation between mean velocity and acoustic-path velocity. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of one meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and the density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error in the mean-velocity computation. Typically, for a 200-meter path length the resultant error is less than 1%, but for a 1,000-meter path length the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions of equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)

  19. CORRELATION OF FLORIDA SOIL-GAS PERMEABILITIES WITH GRAIN SIZE, MOISTURE, AND POROSITY

    EPA Science Inventory

    The report describes a new correlation for predicting gas permeabilities of undisturbed or recompacted soils from their average grain diameter (d), moisture saturation factor (m), and porosity (p). The correlation exhibits a geometric standard deviation (GSD) of only 1.27 between m...

  20. 40 CFR 60.51Da - Reporting requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... operators of affected facilities complying with the percent reduction requirement, percent reduction of the...) and inlet emission rates (ni) as applicable. (2) The standard deviation of hourly averages for outlet.... (d) In addition to the applicable requirements in § 60.7, the owner or operator of an affected...

  1. 40 CFR 60.51Da - Reporting requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... actions taken. (3) For owners or operators of affected facilities complying with the percent reduction...) and inlet emission rates (ni) as applicable. (2) The standard deviation of hourly averages for outlet... system malfunction, the owner or operator of the affected facility shall submit a signed statement: (1...

  2. 40 CFR 60.51Da - Reporting requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... operators of affected facilities complying with the percent reduction requirement, percent reduction of the...) and inlet emission rates (ni) as applicable. (2) The standard deviation of hourly averages for outlet.... (d) In addition to the applicable requirements in § 60.7, the owner or operator of an affected...

  3. 40 CFR 60.51Da - Reporting requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operators of affected facilities complying with the percent reduction requirement, percent reduction of the...) and inlet emission rates (ni) as applicable. (2) The standard deviation of hourly averages for outlet.... (d) In addition to the applicable requirements in § 60.7, the owner or operator of an affected...

  4. Children's Use of the Prosodic Characteristics of Infant-Directed Speech.

    ERIC Educational Resources Information Center

    Weppelman, Tammy L.; Bostow, Angela; Schiffer, Ryan; Elbert-Perez, Evelyn; Newman, Rochelle S.

    2003-01-01

    Examined whether young children (4 years of age) show prosodic changes when speaking to infants. Measured children's word duration in infant-directed speech compared to adult-directed speech, examined amplitude variability, and examined both average fundamental frequency and fundamental frequency standard deviation. Results indicate that…

  5. Uncertainty Quantification of GEOS-5 L-band Radiative Transfer Model Parameters Using Bayesian Inference and SMOS Observations

    NASA Technical Reports Server (NTRS)

    DeLannoy, Gabrielle J. M.; Reichle, Rolf H.; Vrugt, Jasper A.

    2013-01-01

    Uncertainties in L-band (1.4 GHz) radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation opacity and scattering albedo for large-scale applications are difficult to obtain from field studies and often lack an uncertainty estimate. Here, a Markov Chain Monte Carlo (MCMC) simulation method is used to determine satellite-scale estimates of RTM parameters and their posterior uncertainty by minimizing the misfit between long-term averages and standard deviations of simulated and observed Tb at a range of incidence angles, at horizontal and vertical polarization, and for morning and evening overpasses. Tb simulations are generated with the Goddard Earth Observing System (GEOS-5) and confronted with Tb observations from the Soil Moisture Ocean Salinity (SMOS) mission. The MCMC algorithm suggests that the relative uncertainty of the RTM parameter estimates is typically less than 25% of the maximum a posteriori density (MAP) parameter value. Furthermore, the actual root-mean-square differences in long-term Tb averages and standard deviations are found to be consistent with the respective estimated total simulation and observation error standard deviations of 3.1 K and 2.4 K. It is also shown that the MAP parameter values estimated through MCMC simulation are in close agreement with those obtained with Particle Swarm Optimization (PSO).

  6. Assessment issues in the testing of children at school entry.

    PubMed

    Rock, Donald A; Stenner, A Jackson

    2005-01-01

    The authors introduce readers to the research documenting racial and ethnic gaps in school readiness. They describe the key tests, including the Peabody Picture Vocabulary Test (PPVT), the Early Childhood Longitudinal Study (ECLS), and several intelligence tests, and describe how they have been administered to several important national samples of children. Next, the authors review the different estimates of the gaps and discuss how to interpret these differences. In interpreting test results, researchers use the statistical term "standard deviation" to compare scores across the tests. On average, the tests find a gap of about 1 standard deviation. The ECLS-K estimate is the lowest, about half a standard deviation. The PPVT estimate is the highest, sometimes more than 1 standard deviation. When researchers adjust those gaps statistically to take into account different outside factors that might affect children's test scores, such as family income or home environment, the gap narrows but does not disappear. Why such different estimates of the gap? The authors consider explanations such as differences in the samples, racial or ethnic bias in the tests, and whether the tests reflect different aspects of school "readiness," and conclude that none is likely to explain the varying estimates. Another possible explanation is the Spearman Hypothesis: that all tests are imperfect measures of a general ability construct, g; the more highly a given test correlates with g, the larger the gap will be. But the Spearman Hypothesis, too, leaves questions to be investigated. A gap of 1 standard deviation may not seem large, but the authors show clearly how it results in striking disparities in the performance of black and white students and why it should be of serious concern to policymakers.

  7. Outcome of facial physiotherapy in patients with prolonged idiopathic facial palsy.

    PubMed

    Watson, G J; Glover, S; Allen, S; Irving, R M

    2015-04-01

    This study investigated whether patients who remain symptomatic more than a year following idiopathic facial paralysis gain benefit from tailored facial physiotherapy. A two-year retrospective review was conducted of all symptomatic patients. Data collected included: age, gender, duration of symptoms, Sunnybrook facial grading system scores pre-treatment and at last visit, and duration of treatment. The study comprised 22 patients (with a mean age of 50.5 years (range, 22-75 years)) who had been symptomatic for more than a year following idiopathic facial paralysis. The mean duration of symptoms was 45 months (range, 12-240 months). The mean duration of follow up was 10.4 months (range, 2-36 months). Prior to treatment, the mean Sunnybrook facial grading system score was 59 (standard deviation = 3.5); this had increased to 83 (standard deviation = 2.7) at the last visit, with an average improvement in score of 23 (standard deviation = 2.9). This increase was significant (p < 0.001). Tailored facial therapy can improve facial grading scores in patients who remain symptomatic for prolonged periods.

  8. Evaluation of scaling invariance embedded in short time series.

    PubMed

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that, by using the standard central moving average de-trending procedure, this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximately oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that by means of the proposed method one can precisely estimate the Shannon entropy from limited records.

  9. Evaluation of Scaling Invariance Embedded in Short Time Series

    PubMed Central

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that, by using the standard central moving average de-trending procedure, this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximately oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that by means of the proposed method one can precisely estimate the Shannon entropy from limited records. PMID:25549356

  10. Analytical probabilistic proton dose calculation and range uncertainties

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Hennig, P.; Oelfke, U.

    2014-03-01

    We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫ p(z) d(z) dz and ∫ p(z) d(z)² dz required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μk, widths δk, and weights ωk of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
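The closed-form tractability claimed here rests on the standard Gaussian-times-Gaussian integral. A one-component sketch, with illustrative parameters that are not taken from the paper, checks the closed form against direct quadrature:

```python
import numpy as np

def expected_component(mu_z, sig_z, mu_k, delta_k, omega_k):
    """Closed-form E[omega_k * exp(-(z - mu_k)^2 / (2 delta_k^2))]
    for z ~ N(mu_z, sig_z^2): a Gaussian-times-Gaussian integral."""
    s2 = sig_z ** 2 + delta_k ** 2
    return omega_k * np.sqrt(delta_k ** 2 / s2) * np.exp(-(mu_z - mu_k) ** 2 / (2 * s2))

# Illustrative parameters (radiological depths in cm); not values from the paper.
mu_z, sig_z = 10.0, 0.5          # mean depth and range uncertainty
mu_k, delta_k, omega_k = 10.3, 0.4, 1.0

closed = expected_component(mu_z, sig_z, mu_k, delta_k, omega_k)

# Numerical check by direct quadrature over the depth distribution
z = np.linspace(mu_z - 8, mu_z + 8, 200_001)
pdf = np.exp(-(z - mu_z) ** 2 / (2 * sig_z ** 2)) / (sig_z * np.sqrt(2 * np.pi))
dose = omega_k * np.exp(-(z - mu_k) ** 2 / (2 * delta_k ** 2))
numeric = np.sum(pdf * dose) * (z[1] - z[0])

print(closed, numeric)
```

Summing such terms over the ten fitted components, and the analogous result for d(z)², gives the expected dose and its variance without sampling.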

  11. Tendon transfer fixation: comparing a tendon to tendon technique vs. bioabsorbable interference-fit screw fixation.

    PubMed

    Sabonghy, Eric Peter; Wood, Robert Michael; Ambrose, Catherine Glauber; McGarvey, William Christopher; Clanton, Thomas Oscar

    2003-03-01

    Tendon transfer techniques in the foot and ankle are used for tendon ruptures, deformities, and instabilities. This fresh-cadaver study compares tendon fixation strength in 10 paired specimens, using either a tendon-to-tendon fixation technique or a 7 x 20-25 mm bioabsorbable interference-fit screw. Load at failure of the tendon-to-tendon fixation method averaged 279 N (standard deviation 81 N) and that of the bioabsorbable screw 148 N (standard deviation 72 N) [p = 0.0008]. Bioabsorbable interference-fit screws in these specimens show decreased fixation strength relative to the traditional fixation technique. However, the mean bioabsorbable screw fixation strength of 148 N provides physiologic strength at the tendon-bone interface.

  12. The linear sizes tolerances and fits system modernization

    NASA Astrophysics Data System (ADS)

    Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.

    2018-04-01

    The study addresses the urgent topic of ensuring the quality of technical products through the tolerancing of component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, first, to classify as linear sizes the additional linear coordinating sizes that determine the location of detail elements and, second, to justify the basic deviation of the tolerance interval for an element's linear size. Geometrical modeling of real detail elements, together with analytical and experimental methods, is used in the research. It is shown that linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select in the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The main deviation of this coordinating tolerance is the average zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system are retained for normalizing the accuracy of the elements' linear sizes, with a fundamental change in the basic deviation of all tolerance intervals: the maximum deviation corresponding to the limit of the element material, namely EI, the lower deviation, for the sizes of internal elements (holes), and es, the upper deviation, for the sizes of external elements (shafts). It is the maximum-material sizes that are involved in the mating of shafts and holes and determine the type of fit.

  13. Plant functional traits improve diversity-based predictions of temporal stability of grassland productivity

    USDA-ARS?s Scientific Manuscript database

    Aboveground net primary productivity (ANPP) varies in response to temporal fluctuations in weather. Temporal stability (mean/standard deviation) of community ANPP may be increased, on average, by increasing plant species richness, but stability also may differ widely at a given richness level imply...

  14. Comparison of spectral estimators for characterizing fractionated atrial electrograms

    PubMed Central

    2013-01-01

    Background Complex fractionated atrial electrograms (CFAE) acquired during atrial fibrillation (AF) are commonly assessed using the discrete Fourier transform (DFT), but this can lead to inaccuracy. In this study, spectral estimators derived by averaging the autocorrelation function at lags were compared to the DFT. Method Bipolar CFAE of at least 16 s duration were obtained from pulmonary vein ostia and left atrial free wall sites (9 paroxysmal and 10 persistent AF patients). Power spectra were computed using the DFT and three other methods: 1. a novel spectral estimator based on signal averaging (NSE), 2. the NSE with harmonic removal (NSH), and 3. the autocorrelation function average at lags (AFA). Three spectral parameters were calculated: 1. the largest fundamental spectral peak, known as the dominant frequency (DF), 2. the DF amplitude (DA), and 3. the mean spectral profile (MP), which quantifies noise floor level. For each spectral estimator and parameter, the significance of the difference between paroxysmal and persistent AF was determined. Results For all estimators, mean DA and mean DF values were higher in persistent AF, while the mean MP value was higher in paroxysmal AF. The differences in means between paroxysmals and persistents were highly significant for 3/3 NSE and NSH measurements and for 2/3 DFT and AFA measurements (p<0.001). For all estimators, the standard deviation in DA and MP values were higher in persistent AF, while the standard deviation in DF value was higher in paroxysmal AF. Differences in standard deviations between paroxysmals and persistents were highly significant in 2/3 NSE and NSH measurements, in 1/3 AFA measurements, and in 0/3 DFT measurements. Conclusions Measurements made from all four spectral estimators were in agreement as to whether the means and standard deviations in three spectral parameters were greater in CFAEs acquired from paroxysmal or in persistent AF patients. 
Since the measurements were consistent, use of two or more of these estimators for power spectral analysis can assist in evaluating CFAE more objectively and accurately, which may lead to improved clinical outcomes. Since the most significant differences overall were achieved using the NSE and NSH estimators, parameters measured from their spectra will likely be the most useful for detecting and discerning electrophysiologic differences in the AF substrate based upon frequency analysis of CFAE. PMID:23855345
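The three spectral parameters can be sketched for the baseline DFT estimator on a synthetic signal; the sampling rate, search band, and signal model below are assumptions for illustration, and the NSE, NSH, and AFA estimators of the study are not reproduced:

```python
import numpy as np

fs = 1000.0                              # sampling rate in Hz (assumed)
t = np.arange(0, 16.0, 1 / fs)           # a 16 s record, as in the study
rng = np.random.default_rng(3)

# Synthetic stand-in for an electrogram: 6.5 Hz fundamental plus noise
sig = np.sin(2 * np.pi * 6.5 * t) + 0.5 * rng.standard_normal(t.size)

power = np.abs(np.fft.rfft(sig)) ** 2    # DFT power spectrum
freqs = np.fft.rfftfreq(sig.size, 1 / fs)

band = (freqs >= 3) & (freqs <= 12)      # assumed physiologic search band
df = freqs[band][np.argmax(power[band])] # dominant frequency (DF)
da = power[band].max()                   # DF amplitude (DA)
mp = power[band].mean()                  # mean spectral profile (noise-floor proxy)
print(f"DF = {df:.2f} Hz")
```

With a 16 s record the frequency resolution is fs/N = 0.0625 Hz, which is why record length matters for resolving the DF.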

  15. On-site audits to investigate the quality of radiation physics of radiation therapy institutions in the Republic of Korea.

    PubMed

    Park, Jong Min; Park, So-Yeon; Chun, Minsoo; Kim, Sang-Tae

    2017-08-01

    To investigate and improve the domestic standard of radiation therapy in the Republic of Korea. On-site audits were performed for 13 institutions in the Republic of Korea. Six items were investigated by on-site visits to each radiation therapy institution: collimator, gantry, and couch rotation isocenter checks; coincidence between light and radiation fields; photon beam flatness and symmetry; electron beam flatness and symmetry; physical wedge transmission factors; and photon and electron beam outputs. The average deviations of the mechanical collimator, gantry, and couch rotation isocenters were less than 1 mm. Those of the radiation isocenter were also less than 1 mm. The average difference between light and radiation fields was 0.9 ± 0.6 mm for a field size of 20 cm × 20 cm. The average values of flatness and symmetry of the photon beams were 2.9% ± 0.6% and 1.1% ± 0.7%, respectively. Those of the electron beams were 2.5% ± 0.7% and 0.6% ± 1.0%, respectively. Every institution except one showed wedge transmission factor deviations of less than 2%. The output deviations of both photon and electron beams were less than ±3% for every institution. Through the on-site audit program, we could effectively detect inappropriately operating linacs and provide recommendations. The standard of radiation therapy in Korea is expected to improve through such on-site audits. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  16. In-depth analysis and discussions of water absorption-typed high power laser calorimeter

    NASA Astrophysics Data System (ADS)

    Wei, Ji Feng

    2017-02-01

    In high-power and high-energy laser measurement, absorber materials can be easily destroyed under long-term direct laser irradiation. In order to improve the calorimeter's measuring capacity, a measuring system directly using water flow as the absorber medium was built. The system's basic principles and the design parameters of its major parts are elaborated. The system's measuring capacity, the laser working modes, and the effects of major parameters were analyzed in depth. Moreover, the factors that may affect measurement accuracy were analyzed and discussed, and the specific control measures and methods were elaborated. The self-calibration and normal calibration experiments show that this calorimeter has very high accuracy. In electrical calibration, the average correction coefficient is only 1.015, with a standard deviation of only 0.5%. In calibration experiments, the standard deviation relative to a middle-power standard calorimeter is only 1.9%.

  17. Retinal nerve fiber layer thickness measured with optical coherence tomography is related to visual function in glaucomatous eyes.

    PubMed

    El Beltagi, Tarek A; Bowd, Christopher; Boden, Catherine; Amini, Payam; Sample, Pamela A; Zangwill, Linda M; Weinreb, Robert N

    2003-11-01

    To determine the relationship between areas of glaucomatous retinal nerve fiber layer thinning identified by optical coherence tomography and areas of decreased visual field sensitivity identified by standard automated perimetry in glaucomatous eyes. Retrospective observational case series. Forty-three patients with glaucomatous optic neuropathy identified by optic disc stereo photographs and standard automated perimetry mean deviations >-8 dB were included. Participants were imaged with optical coherence tomography within 6 months of reliable standard automated perimetry testing. The location and number of optical coherence tomography clock hour retinal nerve fiber layer thickness measures outside normal limits were compared with the location and number of standard automated perimetry visual field zones outside normal limits. Further, the relationship between the deviation from normal optical coherence tomography-measured retinal nerve fiber layer thickness at each clock hour and the average pattern deviation in each visual field zone was examined by using linear regression (R(2)). The retinal nerve fiber layer areas most frequently outside normal limits were the inferior and inferior temporal regions. The least sensitive visual field zones were in the superior hemifield. Linear regression results (R(2)) showed that deviation from the normal retinal nerve fiber layer thickness at optical coherence tomography clock hour positions 6 o'clock, 7 o'clock, and 8 o'clock (inferior and inferior temporal) was best correlated with standard automated perimetry pattern deviation in visual field zones corresponding to the superior arcuate and nasal step regions (R(2) range, 0.34-0.57). These associations were much stronger than those between clock hour position 6 o'clock and the visual field zone corresponding to the inferior nasal step region (R(2) = 0.01). 
Localized retinal nerve fiber layer thinning, measured by optical coherence tomography, is topographically related to decreased localized standard automated perimetry sensitivity in glaucoma patients.

  18. Model averaging and muddled multimodel inferences.

    PubMed

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty, but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales; therefore, averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions.
The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
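The partial standard deviation the author recommends for diagnosing changing scales under multicollinearity can be sketched numerically. A hedged illustration in NumPy, using Bring's VIF-based form s_j · sqrt(1/VIF_j) · sqrt((n−1)/(n−p)); the simulated data and variable names are my own, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)   # strongly collinear with x1
x3 = rng.normal(size=n)              # independent predictor
X = np.column_stack([x1, x2, x3])
y = x1 + x3 + rng.normal(size=n)

def partial_sd(X, j):
    # Partial SD of predictor j: s_j * sqrt(1/VIF_j) * sqrt((n-1)/(n-p)).
    n, p = X.shape
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(n), others])
    coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    rss = np.sum((X[:, j] - A @ coef) ** 2)
    tss = np.sum((X[:, j] - X[:, j].mean()) ** 2)
    vif = tss / rss                      # equals 1 / (1 - R^2)
    return X[:, j].std(ddof=1) * np.sqrt(1.0 / vif) * np.sqrt((n - 1) / (n - p))

# OLS coefficients (dropping the intercept), rescaled by partial SDs.
A = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(A, y, rcond=None)[0][1:]
standardized = [beta[j] * partial_sd(X, j) for j in range(p)]
```

The collinear predictor's partial SD collapses relative to its ordinary SD, which is exactly the scale change the abstract warns about when naively averaging coefficients across models.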

  19. Model averaging and muddled multimodel inferences

    USGS Publications Warehouse

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty, but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales; therefore, averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions.
The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.

  20. Assessment of Conventional Teaching Procedures: Implications for Gifted Learners

    ERIC Educational Resources Information Center

    Alenizi, Mogbel Aid K.

    2016-01-01

    The present research aims to assess the conventional teaching procedures in the development of mathematical skills of the students with learning difficulties. The study group was made up of all the children with academic learning disorders in KSA. The research questions have been scrutinized from the averages and the standard deviation of the…

  1. The Treatment Effect of Grade Repetitions

    ERIC Educational Resources Information Center

    Mahjoub, Mohamed-Badrane

    2017-01-01

    This paper estimates the treatment effect of grade repetitions in French junior high schools, using a value-added test score as outcome and quarter of birth as instrument. With linear two-stage least squares, local average treatment effect is estimated at around 1.6 times the standard deviation of the achievement gain. With non-linear…

  2. New Evidence on the Relationship Between Climate and Conflict

    NASA Astrophysics Data System (ADS)

    Burke, M.

    2015-12-01

    We synthesize a large new body of research on the relationship between climate and conflict. We consider many types of human conflict, ranging from interpersonal conflict -- domestic violence, road rage, assault, murder, and rape -- to intergroup conflict -- riots, coups, ethnic violence, land invasions, gang violence, and civil war. After harmonizing statistical specifications and standardizing estimated effect sizes within each conflict category, we implement a meta-analysis that allows us to estimate the mean effect of climate variation on conflict outcomes as well as quantify the degree of variability in this effect size across studies. Looking across more than 50 studies, we find that deviations from moderate temperatures and precipitation patterns systematically increase the risk of conflict, often substantially, with average effects that are highly statistically significant. We find that contemporaneous temperature has the largest average effect by far, with each 1 standard deviation increase toward warmer temperatures increasing the frequency of contemporaneous interpersonal conflict by 2% and of intergroup conflict by more than 10%. We also quantify substantial heterogeneity in these effect estimates across settings.
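The pooled "mean effect of climate variation" described above comes from precision-weighted averaging of per-study standardized effects. A minimal fixed-effect sketch; the per-study numbers are illustrative, not the study's estimates:

```python
import math

# Invented per-study effects: % change in conflict risk per 1 SD of
# warming, with their standard errors (illustrative values only).
effects = [2.5, 1.8, 3.1, 2.2, 2.7]
ses = [0.6, 0.9, 0.8, 0.5, 0.7]

# Fixed-effect inverse-variance pooling: more precise studies weigh more.
weights = [1.0 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
z = pooled / pooled_se   # large |z| -> "highly statistically significant"
```

A random-effects model would additionally estimate the between-study variance to quantify the heterogeneity in effect sizes the abstract mentions.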

  3. Linking Initial Microstructure to ORR Related Property Degradation in SOFC Cathode: A Phase Field Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Y.; Cheng, T. -L.; Wen, Y. H.

    Microstructure evolution driven by thermal coarsening is an important factor for the loss of oxygen reduction reaction rates in SOFC cathode. In this work, the effect of an initial microstructure on the microstructure evolution in SOFC cathode is investigated using a recently developed phase field model. Specifically, we tune the phase fraction, the average grain size, the standard deviation of the grain size and the grain shape in the initial microstructure, and explore their effect on the evolution of the grain size, the density of triple phase boundary, the specific surface area and the effective conductivity in LSM-YSZ cathodes. It is found that the degradation rate of TPB density and SSA of LSM is lower with less LSM phase fraction (with constant porosity assumed) and greater average grain size, while the degradation rate of effective conductivity can also be tuned by adjusting the standard deviation of grain size distribution and grain aspect ratio. The implication of this study on the designing of an optimal initial microstructure of SOFC cathodes is discussed.

  4. Linking Initial Microstructure to ORR Related Property Degradation in SOFC Cathode: A Phase Field Simulation

    DOE PAGES

    Lei, Y.; Cheng, T. -L.; Wen, Y. H.

    2017-07-05

    Microstructure evolution driven by thermal coarsening is an important factor for the loss of oxygen reduction reaction rates in SOFC cathode. In this work, the effect of an initial microstructure on the microstructure evolution in SOFC cathode is investigated using a recently developed phase field model. Specifically, we tune the phase fraction, the average grain size, the standard deviation of the grain size and the grain shape in the initial microstructure, and explore their effect on the evolution of the grain size, the density of triple phase boundary, the specific surface area and the effective conductivity in LSM-YSZ cathodes. It is found that the degradation rate of TPB density and SSA of LSM is lower with less LSM phase fraction (with constant porosity assumed) and greater average grain size, while the degradation rate of effective conductivity can also be tuned by adjusting the standard deviation of grain size distribution and grain aspect ratio. The implication of this study on the designing of an optimal initial microstructure of SOFC cathodes is discussed.

  5. Groundwater-surface water interactions across scales in a boreal landscape investigated using a numerical modelling approach

    NASA Astrophysics Data System (ADS)

    Jutebring Sterte, Elin; Johansson, Emma; Sjöberg, Ylva; Huseby Karlsen, Reinert; Laudon, Hjalmar

    2018-05-01

    Groundwater and surface-water interactions are regulated by catchment characteristics and complex inter- and intra-annual variations in climatic conditions that are not yet fully understood. Our objective was to investigate the influence of catchment characteristics and freeze-thaw processes on surface and groundwater interactions in a boreal landscape, the Krycklan catchment in Sweden. We used a numerical modelling approach and sub-catchment evaluation method to identify and evaluate fundamental catchment characteristics and processes. The model reproduced observed stream discharge patterns of the 14 sub-catchments and the dynamics of the 15 groundwater wells with an average accumulated discharge error of 1% (15% standard deviation) and an average groundwater-level mean error of 0.1 m (0.23 m standard deviation). We show how peatland characteristics dampen the effect of intense rain, and how soil freeze-thaw processes regulate surface and groundwater partitioning during snowmelt. With these results, we demonstrate the importance of defining, understanding and quantifying the role of landscape heterogeneity and sub-catchment characteristics for accurately representing catchment hydrological functioning.

  6. Potentiometric sensors for the selective determination of sulbutiamine.

    PubMed

    Ahmed, M A; Elbeshlawy, M M

    1999-11-01

    Five novel polyvinyl chloride (PVC) matrix membrane sensors for the selective determination of the sulbutiamine (SBA) cation are described. These sensors are based on molybdate, tetraphenylborate, reineckate, phosphotungstate and phosphomolybdate as possible ion-pairing agents. The sensors display rapid, near-Nernstian, stable response over a relatively wide concentration range (1x10(-2)-1x10(-6) M) of sulbutiamine, with calibration slopes of 28-32.6 mV decade(-1) over a reasonable pH range (2-6). The proposed sensors proved to have a good selectivity for SBA over some inorganic and organic cations. The five potentiometric sensors were applied successfully in the determination of SBA in a pharmaceutical preparation (arcalion-200) using both direct potentiometry and potentiometric titration. Direct potentiometric determination of microgram quantities of SBA gave average recoveries of 99.4% and 99.3% with mean standard deviations of 0.7 and 0.3 for pure SBA and the arcalion-200 formulation, respectively. Potentiometric titration of milligram quantities of SBA gave average recoveries of 99.3% and 98.7% with mean standard deviations of 0.7 and 1.2 for pure SBA and the arcalion-200 formulation, respectively.

  7. Apparent diffusion coefficient histogram metrics correlate with survival in diffuse intrinsic pontine glioma: a report from the Pediatric Brain Tumor Consortium

    PubMed Central

    Poussaint, Tina Young; Vajapeyam, Sridhar; Ricci, Kelsey I.; Panigrahy, Ashok; Kocak, Mehmet; Kun, Larry E.; Boyett, James M.; Pollack, Ian F.; Fouladi, Maryam

    2016-01-01

    Background Diffuse intrinsic pontine glioma (DIPG) is associated with poor survival regardless of therapy. We used volumetric apparent diffusion coefficient (ADC) histogram metrics to determine associations with progression-free survival (PFS) and overall survival (OS) at baseline and after radiation therapy (RT). Methods Baseline and post-RT quantitative ADC histograms were generated from fluid-attenuated inversion recovery (FLAIR) images and enhancement regions of interest. Metrics assessed included number of peaks (ie, unimodal or bimodal), mean and median ADC, standard deviation, mode, skewness, and kurtosis. Results Based on FLAIR images, the majority of tumors had unimodal peaks with significantly shorter average survival. Pre-RT FLAIR mean, mode, and median values were significantly associated with decreased risk of progression; higher pre-RT ADC values had longer PFS on average. Pre-RT FLAIR skewness and standard deviation were significantly associated with increased risk of progression; higher pre-RT FLAIR skewness and standard deviation had shorter PFS. Nonenhancing tumors at baseline showed higher ADC FLAIR mean values, lower kurtosis, and higher PFS. For enhancing tumors at baseline, bimodal enhancement histograms had much worse PFS and OS than unimodal cases and significantly lower mean peak values. Enhancement in tumors only after RT led to significantly shorter PFS and OS than in patients with baseline or no baseline enhancement. Conclusions ADC histogram metrics in DIPG demonstrate significant correlations between diffusion metrics and survival, with lower diffusion values (increased cellularity), increased skewness, and enhancement associated with shorter survival, requiring future investigations in large DIPG clinical trials. PMID:26487690
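The histogram metrics assessed above (mean, median, standard deviation, skewness, kurtosis) are standard moment statistics computed over the voxel ADC values in a region of interest. A minimal sketch using population (biased) moment definitions; clinical pipelines may use sample-corrected variants, and the toy values are not ADC data:

```python
import statistics

def histogram_metrics(values):
    # Moment-based summary of a voxel-value histogram (population defs):
    # skewness = m3 / sd^3, excess kurtosis = m4 / sd^4 - 3.
    n = len(values)
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    skew = sum((v - mean) ** 3 for v in values) / (n * sd ** 3)
    kurt = sum((v - mean) ** 4 for v in values) / (n * sd ** 4) - 3.0
    return mean, statistics.median(values), sd, skew, kurt

# Symmetric toy "histogram": zero skewness, negative excess kurtosis.
m, med, sd, skew, kurt = histogram_metrics([1.0, 2.0, 3.0, 4.0, 5.0])
```

The number-of-peaks metric (unimodal vs. bimodal) would additionally require a mode-counting step, e.g. on a kernel density estimate of the same values.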

  8. Mass balance, meteorological, ice motion, surface altitude, runoff, and ice thickness data at Gulkana Glacier, Alaska, 1995 balance year

    USGS Publications Warehouse

    March, Rod S.

    2000-01-01

    The 1995 measured winter snow, maximum winter snow, net, and annual balances in the Gulkana Glacier basin were evaluated on the basis of meteorological, hydrological, and glaciological data obtained in the basin. Averaged over the glacier, the measured winter snow balance was 0.94 meter on April 19, 1995, 0.6 standard deviation below the long-term average; the maximum winter snow balance, 0.94 meter, was reached on April 25, 1995; the net balance (from September 18, 1994 to August 29, 1995) was -0.70 meter, 0.76 standard deviation below the long-term average. The annual balance (October 1, 1994, to September 30, 1995) was -0.86 meter. Ice-surface motion and altitude changes measured at three index sites document seasonal ice speed and glacier-thickness changes. Annual stream runoff was 2.05 meters averaged over the basin, approximately equal to the long-term average. The 1976 ice-thickness data are reported from a single site near the highest measurement site (180 meters thick) and from two glacier cross profiles near the mid-glacier (270 meters thick on centerline) and low glacier (150 meters thick on centerline) measurement sites. A new area-altitude distribution determined from 1993 photogrammetry is reported. Area-averaged balances are reported from both the 1967 and 1993 area-altitude distribution so the reader may directly see the effect of the update. Briefly, loss of ablation area between 1967 and 1993 results in a larger weighting being applied to data from the upper glacier site and hence, increases calculated area-averaged balances. The balance increase is of the order of 15 percent for net balance.
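The "area-averaged" balances above are area-weighted means over the glacier's area-altitude distribution, which is why swapping the 1967 distribution for the 1993 one (with a smaller ablation area) raises the calculated balance. A toy sketch with invented band areas and balances, not the Gulkana data:

```python
def area_average(areas, balances):
    # Weight each altitude band's point balance by its share of area.
    total = sum(areas)
    return sum(a * b for a, b in zip(areas, balances)) / total

balance_m = [-2.1, -0.9, 0.3, 1.1]   # per-band balances, low to high altitude
areas_old = [4.0, 6.5, 5.0, 2.5]     # larger low-altitude (ablation) area
areas_new = [3.0, 6.0, 5.5, 3.0]     # ablation area lost; upper bands weigh more

b_old = area_average(areas_old, balance_m)
b_new = area_average(areas_new, balance_m)
```

With the same per-band balances, the newer distribution yields a less negative area-averaged balance, mirroring the ~15 percent effect described in the record.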

  9. Particle Image Velocimetry During Injection Molding

    NASA Astrophysics Data System (ADS)

    Bress, Thomas; Dowling, David

    2012-11-01

    Injection molding involves the unsteady non-isothermal flow of a non-Newtonian polymer melt. An optical-access mold has been used to perform particle image velocimetry (PIV) on molten polystyrene during injection molding. Velocimetry data of the mold-filling flow will be presented. Statistical assessments of the velocimetry data and scaled residuals of the continuity equation suggest that PIV can be conducted in molten plastics with an uncertainty of +/-2 percent. Simulations are often used to model polymer flow during injection molding to design molds and select processing parameters but it is difficult to determine the accuracy of these simulations due to a lack of in-mold velocimetry and melt-front progression data. Moldflow was used to simulate the filling of the optical-access mold, and these simulated results are compared to the appropriately-averaged time-varying velocity field measurements. Simulated results for melt-front progression are also compared with the experimentally observed flow fronts. The ratio of the experimentally measured average velocity magnitudes to the simulation magnitudes was found on average to be 0.99 with a standard deviation of 0.25, and the difference in velocity orientations was found to be 0.9 degree with a standard deviation of 3.2 degrees.

  10. Four-month Moon and Mars crew water utilization study conducted at the Flashline Mars Arctic Research Station, Devon Island, Nunavut

    NASA Astrophysics Data System (ADS)

    Bamsey, M.; Berinstain, A.; Auclair, S.; Battler, M.; Binsted, K.; Bywaters, K.; Harris, J.; Kobrick, R.; McKay, C.

    2009-04-01

    A categorized water usage study was undertaken at the Flashline Mars Arctic Research Station on Devon Island, Nunavut in the High Canadian Arctic. This study was conducted as part of a long duration four-month Mars mission simulation during the summer of 2007. The study determined that the crew of seven averaged 82.07 L/day over the expedition (standard deviation 22.58 L/day). The study also incorporated a Mars Time Study phase which determined that an average of 12.12 L/sol of water was required for each crewmember. Drinking, food preparation, hand/face, oral, dish wash, clothes wash, shower, shaving, cleaning, engineering, science, plant growth and medical water were each individually monitored throughout the detailed study phases. It was determined that implementing the monitoring program itself resulted in an approximate water savings of 1.5 L/day per crewmember. The seven person crew averaged 202 distinct water draws a day (standard deviation 34) with high water use periods focusing around meal times. No statistically significant correlation was established between total water use and EVA or exercise duration. Study results suggest that current crew water utilization estimates for long duration planetary surface stays are more than two times greater than that required.

  11. An experimental investigation of gas fuel injection with X-ray radiography

    DOE PAGES

    Swantek, Andrew B.; Duke, D. J.; Kastengren, A. L.; ...

    2017-04-21

    In this paper, an outward-opening compressed natural gas direct-injection fuel injector has been studied with single-shot x-ray radiography. Three-dimensional simulations have also been performed to complement the x-ray data. Argon was used as a surrogate gas for experimental and safety reasons. This technique allows the acquisition of a quantitative mapping of the ensemble-average and standard deviation of the projected density throughout the injection event. Two-dimensional ensemble-average and standard-deviation data are presented to investigate the quasi-steady-state behavior of the jet. Upstream of the stagnation zone, minimal shot-to-shot variation is observed. Downstream of the stagnation zone, bulk mixing is observed as the jet transitions to a subsonic turbulent jet. From the time-averaged data, individual slices at all downstream locations are extracted and an Abel inversion is performed to compute the radial density distribution, which is interpolated to create three-dimensional visualizations. The Abel reconstructions reveal that upstream of the stagnation zone, the gas forms an annulus with high argon density and large density gradients. Inside this annulus, a recirculation region with low argon density exists. Downstream, the jet transitions to a fully turbulent jet with Gaussian argon density distributions. These experimental data are intended to serve as a quantitative benchmark for simulations.

  12. Sterile Basics of Compounding: Relationship Between Syringe Size and Dosing Accuracy.

    PubMed

    Kosinski, Tracy M; Brown, Michael C; Zavala, Pedro J

    2018-01-01

    The purpose of this study was to investigate the accuracy and reproducibility of a 2-mL volume injection using a 3-mL and 10-mL syringe with pharmacy student compounders. An exercise was designed to assess each student's accuracy in compounding a sterile preparation with the correct 4-mg strength using a 3-mL and 10-mL syringe. The average ondansetron dose when compounded with the 3-mL syringe was 4.03 mg (standard deviation ± 0.45 mg), which was not statistically significantly different from the intended 4-mg desired dose (P=0.497). The average ondansetron dose when compounded with the 10-mL syringe was 4.18 mg (standard deviation ± 0.68 mg), which was statistically significantly different from the intended 4-mg desired dose (P=0.002). Additionally, there also was a statistically significant difference in the average ondansetron dose compounded using a 3-mL syringe (4.03 mg) and a 10-mL syringe (4.18 mg) (P=0.027). The accuracy and reproducibility of the 2-mL desired dose volume decreased as the compounding syringe size increased from 3 mL to 10 mL. Copyright© by International Journal of Pharmaceutical Compounding, Inc.
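The reported comparisons against the intended 4-mg dose are one-sample t-tests on the replicate doses. A self-contained sketch with invented replicate values (not the study's data):

```python
import math
import statistics

def one_sample_t(xs, mu0):
    # t statistic for H0: the true mean of xs equals mu0.
    n = len(xs)
    return (statistics.mean(xs) - mu0) / (statistics.stdev(xs) / math.sqrt(n))

# Invented replicate doses (mg); the target dose is 4.0 mg.
doses_3ml = [4.1, 3.9, 4.0, 4.05, 3.95, 4.02, 3.98, 4.08]
doses_10ml = [4.3, 4.1, 4.25, 4.0, 4.4, 4.15, 4.2, 4.35]

t3 = one_sample_t(doses_3ml, 4.0)    # small |t|: consistent with 4 mg
t10 = one_sample_t(doses_10ml, 4.0)  # large t: biased above 4 mg
```

Comparing t against a t distribution with n − 1 degrees of freedom gives the quoted P values; the two-syringe comparison (P=0.027) would use a two-sample test instead.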

  13. Enhanced Cumulative Sum Charts for Monitoring Process Dispersion

    PubMed Central

    Abujiya, Mu’azu Ramat; Riaz, Muhammad; Lee, Muhammad Hisyam

    2015-01-01

    The cumulative sum (CUSUM) control chart is widely used in industry for the detection of small and moderate shifts in process location and dispersion. For efficient monitoring of process variability, we present several CUSUM control charts for monitoring changes in the standard deviation of a normal process. The newly developed control charts, based on well-structured sampling techniques - extreme ranked set sampling, extreme double ranked set sampling and double extreme ranked set sampling - significantly enhance the CUSUM chart's ability to detect a wide range of shifts in process variability. The relative performances of the proposed CUSUM scale charts are evaluated in terms of the average run length (ARL) and standard deviation of run length for a point shift in variability. Moreover, for overall performance, we employ the average ratio ARL and average extra quadratic loss. A comparison of the proposed CUSUM control charts with the classical CUSUM R chart, the classical CUSUM S chart, the fast initial response (FIR) CUSUM R chart, the FIR CUSUM S chart, the ranked set sampling (RSS) based CUSUM R chart and the RSS based CUSUM S chart, among others, is presented. An illustrative example using a real dataset is given to demonstrate the practicability of the application of the proposed schemes. PMID:25901356
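A tabular CUSUM for dispersion accumulates the standardized excess of each subgroup's standard deviation over the in-control value and signals when the running sum crosses a decision interval h. A simplified sketch of that idea; the crude standardization and parameter choices here are illustrative, not the paper's ranked-set-sampling schemes:

```python
import statistics

def cusum_scale(subgroups, sigma0, k=0.25, h=2.0):
    # One-sided upper CUSUM on subgroup standard deviations: add the
    # standardized excess over sigma0 minus the allowance k; signal
    # when the cumulative sum exceeds the decision interval h.
    c_plus, signals = 0.0, []
    for i, sg in enumerate(subgroups):
        z = (statistics.stdev(sg) - sigma0) / sigma0
        c_plus = max(0.0, c_plus + z - k)
        if c_plus > h:
            signals.append(i)
            c_plus = 0.0   # restart after a signal
    return signals

# Ten in-control subgroups (spread ~0.76) followed by ten with doubled spread.
in_control = [[0.2, -0.8, 1.1, -0.4, 0.6]] * 10
shifted = [[0.4, -1.6, 2.2, -0.8, 1.2]] * 10
signals = cusum_scale(in_control + shifted, sigma0=0.76)
```

The average run length (ARL) the paper uses as its performance metric is simply the mean number of subgroups between such signals over many simulated runs.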

  14. An experimental investigation of gas fuel injection with X-ray radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swantek, Andrew B.; Duke, D. J.; Kastengren, A. L.

    In this paper, an outward-opening compressed natural gas direct-injection fuel injector has been studied with single-shot x-ray radiography. Three-dimensional simulations have also been performed to complement the x-ray data. Argon was used as a surrogate gas for experimental and safety reasons. This technique allows the acquisition of a quantitative mapping of the ensemble-average and standard deviation of the projected density throughout the injection event. Two-dimensional ensemble-average and standard-deviation data are presented to investigate the quasi-steady-state behavior of the jet. Upstream of the stagnation zone, minimal shot-to-shot variation is observed. Downstream of the stagnation zone, bulk mixing is observed as the jet transitions to a subsonic turbulent jet. From the time-averaged data, individual slices at all downstream locations are extracted and an Abel inversion is performed to compute the radial density distribution, which is interpolated to create three-dimensional visualizations. The Abel reconstructions reveal that upstream of the stagnation zone, the gas forms an annulus with high argon density and large density gradients. Inside this annulus, a recirculation region with low argon density exists. Downstream, the jet transitions to a fully turbulent jet with Gaussian argon density distributions. These experimental data are intended to serve as a quantitative benchmark for simulations.

  15. Measuring the rate of change of voice fundamental frequency in fluent speech during mental depression.

    PubMed

    Nilsonne, A; Sundberg, J; Ternström, S; Askenfelt, A

    1988-02-01

    A method of measuring the rate of change of fundamental frequency has been developed in an effort to find acoustic voice parameters that could be useful in psychiatric research. A minicomputer program was used to extract seven parameters from the fundamental frequency contour of tape-recorded speech samples: (1) the average rate of change of the fundamental frequency and (2) its standard deviation, (3) the absolute rate of fundamental frequency change, (4) the total reading time, (5) the percent pause time of the total reading time, (6) the mean, and (7) the standard deviation of the fundamental frequency distribution. The method is demonstrated on (a) a material consisting of synthetic speech and (b) voice recordings of depressed patients who were examined during depression and after improvement.
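The first three parameters above (average rate of F0 change, its standard deviation, and the absolute rate) reduce to first differences of the sampled F0 contour. A minimal sketch, assuming a uniformly sampled contour in Hz with invented values:

```python
import statistics

def f0_rate_stats(f0_hz, dt):
    # First-difference rate of F0 change (Hz/s) and its summary stats.
    rates = [(b - a) / dt for a, b in zip(f0_hz, f0_hz[1:])]
    return (statistics.mean(rates),                    # average rate of change
            statistics.stdev(rates),                   # its standard deviation
            statistics.mean(abs(r) for r in rates))    # absolute rate of change

# Invented F0 contour sampled every 10 ms.
mean_rate, sd_rate, abs_rate = f0_rate_stats(
    [120.0, 122.0, 125.0, 123.0, 121.0, 124.0], dt=0.01)
```

In practice the contour would first be segmented into voiced frames, with pauses excluded, before differencing; the remaining parameters in the record (reading time, pause percentage, F0 mean and SD) come directly from that segmentation.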

  16. [Detection of methyl carbamate and ethyl carbamate in yellow rice wine by gas chromatography-mass spectrometry].

    PubMed

    Wu, Ping-gu; Ma, Bing-jie; Wang, Li-yuan; Shen, Xiang-hong; Zhang, Jing; Tan, Ying; Jiang, Wei

    2013-11-01

    To establish a method for the simultaneous determination of methyl carbamate (MC) and ethyl carbamate (EC) in yellow rice wine by gas chromatography-mass spectrometry (GC/MS). MC and EC in yellow rice wine were derivatized with 9-xanthydrol, and the derivatives were then detected by GC/MS and quantitatively analyzed by the D5-EC isotope internal standard method. The linearity of MC and EC ranged from 2.0 µg/L to 400.0 µg/L, with correlation coefficients of 0.998 and 0.999, respectively. The limits of detection (LOD) and quantitation (LOQ) were 0.67 and 2.0 µg/kg, respectively. When MC and EC were added to yellow rice wine in the range of 2.0-300.0 µg/kg, the intraday average recovery rate was 78.8%-102.3% with a relative standard deviation of 3.2%-11.6%; the interday average recovery rate was 75.4%-101.3% with a relative standard deviation of 3.8%-13.4%. Twenty samples of yellow rice wine from supermarkets were analyzed using this method: the contents of MC were in the range of ND (not detected) to 1.2 µg/kg, with a detection rate of 15% (3/20), and the contents of EC were in the range of 18.6 µg/kg to 432.3 µg/kg, with an average level of 135.2 µg/kg. The method is simple, rapid and useful for the simultaneous determination of MC and EC in yellow rice wine.
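Recovery rate and relative standard deviation, the two validation figures quoted above, are straightforward to compute from spiked replicates. A sketch with invented numbers (not the study's data):

```python
import statistics

def recovery_stats(measured, spiked):
    # Percent recovery per replicate, its mean, and the RSD (%).
    rec = [100.0 * m / s for m, s in zip(measured, spiked)]
    mean_rec = statistics.mean(rec)
    return mean_rec, 100.0 * statistics.stdev(rec) / mean_rec

# Invented replicates spiked at 100 ug/kg; measured values in ug/kg.
mean_rec, rsd = recovery_stats([98.0, 95.5, 102.3, 99.1, 96.8], [100.0] * 5)
```

Intraday vs. interday figures differ only in whether the replicates come from one analytical run or from runs on separate days.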

  17. Reliability and Repetition Effect of the Center of Pressure and Kinematics Parameters That Characterize Trunk Postural Control During Unstable Sitting Test.

    PubMed

    Barbado, David; Moreside, Janice; Vera-Garcia, Francisco J

    2017-03-01

    Although unstable seat methodology has been used to assess trunk postural control, the reliability of the variables that characterize it remains unclear. To analyze reliability and learning effect of center of pressure (COP) and kinematic parameters that characterize trunk postural control performance in unstable seating. The relationships between kinematic and COP parameters also were explored. Test-retest reliability design. Biomechanics laboratory setting. Twenty-three healthy male subjects. Participants volunteered to perform 3 sessions at 1-week intervals, each consisting of five 70-second balancing trials. A force platform and a motion capture system were used to measure COP and pelvis, thorax, and spine displacements. Reliability was assessed through standard error of measurement (SEM) and intraclass correlation coefficients (ICC(2,1)) using 3 methods: (1) comparing the last trial score of each day; (2) comparing the best trial score of each day; and (3) calculating the average of the three last trial scores of each day. Standard deviation and mean velocity were calculated to assess balance performance. Although analyses of variance showed some differences in balance performance between days, these differences were not significant between days 2 and 3. Best result and average methods showed the greatest reliability. Mean velocity of the COP showed high reliability (0.71 < ICC < 0.86; 10.3 < SEM < 13.0), whereas standard deviation only showed a low to moderate reliability (0.37 < ICC < 0.61; 14.5 < SEM < 23.0). Regarding the kinematic variables, only pelvis displacement mean velocity achieved a high reliability using the average method (0.62 < ICC < 0.83; 18.8 < SEM < 23.1). Correlations between COP and kinematics were high only for mean velocity (0.45
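The ICC(2,1) and SEM indices used above come from a two-way random-effects ANOVA decomposition of the subject-by-session score table. A compact sketch following the Shrout and Fleiss convention, with SEM = SD·sqrt(1 − ICC); the data are invented:

```python
import statistics

def icc_2_1(data):
    # ICC(2,1): two-way random effects, absolute agreement, single
    # measurement (Shrout & Fleiss). Rows = subjects, cols = sessions.
    n, k = len(data), len(data[0])
    grand = statistics.mean(x for row in data for x in row)
    row_means = [statistics.mean(r) for r in data]
    col_means = [statistics.mean(c) for c in zip(*data)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Invented test-retest scores: 3 subjects, 2 sessions.
data = [[1.0, 1.2], [2.0, 2.1], [3.0, 3.3]]
icc = icc_2_1(data)
all_vals = [x for row in data for x in row]
sem = statistics.stdev(all_vals) * (1.0 - icc) ** 0.5   # SEM in score units
```

SEM expresses the measurement error in the units of the score itself, which is why the record reports it alongside the dimensionless ICC.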

  18. Reducing the standard deviation in multiple-assay experiments where the variation matters but the absolute value does not.

    PubMed

    Echenique-Robba, Pablo; Nelo-Bazán, María Alejandra; Carrodeguas, José A

    2013-01-01

    When the value of a quantity x for a number of systems (cells, molecules, people, chunks of metal, DNA vectors, and so on) is measured and the aim is to replicate the whole set again in different trials or assays, scientists often obtain quite different measurements despite efforts at a near-equal design. As a consequence, some systems' averages present standard deviations too large to render statistically significant results. This work presents a novel correction method of very low mathematical and numerical complexity that can reduce the standard deviation of such results and increase their statistical significance. Two conditions are to be met: the inter-system variations of x matter while its absolute value does not, and a similar tendency in the values of x must be present in the different assays (in other words, the results corresponding to different assays must present a high linear correlation). We demonstrate the improvements this method offers with a cell biology experiment, but it can be applied to any problem that conforms to the described structure and requirements, in any quantitative scientific field that deals with data subject to uncertainty.
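    The abstract does not give the correction formula, but its two requirements (only relative variation matters; assays are highly linearly correlated) suggest the general idea of removing per-assay offsets before pooling. A hypothetical sketch of that idea, centering each assay on its own mean so replicate SDs reflect only inter-system variation:

```python
def center_assays(assays):
    """Subtract each assay's mean so only inter-system variation remains."""
    return [[x - sum(a) / len(a) for x in a] for a in assays]

def per_system_sd(assays):
    """SD across assays for each system (column), n-1 denominator."""
    n_sys = len(assays[0])
    sds = []
    for j in range(n_sys):
        vals = [a[j] for a in assays]
        m = sum(vals) / len(vals)
        sds.append((sum((v - m) ** 2 for v in vals) / (len(vals) - 1)) ** 0.5)
    return sds

# Hypothetical triplicate assays: same inter-system trend,
# different overall offsets between assays
raw = [[1.0, 2.0, 4.0],
       [3.1, 4.0, 6.2],
       [5.0, 6.1, 8.0]]
before = per_system_sd(raw)
after = per_system_sd(center_assays(raw))
```

    When the assays share a common tendency, the per-system SDs after centering are much smaller than before, which is the kind of reduction the paper targets.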

  19. Diagnostic Consistency and Relation Between Optical Coherence Tomography and Standard Automated Perimetry in Primary Open-Angle Glaucoma.

    PubMed

    Toprak, Ibrahim; Yaylalı, Volkan; Yildirim, Cem

    2017-01-01

    To assess diagnostic consistency and relation between spectral-domain optical coherence tomography (SD-OCT) and standard automated perimetry (SAP) in patients with primary open-angle glaucoma (POAG). This retrospective study comprised 51 eyes of 51 patients with a confirmed diagnosis of POAG. The qualitative and quantitative SD-OCT parameters (retinal nerve fiber layer thicknesses [RNFL; average, superior, inferior, nasal and temporal], RNFL symmetry, rim area, disc area, average and vertical cup/disc [C/D] ratio and cup volume) were compared with parameters of SAP (mean deviation, pattern standard deviation, visual field index, and glaucoma hemifield test reports). Fifty-one eyes of 51 patients with POAG were recruited. Twenty-nine eyes (56.9%) had consistent RNFL and visual field (VF) damage. However, nine patients (17.6%) showed isolated RNFL damage on SD-OCT and 13 patients (25.5%) had abnormal VF test with normal RNFL. In patients with VF defect, age, average C/D ratio, vertical C/D ratio, and cup volume were significantly higher and rim area was lower when compared to those of the patients with normal VF. In addition to these parameters, worsening in average, superior, inferior, and temporal RNFL thicknesses and RNFL symmetry was significantly associated with consistent SD-OCT and SAP outcomes. In routine practice, patients with POAG can be manifested with inconsistent reports between SD-OCT and SAP. An older age, higher C/D ratio, larger cup volume, and lower rim area on SD-OCT appears to be associated with detectable VF damage. Moreover, additional worsening in RNFL parameters might reinforce diagnostic consistency between SD-OCT and SAP.

  20. A Meta-Analytic Study of Couple Interventions during the Transition to Parenthood

    ERIC Educational Resources Information Center

    Pinquart, Martin; Teubert, Daniela

    2010-01-01

    The present meta-analysis integrates results of 21 controlled couple-focused interventions with expectant and new parents. The interventions had, on average, small effects on couple communication (d = 0.28 standard deviation units) and psychological well-being (d = 0.21), as well as very small effects on couple adjustment (d = 0.09). Stronger…

  1. A FORMULA FOR HUMAN PAROTID FLUID COLLECTED WITHOUT EXOGENOUS STIMULATION.

    DTIC Science & Technology

    Parotid fluid was collected from 4,589 systemically healthy males between 17 and 22 years of age. Collection devices were placed with an absolute...secretion of the parotid gland. For all 4,589 subjects from the 8 experiments the mean rate of flow was 0.040 ml./minute with an average standard deviation of

  2. Computerized Silent Reading Rate and Strategy Instruction for Fourth Graders at Risk in Silent Reading Rate

    ERIC Educational Resources Information Center

    Niedo, Jasmin; Lee, Yen-Ling; Breznitz, Zvia; Berninger, Virginia W.

    2014-01-01

    Fourth graders whose silent word reading and/or sentence reading rate was, on average, two-thirds standard deviation below their oral reading of real and pseudowords and reading comprehension accuracy were randomly assigned to treatment ("n" = 7) or wait-listed ("n" = 7) control groups. Following nine sessions combining…

  3. Study of surge current effects on solid tantalum capacitors

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Results are presented of a 2,000-hour cycled life test program conducted to determine the effect of short-term surge current screening on approximately 47 microfarad/volt solid tantalum capacitors. The format provides average values and standard deviations of the parameters capacitance, dissipation factor, and equivalent series resistance at 120 Hz, 1 kHz, and 40 kHz.

  4. Parameter estimation method and updating of regional prediction equations for ungaged sites in the desert region of California

    USGS Publications Warehouse

    Barth, Nancy A.; Veilleux, Andrea G.

    2012-01-01

    The U.S. Geological Survey (USGS) is currently updating at-site flood frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows/low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the LP3 distribution. A regional skew value of zero from a previously published report was used with a new estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation and a mean model based on annual peak-discharge data for 33 USGS stations throughout California’s desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method for fitting the LP3 distribution to the logarithms of annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used for detecting multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability of standard deviation. Consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with a MSE of 0.03 log units. Yet drainage area was found to be statistically significant at explaining the site-to-site variability in mean. The linear WLS regional mean model based on drainage area had a pseudo-R² of 51 percent and a MSE of 0.32 log units.
The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.
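    Fitting the LP3 distribution as described starts from the mean, standard deviation, and skew of the log-transformed annual peaks. A minimal sketch of that sample-moment step (EMA and the multiple Grubbs-Beck test are beyond a few lines); the peak-discharge values are hypothetical:

```python
import math

def lp3_moments(peaks):
    """Sample mean, SD, and skew of log10 annual peak discharges."""
    logs = [math.log10(q) for q in peaks]
    n = len(logs)
    mean = sum(logs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    # Bulletin 17B-style adjusted sample skew coefficient
    skew = (n * sum((x - mean) ** 3 for x in logs)) / (
        (n - 1) * (n - 2) * sd ** 3)
    return mean, sd, skew

# Hypothetical annual peaks (cfs) at a desert-region gage
peaks = [120.0, 45.0, 300.0, 15.0, 80.0, 500.0, 60.0, 200.0, 30.0, 150.0]
m, s, g = lp3_moments(peaks)
```

    In the regional approach described above, these at-site values would then be replaced or weighted by the regional estimates (regional skew of zero, constant regional standard deviation, drainage-area-based mean).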

  5. Sensitivity of species to chemicals: dose-response characteristics for various test types (LC(50), LR(50) and LD(50)) and modes of action.

    PubMed

    Hendriks, A Jan; Awkerman, Jill A; de Zwart, Dick; Huijbregts, Mark A J

    2013-11-01

    While variable sensitivity of model species to common toxicants has been addressed in previous studies, a systematic analysis of inter-species variability for different test types, modes of action and species is as yet lacking. Hence, the aim of the present study was to identify similarities and differences in contaminant levels affecting cold-blooded and warm-blooded species administered via different routes. To that end, data on lethal water concentrations LC50, tissue residues LR50 and oral doses LD50 were collected from databases, each representing the largest of its kind. LC50 data were multiplied by a bioconcentration factor (BCF) to convert them to internal concentrations that allow for comparison among species. For each endpoint data set, we calculated the mean and standard deviation of species' lethal level per compound. Next, the means and standard deviations were averaged by mode of action. Both the means and standard deviations calculated depended on the number of species tested, which is at odds with quality standard setting procedures. Means calculated from BCF-converted LC50, LR50 and LD50 were largely similar, suggesting that different administration routes roughly yield similar internal levels. Levels for compounds interfering biochemically with elementary life processes were about one order of magnitude below those of narcotics disturbing membranes, and neurotoxic pesticides and dioxins induced death in even lower amounts. Standard deviations for LD50 data were similar across modes of action, while variability of LC50 values was lower for narcotics than for substances with a specific mode of action. The study indicates several directions to go for efficient use of available data in risk assessment and reduction of species testing. Copyright © 2013 Elsevier Inc. All rights reserved.
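    The pooling described (per-compound mean and SD of species' lethal levels, then averaging those statistics by mode of action) can be sketched as follows, assuming log10-transformed internal lethal concentrations; the records and compound names are hypothetical:

```python
import math
from collections import defaultdict

def pool_by_moa(records):
    """records: (mode_of_action, compound, log10 lethal level) tuples.
    Returns {moa: (mean of per-compound means, mean of per-compound SDs)}."""
    by_compound = defaultdict(list)
    moa_of = {}
    for moa, compound, level in records:
        by_compound[compound].append(level)
        moa_of[compound] = moa
    stats = defaultdict(list)
    for compound, levels in by_compound.items():
        n = len(levels)
        m = sum(levels) / n
        sd = (math.sqrt(sum((x - m) ** 2 for x in levels) / (n - 1))
              if n > 1 else 0.0)
        stats[moa_of[compound]].append((m, sd))
    return {moa: (sum(m for m, _ in ms) / len(ms),
                  sum(s for _, s in ms) / len(ms))
            for moa, ms in stats.items()}

# Hypothetical data: narcotics sit about one order of magnitude above
# compounds interfering with elementary life processes
records = [("narcosis", "cpdA", 0.9), ("narcosis", "cpdA", 1.1),
           ("narcosis", "cpdB", 1.4), ("narcosis", "cpdB", 1.6),
           ("specific", "cpdC", 0.0), ("specific", "cpdC", 0.2)]
pooled = pool_by_moa(records)
```

    Note the dependence on species count flagged in the abstract: with few species per compound, both the per-compound SD and its mode-of-action average are unstable.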

  6. Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock

    NASA Technical Reports Server (NTRS)

    Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.

    2001-01-01

    Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, the observed probability distribution of the wave field E is a power law P(bar)(log E), with the bar denoting this averaging over position. In this paper it is shown that stochastic growth theory (SGT) can explain a power-law spatially-averaged distribution P(bar)(log E) when the observed power-law variations of the mean and standard deviation of log E with position are combined with the lognormal statistics predicted by SGT at each location.

  7. Resistance Training Increases the Variability of Strength Test Scores

    DTIC Science & Technology

    2009-06-08

    standard deviations for pretest and posttest strength measurements. This information was recorded for every strength test used in a total of 377 samples...significant if the posttest standard deviation consistently was larger than the pretest standard deviation. This condition could be satisfied even if...the difference in the standard deviations was small. For example, the posttest standard deviation might be 1% larger than the pretest standard

  8. Design, development and clinical validation of computer-aided surgical simulation system for streamlined orthognathic surgical planning.

    PubMed

    Yuan, Peng; Mai, Huaming; Li, Jianfu; Ho, Dennis Chun-Yu; Lai, Yingying; Liu, Siting; Kim, Daeseung; Xiong, Zixiang; Alfi, David M; Teichgraeber, John F; Gateno, Jaime; Xia, James J

    2017-12-01

    There are many proven problems associated with traditional surgical planning methods for orthognathic surgery. To address these problems, we developed a computer-aided surgical simulation (CASS) system, the AnatomicAligner, to plan orthognathic surgery following our streamlined clinical protocol. The system includes six modules: image segmentation and three-dimensional (3D) reconstruction, registration and reorientation of models to neutral head posture, 3D cephalometric analysis, virtual osteotomy, surgical simulation, and surgical splint generation. The accuracy of the system was validated in a stepwise fashion: first to evaluate the accuracy of AnatomicAligner using 30 sets of patient data, then to evaluate the fitting of splints generated by AnatomicAligner using 10 sets of patient data. The industrial gold standard system, Mimics, was used as the reference. When comparing the results of segmentation, virtual osteotomy and transformation achieved with AnatomicAligner to the ones achieved with Mimics, the absolute deviation between the two systems was clinically insignificant. The average surface deviation between the two models after 3D model reconstruction in AnatomicAligner and Mimics was 0.3 mm with a standard deviation (SD) of 0.03 mm. All the average surface deviations between the two models after virtual osteotomy and transformations were smaller than 0.01 mm with a SD of 0.01 mm. In addition, the fitting of splints generated by AnatomicAligner was at least as good as the ones generated by Mimics. We successfully developed a CASS system, the AnatomicAligner, for planning orthognathic surgery following the streamlined planning protocol. The system has been proven accurate. AnatomicAligner will soon be available freely to the broader clinical and research communities.

  9. Design, development and clinical validation of computer-aided surgical simulation system for streamlined orthognathic surgical planning

    PubMed Central

    Yuan, Peng; Mai, Huaming; Li, Jianfu; Ho, Dennis Chun-Yu; Lai, Yingying; Liu, Siting; Kim, Daeseung; Xiong, Zixiang; Alfi, David M.; Teichgraeber, John F.; Gateno, Jaime

    2017-01-01

    Purpose There are many proven problems associated with traditional surgical planning methods for orthognathic surgery. To address these problems, we developed a computer-aided surgical simulation (CASS) system, the AnatomicAligner, to plan orthognathic surgery following our streamlined clinical protocol. Methods The system includes six modules: image segmentation and three-dimensional (3D) reconstruction, registration and reorientation of models to neutral head posture, 3D cephalometric analysis, virtual osteotomy, surgical simulation, and surgical splint generation. The accuracy of the system was validated in a stepwise fashion: first to evaluate the accuracy of AnatomicAligner using 30 sets of patient data, then to evaluate the fitting of splints generated by AnatomicAligner using 10 sets of patient data. The industrial gold standard system, Mimics, was used as the reference. Result When comparing the results of segmentation, virtual osteotomy and transformation achieved with AnatomicAligner to the ones achieved with Mimics, the absolute deviation between the two systems was clinically insignificant. The average surface deviation between the two models after 3D model reconstruction in AnatomicAligner and Mimics was 0.3 mm with a standard deviation (SD) of 0.03 mm. All the average surface deviations between the two models after virtual osteotomy and transformations were smaller than 0.01 mm with a SD of 0.01 mm. In addition, the fitting of splints generated by AnatomicAligner was at least as good as the ones generated by Mimics. Conclusion We successfully developed a CASS system, the AnatomicAligner, for planning orthognathic surgery following the streamlined planning protocol. The system has been proven accurate. AnatomicAligner will soon be available freely to the broader clinical and research communities. PMID:28432489

  10. The Relation of White-on-White Standard Automated Perimetry, Short Wavelength Perimetry, and Optic Coherence Tomography Parameters in Ocular Hypertension.

    PubMed

    Başkan, Ceyda; Köz, Özlem G; Duman, Rahmi; Gökçe, Sabite E; Yarangümeli, Ahmet A; Kural, Gülcan

    2016-12-01

    The purpose of this study is to examine the demographics, clinical properties, and the relation between white-on-white standard automated perimetry (SAP), short wavelength automated perimetry (SWAP), and optical coherence tomographic (OCT) parameters of patients with ocular hypertension. Sixty-one eyes of 61 patients diagnosed with ocular hypertension in the Ankara Numune Education and Research Hospital ophthalmology unit between January 2010 and January 2011 were included in this study. All patients underwent SAP and SWAP tests with the Humphrey visual field analyser using the 30-2 full-threshold test. Retinal nerve fiber layers (RNFL) and optic nerve heads of patients were evaluated with Stratus OCT. Positive correlations were detected between the SAP pattern standard deviation value and average intraocular pressure (P=0.017), maximum intraocular pressure (P=0.009), and vertical cup to disc (C/D) ratio (P=0.009). Positive correlations of the SWAP mean deviation value with inferior (P=0.032), nasal (P=0.005), and 6 o'clock quadrant RNFL thickness (P=0.028) and with the Imax/Tavg ratio (P=0.023), and a negative correlation with the Smax/Navg ratio (P=0.005), were detected. There was no correlation between central corneal thickness and peripapillary RNFL thicknesses (P>0.05). There was no relation between the SAP mean deviation and pattern standard deviation values and the RNFL thicknesses and optic disc parameters of the OCT. By contrast, significant correlations between several SWAP parameters and OCT parameters were detected. SWAP appeared to outperform achromatic SAP when the same 30-2 method was used.

  11. Accuracy of a pulse-coherent acoustic Doppler profiler in a wave-dominated flow

    USGS Publications Warehouse

    Lacy, J.R.; Sherwood, C.R.

    2004-01-01

    The accuracy of velocities measured by a pulse-coherent acoustic Doppler profiler (PCADP) in the bottom boundary layer of a wave-dominated inner-shelf environment is evaluated. The downward-looking PCADP measured velocities in eight 10-cm cells at 1 Hz. Velocities measured by the PCADP are compared to those measured by an acoustic Doppler velocimeter for wave orbital velocities up to 95 cm s-1 and currents up to 40 cm s-1. An algorithm for correcting ambiguity errors using the resolution velocities was developed. Instrument bias, measured as the average error in burst mean speed, is -0.4 cm s-1 (standard deviation = 0.8). The accuracy (root-mean-square error) of instantaneous velocities has a mean of 8.6 cm s-1 (standard deviation = 6.5) for eastward velocities (the predominant direction of waves), 6.5 cm s-1 (standard deviation = 4.4) for northward velocities, and 2.4 cm s-1 (standard deviation = 1.6) for vertical velocities. Both burst mean and root-mean-square errors are greater for bursts with ub ≥ 50 cm s-1. Profiles of burst mean speeds from the bottom five cells were fit to logarithmic curves: 92% of bursts with mean speed ≥ 5 cm s-1 have a correlation coefficient R2 > 0.96. In cells close to the transducer, instantaneous velocities are noisy, burst mean velocities are biased low, and bottom orbital velocities are biased high. With adequate blanking distances for both the profile and resolution velocities, the PCADP provides sufficient accuracy to measure velocities in the bottom boundary layer under moderately energetic inner-shelf conditions.
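    The abstract does not spell out its ambiguity-correction algorithm, but pulse-coherent profilers wrap measured velocities into ±Va (the ambiguity velocity), and a common unwrapping scheme uses the coarse but unambiguous resolution velocity to pick the integer number of wraps. A hypothetical sketch of that general idea (Va and the velocity values are made up):

```python
def unwrap_velocity(v_wrapped, v_resolution, v_ambiguity):
    """Add the multiple of 2*Va that brings the wrapped pulse-coherent
    velocity closest to the coarse resolution-velocity estimate."""
    span = 2.0 * v_ambiguity
    n_wraps = round((v_resolution - v_wrapped) / span)
    return v_wrapped + n_wraps * span

# A true velocity of 130 cm/s wraps to 30 cm/s when Va = 50 cm/s;
# a noisy resolution velocity of 125 cm/s still selects the right wrap.
corrected = unwrap_velocity(30.0, 125.0, 50.0)
```

    The scheme tolerates resolution-velocity noise up to Va before choosing the wrong wrap, which is why errors concentrate in energetic bursts (ub ≥ 50 cm s-1 in the study).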

  12. Keratoconus: The ABCD Grading System.

    PubMed

    Belin, M W; Duncan, J K

    2016-06-01

    To propose a new keratoconus classification/staging system that utilises current tomographic data and better reflects the anatomical and functional changes seen in keratoconus. A previously published normative database was reanalysed to generate both anterior and posterior average radii of curvature (ARC and PRC) taken from a 3.0 mm optical zone centred on the thinnest point of the cornea. Mean and standard deviations were recorded and anterior data were compared to the existing Amsler-Krumeich (AK) Classification. ARC, PRC, thinnest pachymetry and distance visual acuity were then used to construct a keratoconus classification. 672 eyes of 336 patients were analysed. Anterior and posterior values were 7.65 ± 0.236 mm and 6.26 ± 0.214 mm, respectively, and thinnest pachymetry values were 534.2 ± 30.36 µm. The ARC values were 2.63, 5.47 and 6.44 standard deviations from the mean values of stages 1-3 in the AK classification, respectively. PRC staging uses the same standard deviation gates. The pachymetric values differed by 4.42 and 7.72 standard deviations for stages 2 and 3, respectively. A new keratoconus staging incorporates anterior and posterior curvature, thinnest pachymetric values, and distance visual acuity and consists of stages 0-4 (5 stages). The proposed system closely matches the existing AK classification stages 1-4 on anterior curvature. As it incorporates posterior curvature and thickness measurements based on the thinnest point, rather than apical measurements, the new staging system better reflects the anatomical changes seen in keratoconus. Georg Thieme Verlag KG Stuttgart · New York.

  13. Scanning laser polarimetry using variable corneal compensation in the detection of glaucoma with localized visual field defects.

    PubMed

    Kook, Michael S; Cho, Hyun-soo; Seong, Mincheol; Choi, Jaewan

    2005-11-01

    To evaluate the ability of scanning laser polarimetry parameters and a novel deviation map algorithm to discriminate between healthy and early glaucomatous eyes with localized visual field (VF) defects confined to one hemifield. Prospective case-control study. Seventy glaucomatous eyes with localized VF defects and 66 normal controls. A Humphrey field analyzer 24-2 full-threshold test and scanning laser polarimetry with variable corneal compensation were used. We assessed the sensitivity and specificity of scanning laser polarimetry parameters, sensitivity and cutoff values for scanning laser polarimetry deviation map algorithms at different specificity values (80%, 90%, and 95%) in the detection of glaucoma, and correlations between the algorithms of scanning laser polarimetry and of the pattern deviation derived from Humphrey field analyzer testing. There were significant differences between the glaucoma group and normal subjects in the mean parametric values of the temporal, superior, nasal, inferior, temporal (TSNIT) average, superior average, inferior average, and TSNIT standard deviation (SD) (P<0.05). The sensitivity and specificity of each scanning laser polarimetry variable was as follows: TSNIT, 44.3% (95% confidence interval [CI], 39.8%-49.8%) and 100% (95.4%-100%); superior average, 30% (25.5%-34.5%) and 97% (93.5%-100%); inferior average, 45.7% (42.2%-49.2%) and 100% (95.8%-100%); and TSNIT SD, 30% (25.9%-34.1%) and 97% (93.2%-100%), respectively (when abnormal was defined as P<0.05). Based on nerve fiber indicator cutoff values of ≥30 and ≥51 to indicate glaucoma, sensitivities were 54.3% (50.1%-58.5%) and 10% (6.4%-13.6%), and specificities were 97% (93.2%-100%) and 100% (95.8%-100%), respectively. The range of areas under the receiver operating characteristic curves using the scanning laser polarimetry deviation map algorithm was 0.790 to 0.879.
Overall sensitivities combining each probability scale and severity score at 80%, 90%, and 95% specificities were 90.0% (95% CI, 86.4%-93.6%), 71.4% (67.4%-75.4%), and 60.0% (56.2%-63.8%), respectively. There was a statistically significant correlation between the scanning laser polarimetry severity score and the VF severity score (R2 = 0.360, P<0.001). Scanning laser polarimetry parameters may not be sufficiently sensitive to detect glaucomatous patients with localized VF damage. Our algorithm using the scanning laser polarimetry deviation map may enhance the understanding of scanning laser polarimetry printouts in terms of the locality, deviation size, and severity of localized retinal nerve fiber layer defects in eyes with localized VF loss.
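    The sensitivity/specificity figures at a given nerve fiber indicator cutoff follow directly from the counts of correctly flagged glaucomatous and healthy eyes. A minimal sketch with hypothetical scores and labels (not the study's data):

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity when score >= cutoff is called 'glaucoma'.
    labels: True for glaucomatous eyes, False for healthy eyes."""
    tp = sum(1 for s, y in zip(scores, labels) if y and s >= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y and s < cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if not y and s < cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if not y and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical nerve fiber indicator values: higher = more abnormal
scores = [55, 40, 28, 33, 12, 20, 8, 45]
labels = [True, True, True, True, False, False, False, True]
sens, spec = sens_spec(scores, labels, 30)
```

    Raising the cutoff trades sensitivity for specificity, which is the pattern the abstract reports between the ≥30 and ≥51 cutoffs.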

  14. Estimate of standard deviation for a log-transformed variable using arithmetic means and standard deviations.

    PubMed

    Quan, Hui; Zhang, Ji

    2003-09-15

    Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference in the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline in the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
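    One natural estimator of this kind follows from assuming the untransformed variable is lognormal: given arithmetic mean m and standard deviation s, the SD of ln X is σ = sqrt(ln(1 + s²/m²)) and the mean of ln X is ln(m) − σ²/2. A sketch of these method-of-moments relations (exact under the lognormality assumption; whether they match the paper's preferred estimator is not stated in the abstract):

```python
import math

def log_sd_from_arithmetic(mean, sd):
    """SD of ln(X), assuming X is lognormal with the given arithmetic moments."""
    return math.sqrt(math.log(1.0 + (sd / mean) ** 2))

def log_mean_from_arithmetic(mean, sd):
    """Mean of ln(X) under the same lognormal assumption."""
    return math.log(mean) - 0.5 * log_sd_from_arithmetic(mean, sd) ** 2

# Round trip: start from mu = 1.0, sigma = 0.5 in the log scale,
# form the arithmetic moments, and recover sigma exactly.
mu, sigma = 1.0, 0.5
m = math.exp(mu + sigma ** 2 / 2)
s = math.sqrt((math.exp(sigma ** 2) - 1) * math.exp(2 * mu + sigma ** 2))
sigma_hat = log_sd_from_arithmetic(m, s)
```

    This is exactly the situation described: a literature search yields only arithmetic means and SDs, yet the power calculation needs the log-scale SD.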

  15. A method for age-matched OCT angiography deviation mapping in the assessment of disease- related changes to the radial peripapillary capillaries.

    PubMed

    Pinhas, Alexander; Linderman, Rachel; Mo, Shelley; Krawitz, Brian D; Geyman, Lawrence S; Carroll, Joseph; Rosen, Richard B; Chui, Toco Y

    2018-01-01

    To present a method for age-matched deviation mapping in the assessment of disease-related changes to the radial peripapillary capillaries (RPCs). We reviewed 4.5 × 4.5 mm en face peripapillary OCT-A scans of 133 healthy control eyes (133 subjects, mean 41.5 yrs, range 11-82 yrs) and 4 eyes with distinct retinal pathologies, obtained using spectral-domain optical coherence tomography angiography. Statistical analysis was performed to evaluate the impact of age on RPC perfusion densities. RPC density group mean and standard deviation maps were generated for each decade of life. Deviation maps were created for the diseased eyes based on these maps. Large peripapillary vessel (LPV; noncapillary vessel) perfusion density was also studied for impact of age. Average healthy RPC density was 42.5±1.47%. ANOVA and pairwise Tukey-Kramer tests showed that RPC density in the ≥60yr group was significantly lower compared to RPC density in all younger decades of life (p<0.01). Average healthy LPV density was 21.5±3.07%. Linear regression models indicated that LPV density decreased with age; however, ANOVA and pairwise Tukey-Kramer tests did not reach statistical significance. Deviation mapping enabled us to quantitatively and visually elucidate the significance of RPC density changes in disease. It is important to consider changes that occur with aging when analyzing RPC and LPV density changes in disease. RPC density, coupled with age-matched deviation mapping techniques, represents a potentially clinically useful method in detecting changes to peripapillary perfusion in disease.
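    An age-matched deviation map of this kind is essentially a per-pixel z-score of the patient's map against the normative mean and SD maps for the matching decade of life. A minimal sketch, assuming aligned density maps of equal size (the values are hypothetical):

```python
def deviation_map(patient, norm_mean, norm_sd):
    """Per-pixel z-scores of patient density vs. age-matched normative maps."""
    return [[(p - m) / s for p, m, s in zip(pr, mr, sr)]
            for pr, mr, sr in zip(patient, norm_mean, norm_sd)]

# Hypothetical 2x2 RPC perfusion-density maps (percent)
norm_mean = [[42.0, 43.0], [41.0, 44.0]]
norm_sd = [[1.5, 1.5], [1.5, 1.5]]
patient = [[42.0, 38.5], [41.0, 47.0]]
z = deviation_map(patient, norm_mean, norm_sd)
```

    Pixels several SDs below the age-matched mean then flag capillary dropout, which is what the deviation maps visualize in the diseased eyes.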

  16. Implementation of a dose gradient method into optimization of dose distribution in prostate cancer 3D-CRT plans

    PubMed Central

    Giżyńska, Marta K.; Kukołowicz, Paweł F.; Kordowski, Paweł

    2014-01-01

    Aim The aim of this work is to present a method of beam weight and wedge angle optimization for patients with prostate cancer. Background 3D-CRT is usually realized with forward planning based on a trial and error method. Several authors have published methods of beam weight optimization applicable to 3D-CRT; still, none of these methods is in common use. Materials and methods Optimization is based on the assumption that the best plan is achieved if the dose gradient at the ICRU point is equal to zero. Our optimization algorithm requires beam quality index, depth of maximum dose, profiles of wedged fields and maximum dose to femoral heads. The method was tested for 10 patients with prostate cancer, treated with the 3-field technique. Optimized plans were compared with plans prepared by 12 experienced planners. Dose standard deviation in target volume, and minimum and maximum doses were analyzed. Results The quality of plans obtained with the proposed optimization algorithm was comparable to that of plans prepared by experienced planners. The mean difference in target dose standard deviation was 0.1% in favor of the plans prepared by planners when only beam weights and wedge angles were optimized. Introducing a correction factor for patient body outline into the dose gradient at the ICRU point improved dose distribution homogeneity: on average, a 0.1% lower standard deviation was achieved with the optimization algorithm. No significant difference in mean dose–volume histogram for the rectum was observed. Conclusions Optimization greatly shortens planning time. The average planning time was 5 min for forward planning and less than a minute for computer optimization. PMID:25337411

  17. Evaluating the sensitivity of EQ-5D in a sample of patients with type 2 diabetes mellitus in two tertiary health care facilities in Nigeria.

    PubMed

    Ekwunife, Obinna Ikechukwu; Ezenduka, Charles C; Uzoma, Bede Emeka

    2016-01-12

    The EQ-5D instrument is arguably the most well-known and commonly used generic measure of health status internationally. Although the instrument has been employed in outcomes studies of diabetes mellitus in many countries, it has not yet been used in Nigeria. This study was carried out to assess the sensitivity of the EQ-5D instrument in a sample of Nigerian patients with type 2 diabetes mellitus (T2DM). A cross-sectional study was conducted using the EQ-5D instrument to assess the self-reported quality of life of patients with T2DM attending two tertiary healthcare facilities in south-eastern Nigeria. Consenting patients completed the questionnaire while waiting to see a doctor. A priori hypotheses were examined using multiple regression analysis to model the relationship between the dependent variables (EQ VAS and EQ-5D Index) and hypothesized independent variables. A total of 226 patients with T2DM participated in the study. The average age of participants was 57 years (standard deviation 10 years) and 61.1% were male. The EQ VAS score and EQ-5D index averaged 66.19 (standard deviation 15.42) and 0.78 (standard deviation 0.21), respectively. Number of diabetic complications, number of co-morbidities, patient's age and being educated predicted EQ VAS score by -6.76, -6.15, -0.22, and 4.51 respectively. Also, number of diabetic complications, number of co-morbidities, patient's age and being educated predicted EQ-5D index by -0.12, -0.07, -0.003, and 0.06 respectively. Our findings indicate that the EQ-5D could adequately capture the burden of type 2 diabetes and related complications among Nigerian patients.
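    Coefficients like those reported come from ordinary least squares. A minimal sketch using numpy's lstsq on synthetic data generated from known coefficients (the coefficient values, sample size, and noise level below are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors: complications, co-morbidities, age, educated (0/1)
complications = rng.integers(0, 4, n)
comorbid = rng.integers(0, 3, n)
age = rng.normal(57, 10, n)
educated = rng.integers(0, 2, n)

# Generate an EQ VAS-like outcome from illustrative coefficients plus noise
true_beta = np.array([80.0, -6.8, -6.2, -0.22, 4.5])
X = np.column_stack([np.ones(n), complications, comorbid, age, educated])
vas = X @ true_beta + rng.normal(0, 2, n)

# OLS fit: beta_hat should be close to true_beta
beta_hat, *_ = np.linalg.lstsq(X, vas, rcond=None)
```

    The same design matrix with the EQ-5D index as the outcome yields the second set of coefficients in the abstract.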

  18. The neuromuscular fatigue induced by repeated scrums generates instability that can be limited by appropriate recovery.

    PubMed

    Morel, B; Hautier, C A

    2017-02-01

    The aim of this study was to evaluate the influence of fatigue on machine scrum pushing sagittal forces during repeated scrums and to determine the origin of the knee extensor fatigue. Twelve elite U23 rugby union front row players performed six 6-s scrums every 30 s against a dynamic scrum machine with passive or active recovery. The peak, average, and standard deviation of the force were measured. A neuromuscular testing procedure of the knee extensors was carried out before and immediately after the repeated scrum protocol, including maximal voluntary force, evoked force, and voluntary activation. The average and peak forces did not decrease after six scrums with passive recovery. The standard deviation of the force increased by 70.2 ± 42.7% (P < 0.001). Maximal voluntary/evoked force and voluntary activation decreased (respectively 25.1 ± 7.0%, 14.6 ± 5.5%, and 24 ± 9.9%; P < 0.001). The standard deviation of the force did not increase with active recovery, which was associated with a smaller decrease in maximal voluntary/evoked force and voluntary activation (respectively 12.8 ± 7.9%, 4.9 ± 6.5%, and 7.6 ± 4.1%; all P < 0.01). In conclusion, repeated scrummaging induced increased machine scrum pushing instability associated with central and peripheral fatigue of the knee extensors. Active recovery seems to limit all these manifestations of fatigue. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. Liquid chromatographic determination of histamine in fish, sauerkraut, and wine: interlaboratory study.

    PubMed

    Beljaars, P R; Van Dijk, R; Jonker, K M; Schout, L J

    1998-01-01

    An interlaboratory study of the liquid chromatographic (LC) determination of histamine in fish, sauerkraut, and wine was conducted. Comminuted and homogenized samples were suspended in water followed by clarification of extracts with perchloric acid, filtration, and dilution with water. After LC separation on a reversed-phase C18 column with phosphate buffer (pH 3.0)-acetonitrile (875 + 125, v/v) as mobile phase, histamine was measured fluorometrically (excitation, 340 nm; emission, 455 nm) in samples and standards after postcolumn derivatization with o-phthaldialdehyde (OPA). Fourteen samples (including 6 blind duplicates and 1 split level) containing histamine at about 10-400 mg/kg or mg/L were analyzed singly according to the proposed procedure by 11 laboratories. Results from one participant were excluded from statistical analysis. For all samples analyzed, repeatability relative standard deviations varied from 2.1 to 5.6%, and reproducibility relative standard deviations ranged from 2.2 to 7.1%. Average recoveries of histamine for this concentration range varied from 94 to 100%.

  20. A meta-analysis of instructional systems applied in science teaching

    NASA Astrophysics Data System (ADS)

    Willett, John B.; Yamashita, June J. M.; Anderson, Ronald D.

    This article is a report of a meta-analysis on the question: What are the effects of different instructional systems used in science teaching? The studies utilized in this meta-analysis were identified by a process that included a systematic screening of all dissertations completed in the field of science education since 1950, an ERIC search of the literature, a systematic screening of selected research journals, and the standard procedure of identifying potentially relevant studies through examination of the bibliographies of the studies reviewed. In all, the 130 studies coded gave rise to 341 effect sizes. The mean effect size produced over all systems was 0.10 with a standard deviation of 0.41, indicating that, on the average, an innovative teaching system in this sample produced one-tenth of a standard deviation better performance than traditional science teaching. Particular kinds of teaching systems, however, produced results that varied from this overall result. Mean effect sizes were also computed by year of publication, form of publication, grade level, and subject matter.
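
    The pooled summary reported above is the sample mean and standard deviation of a collection of effect sizes. A minimal sketch, with made-up effect sizes (the review itself pooled 341 effect sizes with mean 0.10 and SD 0.41):

```python
import numpy as np

# Illustrative effect sizes (standardized mean differences); values are invented.
effect_sizes = np.array([0.5, -0.2, 0.1, 0.3, 0.0, -0.1, 0.2, 0.0])

mean_es = effect_sizes.mean()          # unweighted mean effect size
sd_es = effect_sizes.std(ddof=1)       # sample standard deviation across studies
print(f"mean effect size = {mean_es:.2f}, SD = {sd_es:.2f}")
```

A mean effect size of 0.10 reads as "one-tenth of a standard deviation better than the comparison condition," exactly as the abstract phrases it.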

  1. Minding Impacting Events in a Model of Stochastic Variance

    PubMed Central

    Duarte Queirós, Sílvio M.; Curado, Evaldo M. F.; Nobre, Fernando D.

    2011-01-01

    We introduce a generalization of the well-known ARCH process, widely used for generating uncorrelated stochastic time series with long-term non-Gaussian distributions and long-lasting correlations in the (instantaneous) standard deviation exhibiting a clustering profile. Specifically, inspired by the fact that in a variety of systems impacting events are hardly forgotten, we split the process into two different regimes: a first one for regular periods, where the average volatility of the fluctuations within a certain period of time is below a certain threshold, and another one when the local standard deviation exceeds that threshold. In the former situation we use standard rules for heteroscedastic processes, whereas in the latter case the system starts recalling past values that surpassed the threshold. Our results show that for appropriate parameter values the model is able to provide fat-tailed probability density functions and strong persistence of the instantaneous variance characterized by large values of the Hurst exponent, which are ubiquitous features in complex systems. PMID:21483864
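
    A rough sketch of such a two-regime volatility process, not the authors' exact model: ordinary ARCH(1)-style updating while recent volatility stays below a threshold, and a "memory" regime, recalling past large variances, once the local standard deviation exceeds it. All parameter values and the specific recall rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
a0, a1 = 0.1, 0.6          # baseline ARCH(1) coefficients (assumed)
threshold = 1.0            # volatility threshold separating the two regimes
n = 5000

x = np.zeros(n)
sigma2 = np.full(n, a0)
exceed = []                # past variances that surpassed the threshold
for t in range(1, n):
    if np.sqrt(sigma2[t - 1]) <= threshold:
        # regular regime: standard heteroscedastic update
        sigma2[t] = a0 + a1 * x[t - 1] ** 2
    else:
        # impacting-event regime: recall an average of past large variances
        exceed.append(sigma2[t - 1])
        sigma2[t] = a0 + a1 * np.mean(exceed)
    x[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# Positive excess kurtosis indicates the fat tails mentioned in the abstract.
kurtosis = np.mean(x**4) / np.mean(x**2) ** 2 - 3
print(f"excess kurtosis = {kurtosis:.2f}")
```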

  2. The influence of outliers on results of wet deposition measurements as a function of measurement strategy

    NASA Astrophysics Data System (ADS)

    Slanina, J.; Möls, J. J.; Baard, J. H.

    The results of a wet deposition monitoring experiment, carried out by eight identical wet-only precipitation samplers operating on the basis of 24 h samples, have been used to investigate the accuracy and uncertainties in wet deposition measurements. The experiment was conducted near Lelystad, The Netherlands over the period 1 March 1983-31 December 1985. By rearranging the data for one to eight samplers and sampling periods of 1 day to 1 month, both systematic and random errors were investigated as a function of measuring strategy. A Gaussian distribution of the results was observed. Outliers, detected by a Dixon test (α = 0.05), strongly influenced both the yearly averaged results and the standard deviation of this average as a function of the number of samplers and the length of the sampling period. Using one sampler, the systematic bias typically varies from 2 to 20% for bulk elements and from 10 to 500% for trace elements. Severe problems are encountered in the case of Zn, Cu, Cr, Ni and especially Cd. For the sensitive detection of trends, more than one sampler per measuring station is generally necessary, as the standard deviation in the yearly averaged wet deposition is typically 10-20% relative for one sampler. Using three identical samplers, trends of, e.g., 3% per year will generally be detected within 6 years.

  3. [Variation pattern and its affecting factors of three-dimensional landscape in urban residential community of Shenyang].

    PubMed

    Zhang, Pei-Feng; Hu, Yuan-Man; Xiong, Zai-Ping; Liu, Miao

    2011-02-01

    Based on the 1:10000 aerial photo in 1997 and the three QuickBird images in 2002, 2005, and 2008, and by using Barista software and GIS and RS techniques, the three-dimensional information of the residential community in Tiexi District of Shenyang was extracted, and the variation pattern of the three-dimensional landscape in the district during its reconstruction in 1997-2008 and related affecting factors were analyzed using the following indices: road density, greening rate, average building height, building height standard deviation, building coverage rate, floor area rate, building shape coefficient, population density, and per capita GDP. The results showed that in 1997-2008, the building area for industry decreased, that for commerce and other public affairs increased, and the area for residences, education, and medical care basically remained stable. The building number, building coverage rate, and building shape coefficient decreased, while the floor area rate, average building height, height standard deviation, road density, and greening rate increased. Within the limited space of the residential community, the containing capacity of population and economic activity increased, and the environmental quality also improved to some extent. The variation degree of average building height increased, but the building energy consumption decreased. Population growth and economic development had positive correlations with floor area rate, road density, and greening rate, but a negative correlation with building coverage rate.

  4. 7 CFR 400.204 - Notification of deviation from standards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    7 Agriculture 6 2010-01-01 false Notification of deviation from standards. 400.204... Contract-Standards for Approval § 400.204 Notification of deviation from standards. A Contractor shall advise the Corporation immediately if the Contractor deviates from the requirements of these standards...

  5. Toddle temporal-spatial deviation index: Assessment of pediatric gait.

    PubMed

    Cahill-Rowley, Katelyn; Rose, Jessica

    2016-09-01

    This research aims to develop a gait index, for use in the pediatric clinic as well as in research, that quantifies gait deviation in 18-22-month-old children: the Toddle Temporal-spatial Deviation Index (Toddle TDI). 81 preterm children (≤32 weeks) with very-low-birth-weights (≤1500g) and 42 full-term TD children aged 18-22 months, adjusted for prematurity, walked on a pressure-sensitive mat. Preterm children were administered the Bayley Scales of Infant Development-3rd Edition (BSID-III). Principal component analysis of TD children's temporal-spatial gait parameters quantified raw gait deviation from typical, normalized to an average (standard deviation) Toddle TDI score of 100 (10), and calculated for all participants. The Toddle TDI was significantly lower for preterm versus TD children (86 vs. 100, p=0.003), and lower in preterm children with <85 vs. ≥85 BSID-III motor composite scores (66 vs. 89, p=0.004). The Toddle TDI, which by design plateaus at typical average (BSID-III gross motor 8-12), correlated with BSID-III gross motor (r=0.60, p<0.001) and not fine motor (r=0.08, p=0.65) in preterm children with gross motor scores ≤8, suggesting sensitivity to gross motor development. The Toddle TDI demonstrated sensitivity and specificity to gross motor function in very-low-birth-weight preterm children aged 18-22 months, and has potential as an easily-administered, revealing clinical gait metric. Copyright © 2016 Elsevier B.V. All rights reserved.
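
    The index construction can be sketched as follows, in a deliberately simplified form: z-score each gait parameter against the typically-developing (TD) group, measure each child's deviation as a distance from the TD centroid (standing in for the principal-component step), and rescale so the TD group averages 100 with SD 10, with larger deviations scoring lower. All data and dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
td = rng.normal(0.0, 1.0, (42, 5))        # hypothetical TD gait parameters
preterm = rng.normal(0.8, 1.3, (81, 5))   # hypothetical preterm parameters

mu, sd = td.mean(axis=0), td.std(axis=0, ddof=1)

def raw_deviation(group):
    z = (group - mu) / sd                 # normalize to the TD distribution
    return np.linalg.norm(z, axis=1)      # distance from the TD centroid

td_dev = raw_deviation(td)

def tdi(group):
    # Higher deviation -> lower score; TD group scores mean 100, SD 10 by design.
    return 100 - 10 * (raw_deviation(group) - td_dev.mean()) / td_dev.std(ddof=1)

print(f"TD mean = {tdi(td).mean():.0f}, preterm mean = {tdi(preterm).mean():.0f}")
```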

  6. Home Runs and Humbugs: Comment on Bond and DePaulo (2008)

    ERIC Educational Resources Information Center

    O'Sullivan, Maureen

    2008-01-01

    In 2006, C. F. Bond Jr. and B. M. DePaulo provided a meta-analysis of means and concluded that average lie detection accuracy was significantly greater than chance for most people. Now, they have presented an analysis of standard deviations (C. F. Bond Jr. & B. M. DePaulo, 2008), claiming that there are no reliable individual differences in lie…

  7. Optimal Asset Distribution for Environmental Assessment and Forecasting Based on Observations, Adaptive Sampling, and Numerical Prediction

    DTIC Science & Technology

    2013-03-18

    Soliton Ocean Services Inc. to Steve Ramp to complete the work on the grant. Computations in support of Steve Ramp’s work were carried out by Fred...dominant term, even when averaged over the dark hours, which accounts for the large standard deviation. The net long-wave radiation was small and

  8. Middle Atmosphere Program. Handbook for MAP, Volume 5

    NASA Technical Reports Server (NTRS)

    Sechrist, C. F., Jr. (Editor)

    1982-01-01

    The variability of the stratosphere during the winter in the Northern Hemisphere is considered. Long term monthly mean 30-mbar maps are presented that include geopotential heights, temperatures, and standard deviations of 15 year averages. Latitudinal profiles of mean zonal winds and temperatures are given along with meridional time sections of derived quantities for the winters 1965/66 to 1980/81.

  9. Changing Distributions: How Online College Classes Alter Student and Professor Performance. CEPA Working Paper No. 15-10

    ERIC Educational Resources Information Center

    Bettinger, Eric; Fox, Lindsay; Loeb, Susanna; Taylor, Eric

    2015-01-01

    Online college courses are a rapidly expanding feature of higher education, yet little research identifies their effects. Using an instrumental variables approach and data from DeVry University, this study finds that, on average, online course-taking reduces student learning by one-third to one-quarter of a standard deviation compared to…

  10. Differences between Children with Dyslexia Who Are and Are Not Gifted in Verbal Reasoning

    ERIC Educational Resources Information Center

    Berninger, Virginia W.; Abbott, Robert D.

    2013-01-01

    New findings are presented for children in Grades 1 to 9 who qualified their families for a multigenerational family genetics study of dyslexia (impaired word decoding/spelling) who had either superior verbal reasoning ("n" = 33 at or above 1 2/3 standard deviation, superior or better range; 19% of these children) or average verbal…

  11. Gauging Skills of Hospital Security Personnel: a Statistically-driven, Questionnaire-based Approach.

    PubMed

    Rinkoo, Arvind Vashishta; Mishra, Shubhra; Rahesuddin; Nabi, Tauqeer; Chandra, Vidha; Chandra, Hem

    2013-01-01

    This study aims to gauge the technical and soft skills of the hospital security personnel so as to enable prioritization of their training needs. A cross sectional questionnaire based study was conducted in December 2011. Two separate predesigned and pretested questionnaires were used for gauging soft skills and technical skills of the security personnel. Extensive statistical analysis, including Multivariate Analysis (Pillai-Bartlett trace along with Multi-factorial ANOVA) and Post-hoc Tests (Bonferroni Test) was applied. The 143 participants performed better on the soft skills front with an average score of 6.43 and standard deviation of 1.40. The average technical skills score was 5.09 with a standard deviation of 1.44. The study avowed a need for formal hands on training with greater emphasis on technical skills. Multivariate analysis of the available data further helped in identifying 20 security personnel who should be prioritized for soft skills training and a group of 36 security personnel who should receive maximum attention during technical skills training. This statistically driven approach can be used as a prototype by healthcare delivery institutions worldwide, after situation specific customizations, to identify the training needs of any category of healthcare staff.

  12. Gauging Skills of Hospital Security Personnel: a Statistically-driven, Questionnaire-based Approach

    PubMed Central

    Rinkoo, Arvind Vashishta; Mishra, Shubhra; Rahesuddin; Nabi, Tauqeer; Chandra, Vidha; Chandra, Hem

    2013-01-01

    Objectives This study aims to gauge the technical and soft skills of the hospital security personnel so as to enable prioritization of their training needs. Methodology A cross sectional questionnaire based study was conducted in December 2011. Two separate predesigned and pretested questionnaires were used for gauging soft skills and technical skills of the security personnel. Extensive statistical analysis, including Multivariate Analysis (Pillai-Bartlett trace along with Multi-factorial ANOVA) and Post-hoc Tests (Bonferroni Test) was applied. Results The 143 participants performed better on the soft skills front with an average score of 6.43 and standard deviation of 1.40. The average technical skills score was 5.09 with a standard deviation of 1.44. The study avowed a need for formal hands on training with greater emphasis on technical skills. Multivariate analysis of the available data further helped in identifying 20 security personnel who should be prioritized for soft skills training and a group of 36 security personnel who should receive maximum attention during technical skills training. Conclusion This statistically driven approach can be used as a prototype by healthcare delivery institutions worldwide, after situation specific customizations, to identify the training needs of any category of healthcare staff. PMID:23559904

  13. Concurrent processing of vehicle lane keeping and speech comprehension tasks.

    PubMed

    Cao, Shi; Liu, Yili

    2013-10-01

    With the growing prevalence of using in-vehicle devices and mobile devices while driving, a major concern is their impact on driving performance and safety. However, the effects of cognitive load such as conversation on driving performance are still controversial and not well understood. In this study, an experiment was conducted to investigate the concurrent performance of vehicle lane keeping and speech comprehension tasks with improved experimental control of the confounding factors identified in previous studies. The results showed that the standard deviation of lane position (SDLP) was increased when the driving speed was faster (0.30 m at 36 km/h; 0.36 m at 72 km/h). The concurrent comprehension task had no significant effect on SDLP (0.34 m on average) or the standard deviation of steering wheel angle (SDSWA; 5.20° on average). The correct rate of the comprehension task was reduced in the dual-task condition (from 93.4% to 91.3%) compared with the comprehension single-task condition. Mental workload was significantly higher in the dual-task condition compared with the single-task conditions. Implications for driving safety were discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.
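
    The SDLP measure used above is simply the standard deviation of the lateral lane position signal over a drive. A minimal sketch with synthetic data (the study's values averaged 0.30-0.36 m):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical lateral lane position in metres, e.g. sampled at 10 Hz for 60 s.
lane_position = rng.normal(loc=0.0, scale=0.32, size=600)

sdlp = lane_position.std(ddof=1)   # standard deviation of lane position
print(f"SDLP = {sdlp:.2f} m")
```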

  14. Generation of random microstructures and prediction of sound velocity and absorption for open foams with spherical pores.

    PubMed

    Zieliński, Tomasz G

    2015-04-01

    This paper proposes and discusses an approach for the design and quality inspection of the morphology dedicated to sound absorbing foams, using a relatively simple technique for a random generation of periodic microstructures representative for open-cell foams with spherical pores. The design is controlled by a few parameters, namely, the total open porosity and the average pore size, as well as the standard deviation of pore size. These design parameters are set up exactly and independently; however, setting the standard deviation of pore sizes requires a sufficient number of pores in the representative volume element (RVE); this number is a procedure parameter. Another pore structure parameter which may be indirectly affected is the average size of windows linking the pores; however, it is in fact weakly controlled by the maximal pore-penetration factor, and moreover, it depends on the porosity and pore size. The proposed methodology for testing microstructure-designs of sound absorbing porous media applies multi-scale modeling, where some important transport parameters, responsible for sound propagation in a porous medium, are calculated from the microstructure using the generated RVE, in order to estimate the sound velocity and absorption of such a designed material.

  15. The Standard Deviation of Launch Vehicle Environments

    NASA Technical Reports Server (NTRS)

    Yunis, Isam

    2005-01-01

    Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3dB is a conservative and reasonable standard deviation for the source environment and the payload environment.
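
    A typical standard deviation feeds directly into statistical extrema. As a sketch: if acoustic levels are treated as normally distributed in dB, a P95/50 limit (95% probability, 50% confidence) is the mean plus 1.645 sigma. The mean level below is a made-up number for illustration; the 3 dB sigma is the value the paper suggests.

```python
# Hypothetical zone acoustic level and the paper's suggested sigma.
mean_level_db = 130.0   # assumed mean environment level, dB
sigma_db = 3.0          # typical standard deviation per the paper

z_95 = 1.645            # one-sided 95th-percentile standard normal quantile
p95_level_db = mean_level_db + z_95 * sigma_db
print(f"P95/50 level = {p95_level_db:.2f} dB")
```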

  16. Predicting Energy Consumption for Potential Effective Use in Hybrid Vehicle Powertrain Management Using Driver Prediction

    NASA Astrophysics Data System (ADS)

    Magnuson, Brian

    A proof-of-concept software-in-the-loop study is performed to assess the accuracy of predicted net and charge-gaining energy consumption for potential effective use in optimizing powertrain management of hybrid vehicles. With promising results of improving fuel efficiency of a thermostatic control strategy for a series, plug-in, hybrid-electric vehicle by 8.24%, the route and speed prediction machine learning algorithms are redesigned and implemented for real-world testing in a stand-alone C++ code-base to ingest map data, learn and predict driver habits, and store driver data for fast startup and shutdown of the controller or computer used to execute the compiled algorithm. Speed prediction is performed using a multi-layer, multi-input, multi-output neural network using feed-forward prediction and gradient descent through back-propagation training. Route prediction utilizes a Hidden Markov Model with a recurrent forward algorithm for prediction and multi-dimensional hash maps to store state and state distribution constraining associations between atomic road segments and end destinations. Predicted energy is calculated using the predicted time-series speed and elevation profile over the predicted route and the road-load equation. Testing of the code-base is performed over a known road network spanning 24x35 blocks on the south hill of Spokane, Washington. A large set of training routes are traversed once to add randomness to the route prediction algorithm, and a subset of the training routes, testing routes, are traversed to assess the accuracy of the net and charge-gaining predicted energy consumption. Each test route is traveled a random number of times with varying speed conditions from traffic and pedestrians to add randomness to speed prediction. Prediction data is stored and analyzed in a post-process Matlab script. The aggregated results and analysis of all traversals of all test routes reflect the performance of the Driver Prediction algorithm.
The error of average energy gained through charge-gaining events is 31.3% and the error of average net energy consumed is 27.3%. The average delta and average standard deviation of the delta of predicted energy gained through charge-gaining events is 0.639 and 0.601 Wh respectively for individual time-series calculations. Similarly, the average delta and average standard deviation of the delta of the predicted net energy consumed is 0.567 and 0.580 Wh respectively for individual time-series calculations. The average delta and standard deviation of the delta of the predicted speed is 1.60 and 1.15 respectively also for the individual time-series measurements. The percentage of accuracy of route prediction is 91%. Overall, test routes are traversed 151 times for a total test distance of 276.4 km.

  17. Dosimetric verification of lung cancer treatment using the CBCTs estimated from limited-angle on-board projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, You; Yin, Fang-Fang; Ren, Lei, E-mail: lei.ren@duke.edu

    2015-08-15

    Purpose: Lung cancer treatment is susceptible to treatment errors caused by interfractional anatomical and respirational variations of the patient. On-board treatment dose verification is especially critical for lung stereotactic body radiation therapy due to its high fractional dose. This study investigates the feasibility of using cone-beam (CB)CT images estimated by a motion modeling and free-form deformation (MM-FD) technique for on-board dose verification. Methods: Both digital and physical phantom studies were performed. Various interfractional variations featuring patient motion pattern change, tumor size change, and tumor average position change were simulated from planning CT to on-board images. The doses calculated on the planning CT (planned doses), the on-board CBCT estimated by MM-FD (MM-FD doses), and the on-board CBCT reconstructed by the conventional Feldkamp-Davis-Kress (FDK) algorithm (FDK doses) were compared to the on-board dose calculated on the “gold-standard” on-board images (gold-standard doses). The absolute deviations of minimum dose (ΔD_min), maximum dose (ΔD_max), and mean dose (ΔD_mean), and the absolute deviations of prescription dose coverage (ΔV_100%) were evaluated for the planning target volume (PTV). In addition, 4D on-board treatment dose accumulations were performed using 4D-CBCT images estimated by MM-FD in the physical phantom study. The accumulated doses were compared to those measured using optically stimulated luminescence (OSL) detectors and radiochromic films. Results: Compared with the planned doses and the FDK doses, the MM-FD doses matched much better with the gold-standard doses. 
For the digital phantom study, the average (± standard deviation) ΔD_min, ΔD_max, ΔD_mean, and ΔV_100% (values normalized by the prescription dose or the total PTV) between the planned and the gold-standard PTV doses were 32.9% (±28.6%), 3.0% (±2.9%), 3.8% (±4.0%), and 15.4% (±12.4%), respectively. The corresponding values of FDK PTV doses were 1.6% (±1.9%), 1.2% (±0.6%), 2.2% (±0.8%), and 17.4% (±15.3%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.3% (±0.2%), 0.9% (±0.6%), 0.6% (±0.4%), and 1.0% (±0.8%), respectively. Similarly, for the physical phantom study, the average ΔD_min, ΔD_max, ΔD_mean, and ΔV_100% of planned PTV doses were 38.1% (±30.8%), 3.5% (±5.1%), 3.0% (±2.6%), and 8.8% (±8.0%), respectively. The corresponding values of FDK PTV doses were 5.8% (±4.5%), 1.6% (±1.6%), 2.0% (±0.9%), and 9.3% (±10.5%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.4% (±0.8%), 0.8% (±1.0%), 0.5% (±0.4%), and 0.8% (±0.8%), respectively. For the 4D dose accumulation study, the average (± standard deviation) absolute dose deviation (normalized by local doses) between the accumulated doses and the OSL measured doses was 3.3% (±2.7%). The average gamma index (3%/3 mm) between the accumulated doses and the radiochromic film measured doses was 94.5% (±2.5%). Conclusions: MM-FD estimated 4D-CBCT enables accurate on-board dose calculation and accumulation for lung radiation therapy. It can potentially be valuable for treatment quality assessment and adaptive radiation therapy.

  18. On the Relation Between Sunspot Area and Sunspot Number

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2006-01-01

    Often, the relation between monthly or yearly averages of total sunspot area, A, and sunspot number, R, has been described using the formula A = 16.7 R. Such a simple relation, however, is erroneous. The yearly ratio of A/R has varied between 5.3 in 1964 to 19.7 in 1926, having a mean of 13.1 with a standard deviation of 3.5. For 1875-1976 (corresponding to the Royal Greenwich Observatory timeframe), the yearly ratio of A/R has a mean of 14.1 with a standard deviation of 3.2, and it is found to differ significantly from the mean for 1977-2004 (corresponding to the United States Air Force/National Oceanic and Atmospheric Administration Solar Optical Observing Network timeframe), which equals 9.8 with a standard deviation of 2.1. Scatterplots of yearly values of A versus R are highly correlated for both timeframes and they suggest that a value of R = 100 implies A=1,538 +/- 174 during the first timeframe, but only A=1,076 +/- 123 for the second timeframe. Comparison of the yearly ratios adjusted for same day coverage against yearly ratios using Rome Observatory measures for the interval 1958-1998 indicates that sunspot areas during the second timeframe are inherently too low.

  19. Short-term heart rate variability in dogs with sick sinus syndrome or chronic mitral valve disease as compared to healthy controls.

    PubMed

    Bogucki, Sz; Noszczyk-Nowak, A

    2017-03-28

    Heart rate variability is an established risk factor for mortality in both healthy dogs and animals with heart failure. The aim of this study was to compare short-term heart rate variability (ST-HRV) parameters from 60-min electrocardiograms in dogs with sick sinus syndrome (SSS, n=20) or chronic mitral valve disease (CMVD, n=20) and healthy controls (n=50), and to verify the clinical application of ST-HRV analysis. The study groups differed significantly in terms of both time- and frequency-domain ST-HRV parameters. In the case of dogs with SSS and healthy controls, particularly evident differences pertained to HRV parameters linked directly to the variability of R-R intervals. Lower values of standard deviation of all R-R intervals (SDNN), standard deviation of the averaged R-R intervals for all 5-min segments (SDANN), mean of the standard deviations of all R-R intervals for all 5-min segments (SDNNI) and percentage of successive R-R intervals >50 ms (pNN50) corresponded to a decrease in parasympathetic regulation of heart rate in dogs with CMVD. These findings imply that ST-HRV may be useful for the identification of dogs with SSS and for detection of dysautonomia in animals with CMVD.
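
    The time-domain measures named above (SDNN, SDANN, SDNNI, pNN50) can be computed from an R-R interval series as sketched below. The interval data are synthetic; segment boundaries follow the standard 5-min convention.

```python
import numpy as np

rng = np.random.default_rng(4)
rr = rng.normal(600, 40, 3000)           # hypothetical R-R intervals, ms

sdnn = rr.std(ddof=1)                    # SDNN: SD of all R-R intervals

# Split the recording into consecutive 5-min (300 s) segments by cumulative time.
t = np.cumsum(rr) / 1000.0               # beat times, seconds
segments = [rr[(t >= s) & (t < s + 300)] for s in range(0, int(t[-1]), 300)]
segments = [s for s in segments if len(s) > 1]

sdann = np.std([s.mean() for s in segments], ddof=1)   # SD of segment means
sdnni = np.mean([s.std(ddof=1) for s in segments])     # mean of segment SDs

pnn50 = 100 * np.mean(np.abs(np.diff(rr)) > 50)        # % successive diffs > 50 ms
print(f"SDNN={sdnn:.1f} SDANN={sdann:.1f} SDNNI={sdnni:.1f} pNN50={pnn50:.1f}%")
```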

  20. A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output

    PubMed Central

    Stevanovic, Stefan; Pervan, Boris

    2018-01-01

    We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS PLL linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250

  1. Impacts of temperature and its variability on mortality in New England

    NASA Astrophysics Data System (ADS)

    Shi, Liuhua; Kloog, Itai; Zanobetti, Antonella; Liu, Pengfei; Schwartz, Joel D.

    2015-11-01

    Rapid build-up of greenhouse gases is expected to increase Earth’s mean surface temperature, with unclear effects on temperature variability. This makes understanding the direct effects of a changing climate on human health more urgent. However, the effects of prolonged exposures to variable temperatures, which are important for understanding the public health burden, are unclear. Here we demonstrate that long-term survival was significantly associated with both seasonal mean values and standard deviations of temperature among the Medicare population (aged 65+) in New England, and break that down into long-term contrasts between ZIP codes and annual anomalies. A rise in summer mean temperature of 1 °C was associated with a 1.0% higher death rate, whereas an increase in winter mean temperature corresponded to a 0.6% decrease in mortality. Increases in standard deviations of temperature for both summer and winter were harmful. The increased mortality in warmer summers was entirely due to anomalies, whereas it was long-term average differences in the standard deviation of summer temperatures across ZIP codes that drove the increased risk. For future climate scenarios, seasonal mean temperatures may in part account for the public health burden, but the excess public health risk of climate change may also stem from changes of within-season temperature variability.

  2. Forecast of Frost Days Based on Monthly Temperatures

    NASA Astrophysics Data System (ADS)

    Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.

    2009-04-01

    Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations at Community of Madrid (Spain) based on successive application of two models. The first one is a stochastic model, autoregressive integrated moving average (ARIMA), that forecasts monthly minimum absolute temperature (tmin) and monthly average of minimum temperature (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the minimum daily temperature distribution within a month. Three ARIMA models were identified for the time series analyzed, with a seasonal period corresponding to one year. They present the same seasonal behavior (moving average differenced model) and different non-seasonal parts: autoregressive model (Model 1), moving average differenced model (Model 2) and autoregressive and moving average model (Model 3). At the same time, the results point out that minimum daily temperature (tdmin), for the meteorological stations studied, followed a normal distribution each month with a very similar standard deviation through the years. This standard deviation, obtained for each station and each month, could be used as a risk index for cold months. The application of Model 1 to predict minimum monthly temperatures yielded the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate the losses that frost would cause. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
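
    The second model described above can be sketched as follows: if the daily minimum temperature in a month is normally distributed with a forecast mean and a station-specific standard deviation, the expected number of frost days is the number of days times P(tdmin < 0 °C). The input values below are assumptions for illustration, not the paper's station data.

```python
import math

def expected_frost_days(tminav_c, sd_c, days_in_month=30):
    """Expected frost days given monthly mean daily-minimum temp and its SD (°C)."""
    # Standard normal CDF evaluated at the 0 °C threshold, via the error function.
    z = (0.0 - tminav_c) / sd_c
    p_frost = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return days_in_month * p_frost

fd = expected_frost_days(tminav_c=2.0, sd_c=3.0, days_in_month=31)
print(f"expected frost days = {fd:.1f}")
```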

  3. Experimental comparison of icing cloud instruments

    NASA Technical Reports Server (NTRS)

    Olsen, W.; Takeuchi, D. M.; Adams, K.

    1983-01-01

Icing cloud instruments were tested in the spray cloud of the Icing Research Tunnel (IRT) in order to determine their relative accuracy and their limitations over a broad range of conditions. It was found that the averages of the readings from the liquid water content (LWC) instruments tested agreed closely with each other and with the IRT calibration, but all had a data scatter (± one standard deviation) of about ±20 percent. The effect of this ±20 percent uncertainty is probably acceptable in aero-penalty and deicer experiments. Existing laser spectrometers proved to be too inaccurate for LWC measurements. The error due to water runoff was the same for all ice accretion LWC instruments. Any given laser spectrometer proved to be highly repeatable in its indications of volume median drop size (DVM), LWC and drop size distribution. However, there was a significant disagreement between different spectrometers of the same model, even after careful standard calibration and data analysis. The scatter about the mean of the DVM data from five Axial Scattering Spectrometer Probes was ±20 percent (± one standard deviation) and the average was 20 percent higher than the old IRT calibration. The ±20 percent uncertainty in DVM can cause an unacceptable variation in the drag coefficient of an airfoil with ice; however, the variation in a deicer performance test may be acceptable.

  4. Compact hybrid solar simulator with the spectral match beyond class A

    NASA Astrophysics Data System (ADS)

    Baguckis, Artūras; Novičkovas, Algirdas; Mekys, Algirdas; Tamošiūnas, Vincas

    2016-07-01

A compact hybrid solar simulator with a spectral match beyond class A is proposed. Six types of high-power light-emitting diodes (LEDs) and tungsten halogen lamps in total were employed to obtain a spectral match with <25% deviation from the standardized spectrum in twelve spectral ranges between 400 and 1100 nm. All spectral ranges were twice as narrow as required by the IEC 60904-9 Ed.2.0 and ASTM E927-10(2015) standards. Nonuniformity of the irradiance was evaluated, and <2% deviation from the average irradiance (corresponding to class A nonuniformity) can be obtained over an area of >3-cm diameter. A theoretical analysis was performed to evaluate the possible performance of our simulator in the case of GaInP/GaAs/GaInAsP/GaInAs four-junction tandem solar cells and the AM1.5D (ASTM G173-03 standard) spectrum. The lack of ultraviolet radiation in comparison to the standard spectrum leads to a 6.94% reduction of the short-circuit current, which could be remedied with a 137% increase of the output from the blue LEDs. The excess of infrared radiation from the halogen lamps outside the ranges specified by the standards is expected to lead to a ~0.77% voltage increase.

  5. Verbal intelligence in bilinguals when measured in L1 and L2.

    PubMed

    Ardila, Alfredo; Lopez-Recio, Alexandra; Sakowitz, Ariel; Sanchez, Estefania; Sarmiento, Stephanie

    2018-04-04

This study aimed to compare the Verbal IQ of two groups of Spanish/English bilinguals: simultaneous and early sequential bilinguals. Forty-eight Spanish/English bilinguals, born in the U.S. or born in Latin American countries but moving to the United States before the age of 10, were selected. The verbal subtests of the Wechsler Adult Intelligence Scale - Third Edition (WAIS-III), in English and Spanish, were administered. Overall, performance was significantly better in English for both groups of bilinguals. The difference in Verbal IQ when tested in Spanish versus English was about one standard deviation in favor of English for simultaneous bilinguals, and about half a standard deviation for early sequential bilinguals. In both groups, Verbal IQ in English was about 100; considering the level of education of our sample (a bachelor's degree, on average), it can be assumed that Verbal IQ in English was lower than expected, suggesting that bilinguals may be penalized even when evaluated in the dominant language.

  6. Refractive index, molar refraction and comparative refractive index study of propylene carbonate binary liquid mixtures.

    PubMed

    Wankhede, Dnyaneshwar Shamrao

    2012-06-01

    Refractive indices (n) have been experimentally determined for the binary liquid-liquid mixtures of Propylene carbonate (PC) (1) with benzene, ethylbenzene, o-xylene and p-xylene (2) at 298.15, 303.15 and 308.15 K over the entire mole fraction range. The experimental values of n are utilised to calculate deviation in refractive index (Δn), molar refraction (R) and deviation in molar refraction (ΔR). A comparative study of Arago-Biot (A-B), Newton (NW), Eyring and John (E-J) equations for determining refractive index of a liquid has been carried out to test their validity for all the binary mixtures over the entire composition range at 298.15 K. Comparison of various mixing relations is represented in terms of average deviation (AVD). The Δn and ΔR values have been fitted to Redlich-Kister equation at 298.15 K and standard deviations have been calculated. The results are discussed in terms of intermolecular interactions present amongst the components.
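For illustration, two of the mixing relations compared in the study (Arago-Biot and Newton) and the average deviation (AVD) metric can be sketched as follows; the pure-component indices and mixture data below are hypothetical, not the paper's measurements:

```python
# Illustrative sketch of two mixing rules for the refractive index of a
# binary mixture; phi1 is the volume fraction of component 1 (hypothetical data).
def arago_biot(n1, n2, phi1):
    # Arago-Biot: n = n1*phi1 + n2*phi2
    return n1 * phi1 + n2 * (1.0 - phi1)

def newton(n1, n2, phi1):
    # Newton: n^2 - 1 = (n1^2 - 1)*phi1 + (n2^2 - 1)*phi2
    return ((n1**2 - 1.0) * phi1 + (n2**2 - 1.0) * (1.0 - phi1) + 1.0) ** 0.5

def average_deviation(n_exp, n_calc):
    # AVD: mean absolute deviation between measured and predicted n
    return sum(abs(e - c) for e, c in zip(n_exp, n_calc)) / len(n_exp)

# Hypothetical pure-component indices (roughly PC ~1.42, benzene ~1.50).
n_exp = [1.435, 1.455, 1.478]
phis = [0.8, 0.5, 0.2]
n_ab = [arago_biot(1.42, 1.50, p) for p in phis]
avd = average_deviation(n_exp, n_ab)
```

The relation with the smallest AVD over the composition range would be judged the best-fitting mixing rule, which is how the abstract's comparison is framed.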

  7. Rapid diagnosis of pulmonary tuberculosis

    PubMed Central

    Sarmiento, José Mauricio Hernández; Restrepo, Natalia Builes; Mejía, Gloria Isabel; Zapata, Elsa; Restrepo, Mary Alejandra; Robledo, Jaime

    2014-01-01

Introduction The World Health Organization estimated 9.4 million tuberculosis cases in 2009, with 1.7 million deaths as a consequence of treatment and diagnosis failures. Improving diagnostic methods for the rapid and timely detection of tuberculosis patients is critical to controlling the disease. The aim of this study was to evaluate the accuracy of cord factor detection on solid Middlebrook 7H11 thin layer agar compared to Lowenstein-Jensen medium for rapid tuberculosis diagnosis. Methods Patients with suspected tuberculosis were enrolled and their sputum samples were processed for direct smear and culture on Lowenstein-Jensen medium and in BACTEC MGIT 960, from which positive tubes were subcultured on Middlebrook 7H11 thin layer agar. Statistical analysis was performed comparing culture results from Lowenstein-Jensen medium and the thin layer agar, and their corresponding average times for detecting Mycobacterium tuberculosis. The performance of cord factor detection was evaluated by determining its sensitivity, specificity, and positive and negative predictive values. Results 111 out of 260 patients were positive for M. tuberculosis on Lowenstein-Jensen medium, with an average detection time ± standard deviation of 22.3 ± 8.5 days. 115 patients were positive by the MGIT system with cord factor identification on Middlebrook 7H11 thin layer agar, for which the average time ± standard deviation was 5.5 ± 2.6 days. Conclusion Cord factor detection on Middlebrook 7H11 thin layer agar allows early and accurate tuberculosis diagnosis within an average time of 5 days, making this rapid diagnosis particularly important in patients with a negative sputum smear. PMID:25419279

  8. Does standard deviation matter? Using "standard deviation" to quantify security of multistage testing.

    PubMed

    Wang, Chun; Zheng, Yi; Chang, Hua-Hua

    2014-01-01

With the advent of web-based technology, online testing is becoming a mainstream mode in large-scale educational assessments. Most online tests are administered continuously within a testing window, which may pose test security problems because examinees who take the test earlier may share information with those who take it later. Researchers have proposed various statistical indices to assess test security; one of the most often used is the average test-overlap rate, which was further generalized to the item pooling index (Chang & Zhang, 2002, 2003). These indices, however, are all defined as means (that is, the expected proportion of common items among examinees), and they were originally proposed for computerized adaptive testing (CAT). Recently, multistage testing (MST) has become a popular alternative to CAT. The unique features of MST make it important to report not only the mean but also the standard deviation (SD) of the test overlap rate, as we advocate in this paper. The SD of the test overlap rate adds important information to the test security profile because, for the same mean, a large SD reflects that certain groups of examinees share more common items than other groups. In this study, we analytically derived the lower bounds of the SD under MST, with the results under CAT as a benchmark. It is shown that when the mean overlap rate is the same between MST and CAT, the SD of test overlap tends to be larger in MST. A simulation study was conducted to provide empirical evidence. We also compared the security of MST under single-pool versus multiple-pool designs; both analytical and simulation studies show that the non-overlapping multiple-pool design will slightly increase the security risk.
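The two statistics advocated here (mean and SD of the test-overlap rate) can be illustrated with a toy Monte Carlo sketch in which fixed-length forms are drawn at random from a shared pool. This is a simplification for illustration only, not the paper's analytical derivation; real MST forms are assembled from structured modules:

```python
# Toy sketch: empirical mean and SD of the pairwise test-overlap rate,
# i.e. the proportion of items two examinees share, for simulated
# fixed-length forms drawn uniformly from an item pool (hypothetical sizes).
import random
from itertools import combinations
from statistics import mean, stdev

random.seed(0)
POOL, TEST_LEN, N_EXAMINEES = range(200), 40, 30
forms = [set(random.sample(POOL, TEST_LEN)) for _ in range(N_EXAMINEES)]

# Overlap rate for every pair of examinees: shared items / test length.
overlaps = [len(a & b) / TEST_LEN for a, b in combinations(forms, 2)]
mu, sd = mean(overlaps), stdev(overlaps)
```

In this fully random setting the SD stays small; the paper's point is that MST's structured routing concentrates shared items within groups of examinees, which inflates the SD even when the mean overlap is unchanged.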

  9. GNSS Antenna Caused Near-Field Interference Effect in Precise Point Positioning Results

    NASA Astrophysics Data System (ADS)

    Dawidowicz, Karol; Baryła, Radosław

    2017-06-01

Results of long-term static GNSS observation processing prove that the often assumed averaging-out of multipath effects over extended observation periods does not actually occur. Instead, a visible bias falsifies the coordinate estimates. Comparisons between height differences measured with precise geometrical leveling and those provided by GNSS clearly verify the impact of the near-field multipath effect. The aim of this paper is to analyze the near-field interference effect in the coordinate domain. We demonstrate that the way antennas are mounted during an observation campaign (the distance to the nearest antennas) can cause visible changes in pseudo-kinematic precise point positioning results. Comparison of GNSS-measured height differences revealed a bias of up to 3 mm in the Up component when an object (an additional GNSS antenna) was placed in the radiating near-field region of the measuring antenna. Additionally, for both processing scenarios (GPS and GPS/GLONASS), the scatter of the results clearly increased when the additional antenna crossed the radiating near-field region of the measuring antenna. This is especially true for large choke-ring antennas: in short sessions (15 and 30 min) the standard deviation was about twice as large as in the scenario without the additional antenna. With typical surveying antennas (short near-field radius) the effect is almost invisible; in this case only a standard deviation increase of about 20% was observed. On the other hand, we found that surveying antennas are generally less accurate than choke-ring antennas: the standard deviation obtained on a point with this type of antenna was larger in all processing scenarios than that obtained on a point with a choke-ring antenna.

  10. Comparison of polyurethane with cyanoacrylate in hemostasis of vascular injury in guinea pigs.

    PubMed

    Kubrusly, Luiz Fernando; Formighieri, Marina Simões; Lago, José Vitor Martins; Graça, Yorgos Luiz Santos de Salles; Sobral, Ana Cristina Lira; Lago, Marianna Martins

    2015-01-01

To evaluate the behavior of castor oil-derived polyurethane as a hemostatic agent and tissue response after abdominal aortic injury and to compare it with 2-octyl-cyanoacrylate. Twenty-four guinea pigs were randomly divided into three groups of eight animals (I, II, and III). The infrarenal abdominal aorta was dissected, clamped proximally and distally to the vascular puncture site. In group I (control), hemostasis was achieved with digital pressure; in group II (polyurethane) castor oil-derived polyurethane was applied, and in group III (cyanoacrylate), 2-octyl-cyanoacrylate was used. Group II was subdivided into IIA and IIB according to the time of preparation of the hemostatic agent. Mean blood loss in groups IIA, IIB and III was 0.002 grams (g), 0.008 g, and 0.170 g, with standard deviation of 0.005 g, 0.005 g, and 0.424 g, respectively (P=0.069). The drying time for cyanoacrylate averaged 81.5 seconds (s) (standard deviation: 51.5 s) and 126.1 s (standard deviation: 23.0 s) for polyurethane B (P=0.046). However, there was a trend (P=0.069) for cyanoacrylate to dry more slowly than polyurethane A (mean: 40.5 s; SD: 8.6 s). Furthermore, polyurethane A (group IIA mean: 40.5 s; standard deviation: 8.6 s) had a shorter drying time than polyurethane B (P=0.003). In group III, 100% of the animals had mild/severe fibrosis, while in group II only 12.5% showed this degree of fibrosis (P=0.001). Polyurethane derived from castor oil showed similar hemostatic behavior to octyl-2-cyanoacrylate. There was less perivascular tissue response with polyurethane when compared with cyanoacrylate.

  11. Comparison of polyurethane with cyanoacrylate in hemostasis of vascular injury in guinea pigs

    PubMed Central

    Kubrusly, Luiz Fernando; Formighieri, Marina Simões; Lago, José Vitor Martins; Graça, Yorgos Luiz Santos de Salles; Sobral, Ana Cristina Lira; Lago, Marianna Martins

    2015-01-01

Objective To evaluate the behavior of castor oil-derived polyurethane as a hemostatic agent and tissue response after abdominal aortic injury and to compare it with 2-octyl-cyanoacrylate. Methods Twenty-four guinea pigs were randomly divided into three groups of eight animals (I, II, and III). The infrarenal abdominal aorta was dissected, clamped proximally and distally to the vascular puncture site. In group I (control), hemostasis was achieved with digital pressure; in group II (polyurethane) castor oil-derived polyurethane was applied, and in group III (cyanoacrylate), 2-octyl-cyanoacrylate was used. Group II was subdivided into IIA and IIB according to the time of preparation of the hemostatic agent. Results Mean blood loss in groups IIA, IIB and III was 0.002 grams (g), 0.008 g, and 0.170 g, with standard deviation of 0.005 g, 0.005 g, and 0.424 g, respectively (P=0.069). The drying time for cyanoacrylate averaged 81.5 seconds (s) (standard deviation: 51.5 s) and 126.1 s (standard deviation: 23.0 s) for polyurethane B (P=0.046). However, there was a trend (P=0.069) for cyanoacrylate to dry more slowly than polyurethane A (mean: 40.5 s; SD: 8.6 s). Furthermore, polyurethane A (group IIA mean: 40.5 s; standard deviation: 8.6 s) had a shorter drying time than polyurethane B (P=0.003). In group III, 100% of the animals had mild/severe fibrosis, while in group II only 12.5% showed this degree of fibrosis (P=0.001). Conclusion Polyurethane derived from castor oil showed similar hemostatic behavior to octyl-2-cyanoacrylate. There was less perivascular tissue response with polyurethane when compared with cyanoacrylate. PMID:25859876

  12. 7 CFR 400.174 - Notification of deviation from financial standards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 6 2010-01-01 2010-01-01 false Notification of deviation from financial standards... Agreement-Standards for Approval; Regulations for the 1997 and Subsequent Reinsurance Years § 400.174 Notification of deviation from financial standards. An insurer must immediately advise FCIC if it deviates from...

  13. [Comparisons of manual and automatic refractometry with subjective results].

    PubMed

    Wübbolt, I S; von Alven, S; Hülssner, O; Erb, C

    2006-11-01

Refractometry is very important in everyday clinical practice. The aim of this study was to compare the precision of three objective methods of refractometry with subjective dioptometry (phoropter) and to identify the objective method with the smallest deviation from the subjective refractometry results. The objective methods/instruments used were retinoscopy, the Prism Refractometer PR 60 (Rodenstock) and the Auto Refractometer RM-A 7000 (Topcon). The results of monocular dioptometry (sphere, cylinder and axis) for each objective method were compared to the results of the subjective method. The examination was carried out on 178 eyes, which were divided into 3 age-related groups: 6-12 years (103 eyes), 13-18 years (38 eyes) and older than 18 years (37 eyes). All measurements were made under cycloplegia. The smallest standard deviation of the measurement error was found for the Auto Refractometer RM-A 7000. Both the PR 60 and retinoscopy had a clearly higher standard deviation. Furthermore, the RM-A 7000 showed a significant bias in the measurement error in three, and retinoscopy in four, of the nine comparisons. The Auto Refractometer provides measurements with the smallest deviation compared to the subjective method. Here it has to be taken into account that the measurements for the sphere have an average deviation of +0.2 dpt. In comparison to retinoscopy, the examination of children with the RM-A 7000 is difficult. An advantage of the Auto Refractometer is its fast and easy handling, so that measurements can be performed by medical staff.

  14. Visual field changes after cataract extraction: the AGIS experience.

    PubMed

    Koucheki, Behrooz; Nouri-Mahdavi, Kouros; Patel, Gitane; Gaasterland, Douglas; Caprioli, Joseph

    2004-12-01

To test the hypothesis that cataract extraction in glaucomatous eyes improves overall sensitivity of visual function without affecting the size or depth of glaucomatous scotomas. Experimental study with no control group. One hundred fifty-eight eyes (of 140 patients) from the Advanced Glaucoma Intervention Study with at least two reliable visual fields within a year both before and after cataract surgery were included. Average mean deviation (MD), pattern standard deviation (PSD), and corrected pattern standard deviation (CPSD) were compared before and after cataract extraction. To evaluate changes in scotoma size, the number of abnormal points (P < .05) on the pattern deviation plot was compared before and after surgery. We described an index ("scotoma depth index") to investigate changes of scotoma depth after surgery. Mean values for MD, PSD, and CPSD were -13.2, 6.4, and 5.9 dB before and -11.9, 6.8, and 6.2 dB after cataract surgery (P ≤ .001 for all comparisons). Mean (± SD) number of abnormal points on the pattern deviation plot was 26.7 ± 9.4 and 27.5 ± 9.0 before and after cataract surgery, respectively (P = .02). The scotoma depth index did not change after cataract extraction (-19.3 vs -19.2 dB, P = .90). Cataract extraction caused generalized improvement of the visual field, which was most marked in eyes with less advanced glaucomatous damage. Although the enlargement of scotomas was statistically significant, it was not clinically meaningful. No improvement of sensitivity was observed in the deepest part of the scotomas.

  15. Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks

    DTIC Science & Technology

    2016-04-01

Allan deviation. Allan deviation will be represented by σ and standard deviation will be represented by δ. In practice, when the Allan deviation of a ... the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by ... measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard.
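As context for the excerpt above, a minimal non-overlapping Allan deviation estimator at the basic sampling interval can be sketched as follows (a generic textbook estimator, not the report's method):

```python
# Sketch of the non-overlapping Allan deviation for a series of
# fractional-frequency readings y_i, each averaged over a fixed interval tau:
#   sigma_y^2(tau) = (1 / (2*(M-1))) * sum_i (y_{i+1} - y_i)^2
import math

def allan_deviation(y):
    """Allan deviation of M consecutive fractional-frequency averages."""
    m = len(y)
    var = sum((y[i + 1] - y[i]) ** 2 for i in range(m - 1)) / (2.0 * (m - 1))
    return math.sqrt(var)
```

Unlike the ordinary standard deviation, this first-difference form stays finite for the drifting noise types common in oscillators, which is why it is the conventional stability measure for frequency standards.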

  16. A-Weighted Sound Levels in Cockpits of Fixed- and Rotary-Wing Aircraft.

    DTIC Science & Technology

fixed-wing vehicles and from 98 to 106 dB for helicopters. Means and standard deviations are reported by octave-bands, all-pass (flat), A-levels, and ... preferred speech interference levels (PSIL, average of 500, 1000 and 2000 Hz). Also, at-the-ear A-levels are reported for generalized amounts of attenuation provided by headsets commonly worn in aircraft. (Author)

  17. Class in the Classroom: The Relationship between School Resources and Math Performance among Low Socioeconomic Status Students in 19 Rich Countries

    ERIC Educational Resources Information Center

    Baird, Katherine

    2012-01-01

    This paper investigates achievement gaps between low and high socioeconomic students in 19 high-income countries. On average, math scores of students with indicators of high socioeconomic status (SES) are over one standard deviation above those with low SES indicators. The paper estimates the extent to which these achievement gaps can be…

  18. Investigating methods for determining mismatch in near side vehicle impacts - biomed 2009.

    PubMed

    Loftis, Kathryn; Martin, R Shayn; Meredith, J Wayne; Stitzel, Joel

    2009-01-01

    This study investigates vehicle mismatch in severe side-impact motor vehicle collisions. Research conducted by the Insurance Institute for Highway Safety has determined that vehicle mismatch often leads to very severe injuries for occupants in the struck vehicle, because the larger striking vehicle does not engage the lower sill upon impact, resulting in severe intrusions into the occupant compartment. Previous studies have analyzed mismatched collisions according to vehicle type, not by the difference in vehicle height and weight. It is hypothesized that the combination of a heavier striking vehicle at a taller height results in more intrusion for the struck vehicle and severe injury for the near side occupant. By analyzing Crash Injury Research and Engineering Network (CIREN) data and occupant injury severity, it is possible to study intrusion and injuries that occur due to vehicle mismatch. CIREN enrolls seriously injured occupants involved in motor vehicle crashes (MVC) across the United States. From the Toyota-Wake Forest University CIREN center, 23 near side impact cases involving two vehicles were recorded. Only 3 of these seriously injured occupant cases were not considered mismatched according to vehicle curb weight, and only 2 were not considered vehicle mismatched according to height differences. The mismatched CIREN cases had an average difference in vehicle curb weight of 737.0 kg (standard deviation of 646.8) and an average difference in vehicle height of 16.38 cm (standard deviation of 7.186). There were 13 occupants with rib fractures, 12 occupants with pelvic fractures, 9 occupants with pulmonary contusion, and 5 occupants with head injuries, among other multiple injuries. The average Injury Severity Score (ISS) for these occupants was 27, with a standard deviation of 16. The most serious injuries resulted in an Abbreviated Injury Scale (AIS) of 5, which included 3 occupants. 
Each of these AIS 5 injuries was to a different body region in a different occupant. By analyzing the vehicle information and occupant injuries, it was found that the vehicle mismatch problem involves differences in both vehicle weights and heights and results in severe injuries to multiple body regions for the near side occupant. There was a low correlation between vehicle height difference and occupant ISS.

  19. 1 CFR 21.14 - Deviations from standard organization of the Code of Federal Regulations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 1 General Provisions 1 2010-01-01 2010-01-01 false Deviations from standard organization of the... CODIFICATION General Numbering § 21.14 Deviations from standard organization of the Code of Federal Regulations. (a) Any deviation from standard Code of Federal Regulations designations must be approved in advance...

  20. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou

    2013-10-01

    A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
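The relative standard deviation (RSD) figures quoted above can be illustrated on toy data; the sketch below uses simple whole-spectrum-area normalization (the baseline the PLS method is compared against), with hypothetical intensities rather than the paper's measurements:

```python
# Sketch: shot-to-shot relative standard deviation (RSD) of a LIBS line
# intensity, before and after total-area normalization (hypothetical data).
from statistics import mean, stdev

def rsd_percent(values):
    # RSD = 100 * (sample standard deviation) / mean
    return 100.0 * stdev(values) / mean(values)

# Hypothetical line intensities and whole-spectrum areas over five shots.
line = [100.0, 110.0, 90.0, 105.0, 95.0]
area = [1000.0, 1100.0, 900.0, 1050.0, 950.0]

raw_rsd = rsd_percent(line)
norm_rsd = rsd_percent([l / a for l, a in zip(line, area)])
```

In this idealized example the line intensity fluctuates in lockstep with the total area, so area normalization removes the scatter entirely; in real LIBS spectra the correlation is imperfect, and that residual pulse-to-pulse uncertainty is what the plasma-parameter-aware PLS model targets.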

  1. CPAP Adherence is Associated With Attentional Improvements in a Group of Primarily Male Patients With Moderate to Severe OSA.

    PubMed

    Deering, Sean; Liu, Lin; Zamora, Tania; Hamilton, Joanne; Stepnowsky, Carl

    2017-12-15

    Obstructive sleep apnea (OSA) is a widespread condition that adversely affects physical health and cognitive functioning. The prevailing treatment for OSA is continuous positive airway pressure (CPAP), but therapeutic benefits are dependent on consistent use. Our goal was to investigate the relationship between CPAP adherence and measures of sustained attention in patients with OSA. Our hypothesis was that the Psychomotor Vigilance Task (PVT) would be sensitive to attention-related improvements resulting from CPAP use. This study was a secondary analysis of a larger clinical trial. Treatment adherence was determined from CPAP use data. Validated sleep-related questionnaires and a sustained-attention and alertness test (PVT) were administered to participants at baseline and at the 6-month time point. Over a 6-month time period, the average CPAP adherence was 3.32 h/night (standard deviation [SD] = 2.53), average improvement in PVT minor lapses was -4.77 (SD = 13.2), and average improvement in PVT reaction time was -73.1 milliseconds (standard deviation = 211). Multiple linear regression analysis showed that higher CPAP adherence was significantly associated with a greater reduction in minor lapses in attention after 6 months of continuous treatment with CPAP therapy (β = -0.72, standard error = 0.34, P = .037). The results of this study showed that higher levels of CPAP adherence were associated with significant improvements in vigilance. Because the PVT is a performance-based measure that is not influenced by prior learning and is not subjective, it may be an important supplement to patient self-reported assessments. Name: Effect of Self-Management on Improving Sleep Apnea Outcomes, URL: https://clinicaltrials.gov/ct2/show/NCT00310310, Identifier: NCT00310310. © 2017 American Academy of Sleep Medicine

  2. Is reticular temperature a useful indicator of heat stress in dairy cattle?

    PubMed

    Ammer, S; Lambertz, C; Gauly, M

    2016-12-01

The present study investigated whether reticular temperature (RT) in dairy cattle is a useful indicator of heat stress considering the effects of milk yield and water intake (WI). In total, 28 Holstein-Friesian dairy cows raised on 3 farms in Lower Saxony, Germany, were studied from March to December 2013. During the study, RT and barn climate parameters (air temperature, relative humidity) were measured continuously and individual milk yield was recorded daily. Both the daily temperature-humidity index (THI) and the daily median RT per cow were calculated. Additionally, the individual WI (amount and frequency) of 10 cows during 100 d of the study was recorded on 1 farm. Averaged over all farms, daily THI ranged between 35.4 and 78.9 with a mean (±standard deviation) of 60.2 (±8.7). Dairy cows were on average (±standard deviation) 110.9 d in milk (±79.3) with a mean (±standard deviation) milk yield of 35.2 kg/d (±9.1). The RT was affected by THI, milk yield, days in milk, and WI. Up to a THI threshold of 65, RT remained constant at 39.2°C. Above this threshold, RT increased to 39.3°C and further to 39.4°C when THI ≥70. The correlation between THI ≥70 and RT was 0.22, whereas the coefficient ranged from r=-0.08 to +0.06 when THI <70. With increasing milk yield, RT decreased slightly from 39.3°C (<30 kg/d) to 39.2°C (≥40 kg/d). For daily milk yields of ≥40 kg, the median RT and daily milk yield were correlated at r=-0.18. The RT was greater when dairy cows yielded ≥30 kg/d and THI ≥70 (39.5°C) compared with milk yields <30 kg and THI <70 (39.3°C). The WI, which averaged (±standard deviation) 11.5 L (±5.7) per drinking bout, caused a mean decrease in RT of 3.2°C and was affected by the amount of WI (r=0.60). After WI, it took up to 2 h until RT reached the initial level before drinking. In conclusion, RT increased when the THI threshold of 65 was exceeded. A further increase was noted when THI ≥70.
Nevertheless, the effects of WI and milk yield have to be considered carefully when RT is used to detect hyperthermia in dairy cattle. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
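The abstract does not state which THI variant was used; one widely used approximation for dairy cattle (assumed here purely for illustration) combines dry-bulb temperature T (°C) and relative humidity RH (%) as THI = 0.8*T + (RH/100)*(T - 14.4) + 46.4:

```python
# Sketch of a common temperature-humidity index (THI) approximation for
# dairy cattle. This is an assumed variant for illustration; the study
# itself does not specify which THI formula it applied.
def thi(temp_c: float, rel_humidity_pct: float) -> float:
    return 0.8 * temp_c + (rel_humidity_pct / 100.0) * (temp_c - 14.4) + 46.4

# E.g. a barn at 25 C and 60% relative humidity.
value = thi(25.0, 60.0)
```

Under this variant, the study's thresholds of THI 65 and 70 correspond to roughly 19-23 °C at 60% relative humidity, which matches the onset of mild heat stress reported for high-yielding cows.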

  3. Nuclear isospin effect on α-decay half-lives

    NASA Astrophysics Data System (ADS)

    Akrawy, Dashty T.; Hassanabadi, H.; Hosseini, S. S.; Santhosh, K. P.

    2018-07-01

The α-decay half-lives of 356 even-even, even-odd, odd-even and odd-odd nuclei in the range 52 ≤ Zp ≤ 118 have been studied within the analytical formula of Royer and within a modified analytical formula of Royer. We obtained new coefficients for the Royer formula by fitting the 356 isotopes. We also considered the Denisov and Khudenko formula and obtained new coefficients for a modified Denisov and Khudenko formula. We calculated the standard deviation and the average deviation, and the analytical results are compared with the experimental data. The results agree better with the experimental data when the isospin effect of the parent nuclei is considered.

  4. Application of the thermorheologically complex nonlinear Adam-Gibbs model for the glass transition to molecular motion in hydrated proteins.

    PubMed

    Hodge, Ian M

    2006-08-01

The nonlinear thermorheologically complex Adam-Gibbs (extended "Scherer-Hodge") model for the glass transition is applied to enthalpy relaxation data reported by Sartor, Mayer, and Johari for hydrated methemoglobin. A sensible range of values for the average localized activation energy is obtained (100-200 kJ mol⁻¹). The standard deviation of the inferred Gaussian distribution of activation energies, computed from the reported KWW β-parameter, is approximately 30% of the average, consistent with the suggestion that some relaxation processes in hydrated proteins have exceptionally low activation energies.

  5. Differences between Non-arteritic Anterior Ischemic Optic Neuropathy and Open Angle Glaucoma with Altitudinal Visual Field Defect.

    PubMed

    Han, Sangyoun; Jung, Jong Jin; Kim, Ungsoo Samuel

    2015-12-01

To investigate the differences in retinal nerve fiber layer (RNFL) change and optic nerve head parameters between non-arteritic anterior ischemic optic neuropathy (NAION) and open angle glaucoma (OAG) with altitudinal visual field defect. Seventeen NAION patients and 26 OAG patients were enrolled prospectively. The standard visual field indices (mean deviation, pattern standard deviation) were obtained from the Humphrey visual field test and differences between the two groups were analyzed. Cirrus HD-OCT parameters were used, including optic disc head analysis, average RNFL thickness, and RNFL thickness of each quadrant. The mean deviation and pattern standard deviation were not significantly different between the groups. In the affected eye, although the disc area was similar between the two groups (2.00 ± 0.32 and 1.99 ± 0.33 mm², p = 0.586), the rim area of the OAG group was smaller than that of the NAION group (1.26 ± 0.56 and 0.61 ± 0.15 mm², respectively, p < 0.001). RNFL asymmetry was not different between the two groups (p = 0.265), but the inferior RNFL thickness of both the affected and unaffected eyes was less in the OAG group than in the NAION group. In the analysis of optic disc morphology, both affected and unaffected eyes showed significant differences between the two groups. To differentiate NAION from OAG in eyes with altitudinal visual field defects, optic disc head analysis of not only the affected eye but also the unaffected eye using spectral domain optical coherence tomography may be helpful.

  6. Upgraded FAA Airfield Capacity Model. Volume 1. Supplemental User’s Guide

    DTIC Science & Technology

    1981-02-01

    SIGMAR (F4.0) cc 1-4 - standard deviation, in seconds, of arrival runway occupancy time (R.O.T.). SIGMAA (F4.0) cc 5-8 - standard deviation, in seconds... SIGMAC - the standard deviation of the time from departure clearance to start of roll. SIGMAR - the standard deviation of the arrival runway

  7. CT, MR, and ultrasound image artifacts from prostate brachytherapy seed implants: The impact of seed size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Andrew K. H.; Basran, Parminder S.; Thomas, Steven D.

    Purpose: To investigate the effects of brachytherapy seed size on the quality of x-ray computed tomography (CT), ultrasound (US), and magnetic resonance (MR) images and seed localization through comparison of the 6711 and 9011 {sup 125}I sources. Methods: For CT images, an acrylic phantom mimicking a clinical implantation plan and embedded with low contrast regions of interest (ROIs) was designed for both the 0.774 mm diameter 6711 (standard) and the 0.508 mm diameter 9011 (thin) seed models (Oncura, Inc., and GE Healthcare, Arlington Heights, IL). Image quality metrics were assessed using the standard deviation of ROIs between the seeds and the contrast to noise ratio (CNR) within the low contrast ROIs. For US images, water phantoms with both single and multiseed arrangements were constructed for both seed sizes. For MR images, both seeds were implanted into a porcine gel and imaged with pelvic imaging protocols. The standard deviation of ROIs and CNR values were used as metrics of artifact quantification. Seed localization within the CT images was assessed using the automated seed finder in a commercial brachytherapy treatment planning system. The number of erroneous seed placements and the average and maximum error in seed placements were recorded as metrics of the localization accuracy. Results: With the thin seeds, CT image noise was reduced from 48.5 {+-} 0.2 to 32.0 {+-} 0.2 HU and CNR improved by a median value of 74% when compared with the standard seeds. Ultrasound image noise was measured at 50.3 {+-} 17.1 dB for the thin seed images and 50.0 {+-} 19.8 dB for the standard seed images, and artifacts directly behind the seeds were smaller and less prominent with the thin seed model. For MR images, CNR of the standard seeds reduced on average 17% when using the thin seeds for all different imaging sequences and seed orientations, but these differences are not appreciable.
Automated seed localization required an average ({+-}SD) of 7.0 {+-} 3.5 manual corrections in seed positions for the thin seed scans and 3.0 {+-} 1.2 manual corrections in seed positions for the standard seed scans. The average error in seed placement was 1.2 mm for both seed types and the maximum error in seed placement was 2.1 mm for the thin seed scans and 1.8 mm for the standard seed scans. Conclusions: The 9011 thin seeds yielded significantly improved image quality for CT and US images but no significant differences in MR image quality.

  8. The Deep Space Network stability analyzer

    NASA Technical Reports Server (NTRS)

    Breidenthal, Julian C.; Greenhall, Charles A.; Hamell, Robert L.; Kuhnle, Paul F.

    1995-01-01

    A stability analyzer for testing NASA Deep Space Network installations during flight radio science experiments is described. The stability analyzer provides realtime measurements of signal properties of general experimental interest: power, phase, and amplitude spectra; Allan deviation; and time series of amplitude, phase shift, and differential phase shift. Input ports are provided for up to four 100 MHz frequency standards and eight baseband analog (greater than 100 kHz bandwidth) signals. Test results indicate the following upper bounds to noise floors when operating on 100 MHz signals: -145 dBc/Hz for phase noise spectrum further than 200 Hz from carrier, 2.5 x 10(exp -15) (tau =1 second) and 1.5 x 10(exp -17) (tau =1000 seconds) for Allan deviation, and 1 x 10(exp -4) degrees for 1-second averages of phase deviation. Four copies of the stability analyzer have been produced, plus one transportable unit for use at non-NASA observatories.

  9. The performance of the standard rate turn (SRT) by student naval helicopter pilots.

    PubMed

    Chapman, F; Temme, L A; Still, D L

    2001-04-01

    During flight training, student naval helicopter pilots learn the use of flight instruments through a prescribed series of simulator training events. The training simulator is a 6-degrees-of-freedom, motion-based, high-fidelity instrument trainer. From the final basic instrument simulator flights of student pilots, we selected for evaluation and analysis their performance of the Standard Rate Turn (SRT), a routine flight maneuver. The performance of the SRT was scored with airspeed, altitude, and heading average error from target values and standard deviations. These average errors and standard deviations were used in a multivariate analysis of variance (MANOVA) to evaluate the effects of three independent variables: 1) direction of turn (left vs. right); 2) degree of turn (180 vs. 360 degrees); and 3) segment of turn (roll-in, first 30 s, last 30 s, and roll-out of turn). Only the main effects of the three independent variables were significant; there were no significant interactions. This result greatly reduces the number of different conditions that should be scored separately for the evaluation of SRT performance. The results also showed that the magnitude of the heading and altitude errors at the beginning of the SRT correlated with the magnitude of the heading and altitude errors throughout the turn. This result suggests that for the turn to be well executed, it is important for it to begin with little error in these two response parameters. The observations reported here should be considered when establishing SRT performance norms and comparing student scores. Furthermore, it seems easier for pilots to maintain good performance than to correct poor performance.
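    The scoring described above (signed average error from a target value, plus the standard deviation of that error) can be sketched as follows. The altitude trace and target below are hypothetical illustrations, not data from the study:

```python
import statistics

def maneuver_scores(samples, target):
    """Score a recorded flight parameter (e.g. altitude) against its
    target value: signed average error and standard deviation of the
    error, the two quantities used to score each SRT segment."""
    errors = [s - target for s in samples]
    return statistics.mean(errors), statistics.stdev(errors)

# Hypothetical altitude trace (feet) for one turn segment, target 1000 ft:
altitudes = [1004, 998, 1010, 995, 1002, 1007, 993, 1001]
avg_err, sd_err = maneuver_scores(altitudes, 1000)
```

    The same function would be applied per segment (roll-in, first 30 s, last 30 s, roll-out) and per parameter (airspeed, altitude, heading) to build the MANOVA inputs.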

  10. The determination of ethyl glucuronide in hair: Experiences from nine consecutive interlaboratory comparison rounds.

    PubMed

    Becker, R; Lô, I; Sporkert, F; Baumgartner, M

    2018-07-01

    The increasing request for hair ethyl glucuronide (HEtG) in alcohol consumption monitoring according to cut-off levels set by the Society of Hair Testing (SoHT) has triggered a proficiency testing program based on interlaboratory comparisons (ILC). Here, the outcome of nine consecutive ILC rounds organised by the SoHT on the determination of HEtG between 2011 and 2017 is summarised regarding interlaboratory reproducibility and the influence of procedural variants. Test samples prepared from cut hair (1 mm) with authentic (in-vivo incorporated) and soaked (in-vitro incorporated) HEtG concentrations up to 80 pg/mg were provided for 27-35 participating laboratories. Laboratory results were evaluated according to ISO 5725-5 and provided robust averages and relative reproducibility standard deviations typically between 20 and 35%, in reasonable accordance with the prediction of the Horwitz model. Evaluation of results regarding the analytical techniques revealed no significant differences between gas and liquid chromatographic methods. In contrast, a detailed evaluation of different sample preparations revealed significantly higher average values when pulverised hair is tested compared to cut hair. This observation was reinforced over the different ILC rounds and can be attributed to the increased acceptance and routine use of hair pulverisation among laboratories. Further, the reproducibility standard deviations among laboratories performing pulverisation were on average in very good agreement with the prediction of the Horwitz model. Use of sonication showed no effect on the HEtG extraction yield.

  11. Neutronics Investigations for the Lower Part of a Westinghouse SVEA-96+ Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, M.F.; Luethi, A.; Seiler, R.

    2002-05-15

    Accurate critical experiments have been performed for the validation of total fission (F{sub tot}) and {sup 238}U-capture (C{sub 8}) reaction rate distributions obtained with CASMO-4, HELIOS, BOXER, and MCNP4B for the lower axial region of a real Westinghouse SVEA-96+ fuel assembly. The assembly comprised fresh fuel with an average {sup 235}U enrichment of 4.02 wt%, a maximum enrichment of 4.74 wt%, 14 burnable-absorber fuel pins, and full-density water moderation. The experimental configuration investigated was core 1A of the LWR-PROTEUS Phase I project, where 61 different fuel pins, representing {approx}64% of the assembly, were gamma-scanned individually. Calculated (C) and measured (E) values have been compared in terms of C/E distributions. For F{sub tot}, the standard deviations are 1.2% for HELIOS, 0.9% for CASMO-4, 0.8% for MCNP4B, and 1.7% for BOXER. Standard deviations of 1.1% for HELIOS, CASMO-4, and MCNP4B and 1.2% for BOXER were obtained in the case of C{sub 8}. Despite the high degree of accuracy observed on average, it was found that the five burnable-absorber fuel pins investigated showed a noticeable underprediction of F{sub tot}, quite systematically, for the deterministic codes evaluated (average C/E for the burnable-absorber fuel pins in the range 0.974 to 0.988, depending on the code).

  12. THE DISPERSION OF HERBACEOUS PLANT POLLEN IN ITO CITY, SHIZUOKA.

    PubMed

    Fujii, Mayumi; Makiyama, Kiyoshi; Okazaki, Kenji; Hisamatsu, Kenichi

    2016-08-01

    Airborne pollen was examined in Ito City, Shizuoka, for the purpose of treatment and prophylaxis of pollen allergies, because patients with pollen allergy to herbaceous plants have recently increased. Using a Durham's sampler, we measured airborne pollen, identified and classified as: Poaceae, Polygonaceae, Amaranthaceae, Urticaceae, Cannabaceae, Ambrosia, and Artemisia indica. We also studied whether each airborne pollen count was related to weather conditions (2004-2015). The average total airborne Poaceae pollen count and standard deviation from January to June was 19.4±5.5 cells/cm(2); the average total airborne Polygonaceae pollen count and standard deviation from April to September was 11.6±13.4 cells/cm(2). Airborne Poaceae, Amaranthaceae, Cannabaceae, Urticaceae, Ambrosia, and Artemisia indica pollen counts from July to December were, in order: 34.0±15.5, 1.3±1.1, 8.7±6.4, 4.9±6.4, 10.5±7.8, and 13.6±16.3 cells/cm(2). The airborne Cannabaceae pollen count correlated negatively with rainfall, and the airborne Artemisia indica pollen count correlated negatively with average temperature. Herbaceous plant pollen is unlikely to cause serious allergies in Ito City because it is much less abundant than tree pollen. It is thought that the diversity of the plants, together with the warm weather in this area, keeps people from developing a serious pollen allergy.

  13. Multiplets: Their behavior and utility at dacitic and andesitic volcanic centers

    USGS Publications Warehouse

    Thelen, W.; Malone, S.; West, M.

    2011-01-01

    Multiplets, or groups of earthquakes with similar waveforms, are commonly observed at volcanoes, particularly those exhibiting unrest. Using triggered seismic data from the 1980-1986 Mount St. Helens (MSH) eruption, we have constructed a catalog of multiplet occurrence. Our analysis reveals that the occurrence of multiplets is related, at least in part, to the viscosity of the magma. We also constructed catalogs of multiplet occurrence using continuous seismic data from the 2004 eruption at MSH and 2007 eruption at Bezymianny Volcano, Russia. Prior to explosions at MSH in 2004 and Bezymianny in 2007, the multiplet proportion of total seismicity (MPTS) declined, while the average amplitudes and standard deviations of the average amplitude increased. The life spans of multiplets (time between the first and last event) were also shorter prior to explosions than during passive lava extrusion. Dome-forming eruptions that include a partially solidified plug, like MSH (1983-1986 and 2004-2008), often possess multiplets with longer life spans and MPTS values exceeding 50%. Conceptually, the relatively unstable environment prior to explosions is characterized by large and variable stress gradients brought about by rapidly changing overpressures within the conduit. We infer that such complex stress fields affect the number of concurrent families, MPTS, average amplitude, and standard deviation of the amplitude of the multiplets. We also argue that multiplet detection may be an important new monitoring tool for determining the timing of explosions and in forecasting the type of eruption.
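    The two catalog statistics named above, the multiplet proportion of total seismicity (MPTS) and the multiplet life span (time between the first and last event of a family), can be sketched as follows. The event catalog and family labels below are hypothetical; a real workflow would first assign family labels via waveform cross-correlation:

```python
def multiplet_stats(events):
    """Compute MPTS and multiplet life spans from a list of
    (time, family_id) tuples, where family_id is None for events
    that belong to no multiplet.

    MPTS      = fraction of all events belonging to a multiplet
                (a family with at least two similar-waveform events).
    life span = time between the first and last event of a family.
    """
    families = {}
    for t, fam in events:
        if fam is not None:
            families.setdefault(fam, []).append(t)
    # keep only true multiplets (two or more events)
    families = {f: ts for f, ts in families.items() if len(ts) >= 2}
    in_multiplet = sum(len(ts) for ts in families.values())
    mpts = in_multiplet / len(events)
    life_spans = {f: max(ts) - min(ts) for f, ts in families.items()}
    return mpts, life_spans

# Hypothetical catalog: times in hours, two families plus lone events.
catalog = [(0.0, "A"), (1.5, None), (2.0, "A"), (3.1, "B"),
           (4.0, "B"), (5.2, "B"), (6.0, None), (7.5, "A")]
mpts, spans = multiplet_stats(catalog)  # 6 of 8 events are in multiplets
```

    A declining MPTS computed this way over a sliding window is the kind of quantity the authors propose monitoring ahead of explosions.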

  14. LANDSAT-4 horizon scanner full orbit data averages

    NASA Technical Reports Server (NTRS)

    Stanley, J. P.; Bilanow, S.

    1983-01-01

    Averages taken over full orbit data spans of the pitch and roll residual measurement errors of the two conical Earth sensors operating on the LANDSAT 4 spacecraft are described. The variability of these full orbit averages over representative data throughout the year is analyzed to demonstrate the long term stability of the sensor measurements. The data analyzed consist of 23 segments of sensor measurements made at 2 to 4 week intervals. Each segment is roughly 24 hours in length. The variation of the full orbit average as a function of orbit within a day and as a function of day of year is examined. The dependence on day of year is based on associating the start date of each segment with the mean full orbit average for the segment. The peak-to-peak and standard deviation values of the averages for each data segment are computed and their variation with day of year is also examined.

  15. Nutrient intake values (NIVs): a recommended terminology and framework for the derivation of values.

    PubMed

    King, Janet C; Vorster, Hester H; Tome, Daniel G

    2007-03-01

    Although most countries and regions around the world set recommended nutrient intake values for their populations, there is no standardized terminology or framework for establishing these standards. Different terms used for various components of a set of dietary standards are described in this paper and a common set of terminology is proposed. The recommended terminology suggests that the set of values be called nutrient intake values (NIVs) and that the set be composed of three different values. The average nutrient requirement (ANR) reflects the median requirement for a nutrient in a specific population. The individual nutrient level (INLx) is the recommended level of nutrient intake for all healthy people in the population, which is set at a certain level x above the mean requirement. For example, a value set at 2 standard deviations above the mean requirement would cover the needs of 98% of the population and would be INL98. The third component of the NIVs is an upper nutrient level (UNL), which is the highest level of daily nutrient intake that is likely to pose no risk of adverse health effects for almost all individuals in a specified life-stage group. The proposed framework for deriving a set of NIVs is based on a statistical approach for determining the midpoint of a distribution of requirements for a set of nutrients in a population (the ANR), the standard deviation of the requirements, and an individual nutrient level that assures health at some point above the mean, e.g., 2 standard deviations. Ideally, a second set of distributions of risk of excessive intakes is used as the basis for a UNL.
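    A minimal numerical sketch of the proposed framework, assuming a hypothetical sample of individual requirements for some nutrient; the z = 2 choice mirrors the INL98 example above:

```python
import statistics

def nutrient_intake_values(requirements, z=2.0):
    """Derive ANR and INLx from a sample of individual requirements.

    ANR  = median requirement in the population (here: sample median).
    INLx = intake level set z standard deviations above the mean
           requirement; z = 2 covers roughly 98% of the population,
           giving INL98 in the terminology above.
    """
    anr = statistics.median(requirements)
    mean = statistics.mean(requirements)
    sd = statistics.stdev(requirements)
    inl = mean + z * sd
    return anr, inl

# Hypothetical daily requirements (mg/day) for an unnamed nutrient:
reqs = [8.1, 9.4, 10.0, 10.2, 10.9, 11.3, 12.0, 12.8]
anr, inl98 = nutrient_intake_values(reqs)
```

    The UNL would come from a separate distribution of risk of excessive intake, so it is not derived from this sample.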

  16. Basic life support: evaluation of learning using simulation and immediate feedback devices.

    PubMed

    Tobase, Lucia; Peres, Heloisa Helena Ciqueto; Tomazini, Edenir Aparecida Sartorelli; Teodoro, Simone Valentim; Ramos, Meire Bruna; Polastri, Thatiane Facholi

    2017-10-30

    To evaluate students' learning in an online course on basic life support with immediate feedback devices, during a simulation of care during cardiorespiratory arrest. A quasi-experimental study, using a before-and-after design. An online course on basic life support was developed and administered to participants as an educational intervention. Theoretical learning was evaluated by means of a pre- and post-test and, to verify practice, simulation with immediate feedback devices was used. There were 62 participants, 87% female, 90% in the first and second year of college, with a mean age of 21.47 (standard deviation 2.39). With a 95% confidence level, the mean score in the pre-test was 6.4 (standard deviation 1.61) and 9.3 in the post-test (standard deviation 0.82, p < 0.001); in practice, 9.1 (standard deviation 0.95), with performance equivalent to basic cardiopulmonary resuscitation according to the feedback device; mean duration of the compression cycle 43.7 (standard deviation 26.86), by second of 20.5 (standard deviation 9.47); number of compressions 167.2 (standard deviation 57.06); depth of compressions 48.1 millimeters (standard deviation 10.49); volume of ventilation 742.7 (standard deviation 301.12); flow fraction percentage 40.3 (standard deviation 10.03). The online course contributed to learning of basic life support. In view of the need for technological innovations in teaching and the systematization of cardiopulmonary resuscitation, simulation and feedback devices are resources that favor learning and performance awareness in performing the maneuvers.

  17. Assessment of corneal epithelial thickness in dry eye patients.

    PubMed

    Cui, Xinhan; Hong, Jiaxu; Wang, Fei; Deng, Sophie X; Yang, Yujing; Zhu, Xiaoyu; Wu, Dan; Zhao, Yujin; Xu, Jianjiang

    2014-12-01

    To investigate the features of corneal epithelial thickness topography with Fourier-domain optical coherence tomography (OCT) in dry eye patients. In this cross-sectional study, 100 symptomatic dry eye patients and 35 normal subjects were enrolled. All participants answered the ocular surface disease index questionnaire and were subjected to OCT, corneal fluorescein staining, tear breakup time, Schirmer 1 test without anesthetic (S1t), and meibomian morphology. Several epithelium statistics for each eye, including central, superior, inferior, minimum, maximum, minimum - maximum, and map standard deviation, were averaged. Correlations of epithelial thickness with the symptoms of dry eye were calculated. The mean (±SD) central, superior, and inferior corneal epithelial thickness was 53.57 (±3.31) μm, 52.00 (±3.39) μm, and 53.03 (±3.67) μm in normal eyes and 52.71 (±2.83) μm, 50.58 (±3.44) μm, and 52.53 (±3.36) μm in dry eyes, respectively. The superior corneal epithelium was thinner in dry eye patients compared with normal subjects (p = 0.037), whereas central and inferior epithelium were not statistically different. In the dry eye group, patients with higher severity grades had thinner superior (p = 0.017) and minimum (p < 0.001) epithelial thickness, more wide range (p = 0.032), and greater deviation (p = 0.003). The average central epithelial thickness had no correlation with tear breakup time, S1t, or the severity of meibomian glands, whereas average superior epithelial thickness positively correlated with S1t (r = 0.238, p = 0.017). Fourier-domain OCT demonstrated that the thickness map of the dry eye corneal epithelium was thinner than normal eyes in the superior region. In more severe dry eye disease patients, the superior and minimum epithelium was much thinner, with a greater range of map standard deviation.

  18. Time-dependent gravity in Southern California, May 1974 to April 1979

    NASA Technical Reports Server (NTRS)

    Whitcomb, J. H.; Franzen, W. O.; Given, J. W.; Pechmann, J. C.; Ruff, L. J.

    1980-01-01

    The Southern California gravity survey, begun in May 1974 to obtain high spatial and temporal density gravity measurements to be coordinated with long-baseline three-dimensional geodetic measurements of the Astronomical Radio Interferometric Earth Surveying project, is presented. Gravity data were obtained from 28 stations located in and near the seismically active San Gabriel section of the Southern California Transverse Ranges and adjoining San Andreas Fault, at intervals of one to two months, using gravity meters referenced to a base-station standard meter. A single-reading standard deviation of 11 microGal is obtained, which leads to a relative deviation of 16 microGal between stations, with data averaging reducing the standard error to 2 to 3 microGal. The largest gravity variations observed are found to correlate with nearby well water variations and smoothed rainfall levels, indicating the importance of ground water variations to gravity measurements. The largest earthquake to occur during the survey, which extended to April 1979, is found to be accompanied, at the station closest to the earthquake, by the largest measured gravity changes that cannot be related to factors other than tectonic distortion.

  19. Comparison of the Relationship between Words Retained and Intelligence for Three Instructional Strategies among Students with Below-Average IQ

    ERIC Educational Resources Information Center

    Burns, Matthew K.; Boice, Christina H.

    2009-01-01

    The current study replicated MacQuarrie, Tucker, Burns, and Hartman (2002) with a sample of 20 students who had been identified with a disability and had an IQ score that was between 1 and 3 standard deviations below the normative mean. Each student was taught 27 words from the Esperanto International Language with the following conditions: (a)…

  20. Government and Happiness in 130 Nations: Good Governance Fosters Higher Level and More Equality of Happiness

    ERIC Educational Resources Information Center

    Ott, J. C.

    2011-01-01

    There are substantial differences in happiness in nations. Average happiness on scale 0-10 ranges in 2006 from 3.24 in Togo to 8.00 in Denmark and the inequality of happiness, as measured by the standard deviation, ranges from 0.85 in Laos to 3.02 in the Dominican Republic. Much of these differences are due to quality of governance and in…

  1. An Assessment of the Condition of Coral Reefs off the Former Navy Bombing Ranges at Isla De Culebra and Isla De Vieques, Puerto Rico

    DTIC Science & Technology

    2005-04-01

    A Bray-Curtis distance measure with an Unweighted Pair Group Method with Arithmetic Averages (UPGMA) linkage method was used to perform a cluster analysis of the reef condition indicators.

  2. Instructor/Operator Display Evaluation Methods

    DTIC Science & Technology

    1981-03-01

    …standard deviation (S.D.) of 3.0 years. Total flight hours averaged 2248 (S.D. = 858). Current equipment for 16 pilots was the C-130 in which they…

  3. A stochastic approach to noise modeling for barometric altimeters.

    PubMed

    Sabatini, Angelo Maria; Genovese, Vincenzo

    2013-11-18

    The question of whether barometric altimeters can be applied to accurately track human motion is still debated, since their measurement performance is rather poor due to either coarse resolution or drifting behavior. As a step toward accurate short-time tracking of changes in height (up to a few minutes), we develop a stochastic model that attempts to capture some statistical properties of the barometric altimeter noise. The barometric altimeter noise is decomposed into three components with different physical origins and properties: a deterministic time-varying mean, mainly correlated with global environment changes; a first-order Gauss-Markov (GM) random process, mainly accounting for short-term, local environment changes, the effects of which are prominent, respectively, for long-time and short-time motion tracking; and an uncorrelated random process, mainly due to wideband electronic noise, including quantization noise. Autoregressive moving-average (ARMA) system identification techniques are used to capture the correlation structure of the piecewise-stationary GM component and to estimate its standard deviation, together with the standard deviation of the uncorrelated component. M-point moving average filters, used alone or in combination with whitening filters learnt from the ARMA model parameters, are further tested in a few dynamic motion experiments and discussed for their capability of short-time tracking of small-amplitude, low-frequency motions.
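    The noise decomposition above can be illustrated with a small simulation: a first-order Gauss-Markov component plus uncorrelated white noise, smoothed with an M-point moving average. All parameter values (time constant, standard deviations, filter length) are illustrative stand-ins, not the paper's identified values:

```python
import math
import random
import statistics

def gauss_markov(n, tau, sigma, dt=1.0, rng=None):
    """First-order Gauss-Markov process x[k] = a*x[k-1] + w[k], with
    a = exp(-dt/tau) and the driving white noise w scaled so the
    process has steady-state standard deviation `sigma`."""
    rng = rng or random.Random(0)
    a = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - a * a)  # driving-noise std dev
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0.0, q)
        out.append(x)
    return out

def moving_average(signal, m):
    """M-point moving average filter (causal, full windows only)."""
    return [sum(signal[i - m + 1:i + 1]) / m for i in range(m - 1, len(signal))]

rng = random.Random(42)
# Correlated GM component (slow local pressure changes) plus
# uncorrelated wideband noise, as in the decomposition above.
gm = gauss_markov(20000, tau=30.0, sigma=0.3, rng=rng)
raw = [g + rng.gauss(0.0, 0.5) for g in gm]
smooth = moving_average(raw, 16)

raw_sd = statistics.pstdev(raw)
smooth_sd = statistics.pstdev(smooth)
# The filter attenuates the white component by ~1/sqrt(16), while the
# GM component (correlated over ~tau samples) passes largely intact —
# which is why the paper pairs moving averages with whitening filters.
```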

  4. Intra-operative ultrasound-based augmented reality guidance for laparoscopic surgery.

    PubMed

    Singla, Rohit; Edgcumbe, Philip; Pratt, Philip; Nguan, Christopher; Rohling, Robert

    2017-10-01

    In laparoscopic surgery, the surgeon must operate with a limited field of view and reduced depth perception. This makes spatial understanding of critical structures difficult, such as an endophytic tumour in a partial nephrectomy. Such tumours yield a high complication rate of 47%, and excising them increases the risk of cutting into the kidney's collecting system. To overcome these challenges, an augmented reality guidance system is proposed. Using intra-operative ultrasound, a single navigation aid, and surgical instrument tracking, four augmentations of guidance information are provided during tumour excision. Qualitative and quantitative system benefits are measured in simulated robot-assisted partial nephrectomies. Robot-to-camera calibration achieved a total registration error of 1.0 ± 0.4 mm, while the total system error is 2.5 ± 0.5 mm. The system significantly reduced healthy tissue excised from an average (±standard deviation) of 30.6 ± 5.5 to 17.5 ± 2.4 cm(3) (p < 0.05) and reduced the depth from the tumour underside to the cut from an average (±standard deviation) of 10.2 ± 4.1 to 3.3 ± 2.3 mm (p < 0.05). Further evaluation is required in vivo, but the system has promising potential to reduce the amount of healthy parenchymal tissue excised.

  5. Correlation between Macular Thickness and Visual Field in Early Open Angle Glaucoma: A Cross-Sectional Study.

    PubMed

    Fallahi Motlagh, Behzad; Sadeghi, Ali

    2017-01-01

    The aim of this study was to correlate macular thickness and visual field parameters in early glaucoma. A total of 104 eyes affected with early glaucoma were examined in a cross-sectional, prospective study. Visual field testing using both standard automated perimetry (SAP) and shortwave automated perimetry (SWAP) was performed. Global visual field parameters, including mean deviation (MD) and pattern standard deviation (PSD), were recorded and correlated with spectral domain optical coherence tomography (SD-OCT)-measured macular thickness and asymmetry. Average macular thickness correlated significantly with all measures of visual field including MD-SWAP (r = 0.42), MD-SAP (r = 0.41), PSD-SWAP (r = -0.23), and PSD-SAP (r = -0.21), with P-values <0.001 for all correlations. The mean MD scores (using both SWAP and SAP) were significantly higher in the eyes with thin than in those with intermediate average macular thickness. Intraeye (superior macula thickness - inferior macula thickness) asymmetries correlated significantly with both PSD-SWAP (r = 0.63, P < 0.001) and PSD-SAP (r = 0.26, P = 0.01) scores. This study revealed a significant correlation between macular thickness and visual field parameters in early glaucoma. The results of this study should make macular thickness measurements even more meaningful to glaucoma specialists.

  6. Ordovician Jeleniów Claystone Formation of the Holy Cross Mountains, Poland - Reconstruction of Redox Conditions Using Pyrite Framboid Study

    NASA Astrophysics Data System (ADS)

    Smolarek, Justyna; Marynowski, Leszek; Trela, Wiesław

    2014-09-01

    The aim of this research is to reconstruct palaeoredox conditions during sedimentation of the Jeleniów Claystone Formation deposits, using framboid pyrite diameter measurements. Analysis of the pyrite framboid diameter distribution is an effective method in palaeoenvironmental interpretation which allows a more detailed insight into the redox conditions, and thus the distinction between euxinic, dysoxic and anoxic conditions. Most of the samples are characterized by framboid indicators typical of anoxic/euxinic conditions in the water column, with average (mean) values ranging from 5.29 to 6.02 μm and quite low standard deviation (SD) values ranging from 1.49 to 3.0. The remaining samples have shown slightly higher framboid diameters typical of upper dysoxic conditions, with average values (6.37 to 7.20 μm) and low standard deviation (SD) values (1.88 to 2.88). From the depth of 75.5 m to the shallowest part of the Jeleniów Claystone Formation, two samples have been examined and no framboids have been detected. Because secondary weathering should be excluded, the lack of framboids possibly indicates oxic conditions in the water column. Oxic conditions continue within the Wólka Formation, based on the lack of framboids in the ZB 51.6 sample.

  8. Using an external gating signal to estimate noise in PET with an emphasis on tracer avid tumors

    NASA Astrophysics Data System (ADS)

    Schmidtlein, C. R.; Beattie, B. J.; Bailey, D. L.; Akhurst, T. J.; Wang, W.; Gönen, M.; Kirov, A. S.; Humm, J. L.

    2010-10-01

    The purpose of this study is to establish and validate a methodology for estimating the standard deviation of voxels with large activity concentrations within a PET image using replicate imaging that is immediately available for use in the clinic. To do this, ensembles of voxels in the averaged replicate images were compared to the corresponding ensembles in images derived from summed sinograms. In addition, the replicate imaging noise estimate was compared to a noise estimate based on an ensemble of voxels within a region. To make this comparison two phantoms were used. The first phantom was a seven-chamber phantom constructed of 1 liter plastic bottles. Each chamber of this phantom was filled with a different activity concentration relative to the lowest activity concentration with ratios of 1:1, 1:1, 2:1, 2:1, 4:1, 8:1 and 16:1. The second phantom was a GE Well-Counter phantom. These phantoms were imaged and reconstructed on a GE DSTE PET/CT scanner with 2D and 3D reprojection filtered backprojection (FBP), and with 2D- and 3D-ordered subset expectation maximization (OSEM). A series of tests were applied to the resulting images that showed that the region and replicate imaging methods for estimating standard deviation were equivalent for backprojection reconstructions. Furthermore, the noise properties of the FBP algorithms allowed scaling the replicate estimates of the standard deviation by a factor of 1/√N, where N is the number of replicate images, to obtain the standard deviation of the full data image. This was not the case for OSEM image reconstruction. Due to the nonlinearity of the OSEM algorithm, the noise is shown to be both position and activity concentration dependent in such a way that no simple scaling factor can be used to extrapolate noise as a function of counts. The use of the Well-Counter phantom contributed to the development of a heuristic extrapolation of the noise as a function of radius in FBP.
In addition, the signal-to-noise ratio for high uptake objects was confirmed to be higher with backprojection image reconstruction methods. These techniques were applied to several patient data sets acquired in either 2D or 3D mode, with 18F (FLT and FDG). Images of the standard deviation and signal-to-noise ratios were constructed and the standard deviations of the tumors' uptake were determined. Finally, a radial noise extrapolation relationship deduced in this paper was applied to patient data.
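
    The 1/√N replicate-averaging behavior reported above for FBP can be illustrated with a toy simulation (a hedged sketch using NumPy and synthetic Gaussian voxel noise, not the study's PET data; all values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate N replicate "images" of a uniform region: true value 100, noise sigma 10.
N = 16
sigma = 10.0
replicates = rng.normal(100.0, sigma, size=(N, 64, 64))

# Noise of a single replicate, estimated over an ensemble of voxels in the region.
single_sd = replicates[0].std()

# Averaging the N replicates reduces the per-voxel noise by ~1/sqrt(N),
# the scaling the study found to hold for FBP but not for OSEM.
mean_image = replicates.mean(axis=0)
avg_sd = mean_image.std()

print(single_sd / avg_sd)  # close to sqrt(N) = 4
```

    For a nonlinear reconstruction such as OSEM, no such constant scaling applies, which is why the replicate and region estimates must be validated separately per algorithm.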

  9. Uncertainty in Vs30-based site response

    USGS Publications Warehouse

    Thompson, Eric M.; Wald, David J.

    2016-01-01

Methods that account for site response range in complexity from simple linear categorical adjustment factors to sophisticated nonlinear constitutive models. Seismic‐hazard analysis usually relies on ground‐motion prediction equations (GMPEs); within this framework site response is modeled statistically with simplified site parameters that include the time‐averaged shear‐wave velocity to 30 m (VS30) and basin depth parameters. Because VS30 is not known in most locations, it must be interpolated or inferred through secondary information such as geology or topography. In this article, we analyze a subset of stations for which VS30 has been measured to address effects of VS30 proxies on the uncertainty in the ground motions as modeled by GMPEs. The stations we analyze also include multiple recordings, which allow us to compute the repeatable site effects (or empirical amplification factors [EAFs]) from the ground motions. Although all methods exhibit similar bias, the proxy methods only reduce the ground‐motion standard deviations at long periods when compared to GMPEs without a site term, whereas measured VS30 values reduce the standard deviations at all periods. The standard deviations of the ground motions are much lower when the EAFs are used, indicating that future refinements of the site term in GMPEs have the potential to substantially reduce the overall uncertainty in the prediction of ground motions by GMPEs.

  10. Variability in Wechsler Adult Intelligence Scale-IV subtest performance across age.

    PubMed

    Wisdom, Nick M; Mignogna, Joseph; Collins, Robert L

    2012-06-01

Normal Wechsler Adult Intelligence Scale (WAIS)-IV performance relative to average normative scores alone can be an oversimplification, as this fails to recognize the disparate subtest heterogeneity that occurs with increasing age. The purpose of the present study is to characterize the patterns of raw score change and associated variability on WAIS-IV subtests across age groupings. Raw WAIS-IV subtest means and standard deviations for each age group were tabulated from the WAIS-IV normative manual along with the coefficient of variation (CV), a measure of score dispersion calculated by dividing the standard deviation by the mean and multiplying by 100. The CV further informs the magnitude of variability represented by each standard deviation. Raw mean scores predictably decreased across age groups. Increased variability was noted in Perceptual Reasoning and Processing Speed Index subtests, as Block Design, Matrix Reasoning, Picture Completion, Symbol Search, and Coding had CV percentage increases ranging from 56% to 98%. In contrast, Working Memory and Verbal Comprehension subtests were more homogeneous, with Digit Span, Comprehension, Information, and Similarities showing CV increases ranging from 32% to 43%. Little change in the CV was noted on Cancellation, Arithmetic, Letter/Number Sequencing, Figure Weights, Visual Puzzles, and Vocabulary subtests (<14%). A thorough understanding of age-related subtest variability will help to identify test limitations as well as further our understanding of cognitive domains which remain relatively steady versus those which steadily decline.
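
    The coefficient of variation described above is a one-line calculation; the sketch below (Python, with hypothetical raw-score samples rather than actual WAIS-IV norms) shows how a wider relative spread yields a larger CV even when the means differ:

```python
import statistics

def coefficient_of_variation(scores):
    """CV = (standard deviation / mean) * 100, as used to compare subtest dispersion."""
    return statistics.stdev(scores) / statistics.mean(scores) * 100

# Hypothetical raw-score distributions for one subtest at two age bands.
young = [42, 45, 48, 50, 52, 55, 58]
old = [30, 34, 40, 44, 50, 55, 60]

cv_young = coefficient_of_variation(young)
cv_old = coefficient_of_variation(old)
print(round(cv_young, 1), round(cv_old, 1))
```

    Note this sketch uses the sample standard deviation; the normative manual's tabulated values may be defined slightly differently.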

  11. Determination of Wastewater Compounds in Whole Water by Continuous Liquid-Liquid Extraction and Capillary-Column Gas Chromatography/Mass Spectrometry

    USGS Publications Warehouse

    Zaugg, Steven D.; Smith, Steven G.; Schroeder, Michael P.

    2006-01-01

A method for the determination of 69 compounds typically found in domestic and industrial wastewater is described. The method was developed in response to increasing concern over the impact of endocrine-disrupting chemicals on aquatic organisms in wastewater. This method also is useful for evaluating the effects of combined sanitary and storm-sewer overflow on the water quality of urban streams. The method focuses on the determination of compounds that are indicators of wastewater or have endocrine-disrupting potential. These compounds include the alkylphenol ethoxylate nonionic surfactants, food additives, fragrances, antioxidants, flame retardants, plasticizers, industrial solvents, disinfectants, fecal sterols, polycyclic aromatic hydrocarbons, and high-use domestic pesticides. Wastewater compounds in whole-water samples were extracted using continuous liquid-liquid extractors and methylene chloride solvent, and then determined by capillary-column gas chromatography/mass spectrometry. Recoveries in reagent-water samples fortified at 0.5 microgram per liter averaged 72 percent ± 8 percent relative standard deviation. The concentrations of 21 compounds are always reported as estimated because method recovery was less than 60 percent, variability was greater than 25 percent relative standard deviation, or standard reference compounds were prepared from technical mixtures. Initial method detection limits averaged 0.18 microgram per liter. Samples were preserved by adding 60 grams of sodium chloride and stored at 4 degrees Celsius. The laboratory established a sample holding-time limit prior to sample extraction of 14 days from the date of collection.

  12. SU-F-T-536: Contra-Lateral Breast Study for Prone Versus Supine Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marrero, M; Joseph, K; Klein, E

Purpose: There are several advantages to utilizing the prone technique for intact breast cancer patients. However, as the topography changes, accompanied by the influence of a supporting breast board and patient treatment couch, the question arises as to whether there is a concern for contralateral breast dose for intact breast cancer patients treated with this technique. Methods: An anthropomorphic phantom with breast mounds to duplicate intact breast cancer treatment was planned in the prone and supine positions. Two tangential beams were delivered in a manner matching the radiotherapy planning system. For the prone setup, a dense foam breast board was used to support the phantom. A grid of 24 OSL nanodots was placed at 6 cm, 4 cm, and 2 cm from the medial border for both the prone and supine setups. The phantom was set up using megavoltage imaging and treated as per plan. Additionally, a similar study was performed on a patient treated in the prone position. Results: Overall, the contralateral breast dose was generally higher for prone setups at all locations, especially close to the medial border. The average mean dose was 1.8% and 2.5% of the prescribed dose for the supine and prone positions, respectively. The average standard deviation was 1.04% and 1.38% for the supine and prone positions, respectively. For the patient treated in the prone position, the average mean dose was 1.165% of the prescribed dose and the average standard deviation was 9.456%. Conclusion: There is minimal influence of scatter from the breast board. It appears that the volatility of the setup could lead to higher doses to the contralateral breast than expected from the planning system when the patient is in the prone position.

  13. Statistical performance of image cytometry for DNA, lipids, cytokeratin, & CD45 in a model system for circulation tumor cell detection.

    PubMed

    Futia, Gregory L; Schlaepfer, Isabel R; Qamar, Lubna; Behbakht, Kian; Gibson, Emily A

    2017-07-01

Detection of circulating tumor cells (CTCs) in a blood sample is limited by the sensitivity and specificity of the biomarker panel used to identify CTCs over other blood cells. In this work, we present Bayesian theory that shows how test sensitivity and specificity set the rarity of cell that a test can detect. We perform our calculation of sensitivity and specificity on our image cytometry biomarker panel by testing on pure disease-positive (D+) populations (MCF7 cells) and pure disease-negative (D-) populations (leukocytes). In this system, we performed multi-channel confocal fluorescence microscopy to image biomarkers of DNA, lipids, CD45, and cytokeratin. Using custom software, we segmented our confocal images into regions of interest consisting of individual cells and computed the image metrics of total signal, second spatial moment, spatial-frequency second moment, and the product of the spatial and spatial-frequency moments. We present our analysis of these 16 features. The best performing of the 16 features produced an average separation of three standard deviations between D+ and D- and an average detectable rarity of ∼1 in 200. We performed multivariable regression and feature selection to combine multiple features for increased performance and showed an average separation of seven standard deviations between the D+ and D- populations, making our average detectable rarity ∼1 in 480. Histograms and receiver operating characteristic (ROC) curves for these features and regressions are presented. We conclude that simple regression analysis holds promise to further improve the separation of rare cells in cytometry applications. © 2017 International Society for Advancement of Cytometry.
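
    The link between sensitivity, specificity, and detectable rarity follows from Bayes' rule. The sketch below assumes, as a working criterion, that a cell type is "detectable" when a positive call is more likely true than false (posterior PPV ≥ 0.5); the paper's exact criterion and numerical values may differ:

```python
def ppv(sensitivity, specificity, prevalence):
    """Posterior probability that a test-positive cell is truly a CTC (Bayes' rule)."""
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    return tp / (tp + fp)

def detectable_rarity(sensitivity, specificity, min_ppv=0.5):
    """Smallest prevalence at which the PPV still reaches min_ppv.

    Solving ppv >= min_ppv for the prevalence p gives
      p >= k*(1-spec) / (sens + k*(1-spec)),  with k = min_ppv/(1-min_ppv).
    """
    k = min_ppv / (1.0 - min_ppv)
    return k * (1.0 - specificity) / (sensitivity + k * (1.0 - specificity))

# Example: a marker panel that separates D+ and D- well (hypothetical numbers).
p = detectable_rarity(0.95, 0.999)
print(1 / p)  # the "1 in N" rarity this panel could resolve
```

    Under this criterion, pushing the detectable rarity toward 1 in 10^6 (typical of real CTC counts) requires specificity far closer to 1, which is why combining features to widen the D+/D- separation matters.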

  14. Development of a 100 nmol mol(-1) propane-in-air SRM for automobile-exhaust testing for new low-emission requirements.

    PubMed

    Rhoderick, George C

    2007-04-01

    New US federal low-level automobile emission requirements, for example zero-level-emission vehicle (ZLEV), for hydrocarbons and other species, have resulted in the need by manufacturers for new certified reference materials. The new emission requirement for hydrocarbons requires the use, by automobile manufacturing testing facilities, of a 100 nmol mol(-1) propane in air gas standard. Emission-measurement instruments are required, by federal law, to be calibrated with National Institute of Standards and Technology (NIST) traceable reference materials. Because a NIST standard reference material (SRM) containing 100 nmol mol(-1) propane was not available, the US Environmental Protection Agency (EPA) and the Automobile Industry/Government Emissions Research Consortium (AIGER) requested that NIST develop such an SRM. A cylinder lot of 30 gas mixtures containing 100 nmol mol(-1) propane in air was prepared in 6-L aluminium gas cylinders by a specialty gas company and delivered to the Gas Metrology Group at NIST. Another mixture, contained in a 30-L aluminium cylinder and included in the lot, was used as a lot standard (LS). Using gas chromatography with flame-ionization detection all 30 samples were compared to the LS to obtain the average of six peak-area ratios to the LS for each sample with standard deviations of <0.31%. The average sample-to-LS ratio determinations resulted in a range of 0.9828 to 0.9888, a spread of 0.0060, which corresponds to a relative standard deviation of 0.15% of the average for all 30 samples. NIST developed its first set of five propane in air primary gravimetric standards covering a concentration range 91 to 103 nmol mol(-1) with relative uncertainties of 0.15%. This new suite of propane gravimetric standards was used to analyze and assign a concentration value to the SRM LS. On the basis of these data each SRM sample was individually certified, furnishing the desired relative expanded uncertainty of +/-0.5%. 
Because automobile companies use total hydrocarbons to make their measurements, it was also vital to assign a methane concentration to the SRM samples. Some of the SRM samples were analyzed and found to contain 1.2 nmol mol(-1) methane. Twenty-five of the samples were certified and released as SRM 2765.

  15. Plantar pressure cartography reconstruction from 3 sensors.

    PubMed

    Abou Ghaida, Hussein; Mottet, Serge; Goujon, Jean-Marc

    2014-01-01

Foot problem diagnosis is often made using pressure mapping systems, which are unfortunately confined to laboratories. In the context of e-health and telemedicine for home monitoring of patients with foot problems, our focus is to present a system acceptable for daily use. We developed an ambulatory instrumented insole using 3 pressure sensors to visualize plantar pressure cartographies. We show that a standard insole with fixed sensor positions can be used for different foot sizes. The results show an average error, measured at each pixel, of 0.01 daN, with a standard deviation of 0.005 daN.

  16. Computations of unsteady multistage compressor flows in a workstation environment

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen L.

    1992-01-01

    High-end graphics workstations are becoming a necessary tool in the computational fluid dynamics environment. In addition to their graphic capabilities, workstations of the latest generation have powerful floating-point-operation capabilities. As workstations become common, they could provide valuable computing time for such applications as turbomachinery flow calculations. This report discusses the issues involved in implementing an unsteady, viscous multistage-turbomachinery code (STAGE-2) on workstations. It then describes work in which the workstation version of STAGE-2 was used to study the effects of axial-gap spacing on the time-averaged and unsteady flow within a 2 1/2-stage compressor. The results included time-averaged surface pressures, time-averaged pressure contours, standard deviation of pressure contours, pressure amplitudes, and force polar plots.

  17. Appraisal of application possibilities of smoothed splines to designation of the average values of terrain curvatures measured after the termination of hard coal exploitation conducted at medium depth

    NASA Astrophysics Data System (ADS)

    Orwat, J.

    2018-01-01

This paper presents calculations of the average values of terrain curvatures measured after the termination of successive exploitation stages in coal bed 338/2, located at medium depth. The curvatures were measured on neighbouring segments of measuring line No. 1, established perpendicularly to the runways of four longwalls, No. 001, 002, 005 and 007. The average courses of the measured curvatures were derived from the average courses of the measured inclinations, which in turn were calculated from the average values of the measured subsidence. The subsidence averages were obtained by least-squares approximation using smoothed splines, with reference to the theoretical courses determined by the formulas of S. Knothe and J. Bialek, using standard values of the roof-rock subsidence factor a, the exploitation rim Aobr, and the angle of the main influences range β. The standard deviations between the average and measured curvatures σC and the variability coefficients of random scattering of curvatures MC were calculated and compared with values reported in the literature, and on this basis the suitability of smoothed splines for designating the average course of the observed curvatures of a mining area was appraised.
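
    A minimal sketch of the smoothed-spline idea, using SciPy's UnivariateSpline on synthetic subsidence data (the authors' exact spline formulation, units, and parameter values are not reproduced here; everything below is hypothetical):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)

# Synthetic "measured subsidence" along a survey line: a smooth trough
# plus random measurement scatter (all values hypothetical, in metres).
x = np.linspace(0.0, 400.0, 41)                   # position along the line [m]
true = -1.5 * np.exp(-((x - 200.0) / 80.0) ** 2)  # smooth subsidence trough
obs = true + rng.normal(0.0, 0.05, x.size)        # observations with noise

# Least-squares smoothing spline; `s` bounds the sum of squared residuals,
# here set from the assumed measurement noise (n * sigma_noise**2).
spline = UnivariateSpline(x, obs, k=3, s=x.size * 0.05 ** 2)

# Average inclination and curvature follow as the 1st and 2nd derivatives
# of the smoothed (average) subsidence course.
inclination = spline.derivative(1)(x)
curvature = spline.derivative(2)(x)

# Residual scatter of the measurements about the average course.
resid_sd = float(np.sqrt(np.mean((obs - spline(x)) ** 2)))
print(resid_sd)
```

    The residual scatter about the smoothed course plays the role of the σC-type deviation statistics compared against the literature in the paper.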

  18. How do we assign punishment? The impact of minimal and maximal standards on the evaluation of deviants.

    PubMed

    Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven

    2010-09-01

    To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.

  19. The Efficiency of a Selective Training Program on the Development of Some Social Skills of Saudi Students with Autism

    ERIC Educational Resources Information Center

    Alothman, Ibrahim A.

    2016-01-01

The objective of the present study is to determine the efficiency of a selective training program on the development of some social skills of Saudi students with Autism. The study sample comprised 6 male students with Autism aged 9-12 years, with an average age of 10.58 years and a standard deviation of 1.16 years. Their IQ ranged…

  20. Residual-Mean Analysis of the Air-Sea Fluxes and Associated Oceanic Meridional Overturning

    DTIC Science & Technology

    2006-12-01

…the adiabatic component of the MOC, which is based entirely on the sea surface data. The coordinate system introduced in this study is somewhat…heat capacity of water. The technique utilizes the observational data based on meteorological re-analysis (density flux at the sea surface) and…Figure 8. Annual mean and temporal standard deviation of the zonally-averaged mixed-layer depth. The plotted data are based on Levitus 94 climatology

  1. Descriptive Statistics and Cluster Analysis for Extreme Rainfall in Java Island

    NASA Astrophysics Data System (ADS)

    E Komalasari, K.; Pawitan, H.; Faqih, A.

    2017-03-01

This study aims to describe the regional pattern of extreme rainfall based on maximum daily rainfall for the period 1983 to 2012 in Java Island. Descriptive statistical analysis was performed to obtain the centralization, variation and distribution of the maximum precipitation data. Mean and median are utilized to measure the central tendency of the data, while the interquartile range (IQR) and standard deviation are utilized to measure its variation. In addition, skewness and kurtosis are used to describe the shape of the distribution of the rainfall data. Cluster analysis using squared Euclidean distance and Ward's method is applied to perform regional grouping. The results of this study show that the mean (average) maximum daily rainfall in the Java region during the period 1983-2012 is around 80-181 mm, with medians between 75-160 mm and standard deviations between 17 and 82. The cluster analysis produces four clusters and shows that the western area of Java tends to have higher annual maxima of daily rainfall than the northern area, and greater variability in the annual maximum values.
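
    The descriptive statistics listed above can be computed in a few lines; this sketch uses NumPy/SciPy on a hypothetical 30-year series of annual-maximum daily rainfall (synthetic Gumbel draws, not the Java data):

```python
import numpy as np
from scipy import stats

# Hypothetical annual-maximum daily rainfall (mm) at one station, 1983-2012
# (30 values drawn from a Gumbel distribution, a common extreme-value model).
rng = np.random.default_rng(2)
rmax = rng.gumbel(loc=100.0, scale=20.0, size=30)

mean = rmax.mean()                                        # central tendency
median = float(np.median(rmax))
iqr = float(np.subtract(*np.percentile(rmax, [75, 25])))  # variation
sd = rmax.std(ddof=1)
skewness = float(stats.skew(rmax))                        # shape of distribution
kurt = float(stats.kurtosis(rmax))

print(mean, median, iqr, sd, skewness, kurt)
```

    Feeding per-station vectors of such statistics into hierarchical clustering (e.g., SciPy's `linkage` with the `ward` method) is one way to reproduce the regional-grouping step.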

  2. Low dose of rectal thiopental sodium for pediatric sedation in spiral computed tomography study.

    PubMed

    Akhlaghpoor, Shahram; Shabestari, Abbas Arjmand; Moghdam, Mohsen Shojaei

    2007-06-01

The aim of this study was to determine the effectiveness of a reduced new dose in rectal sedation by thiopental sodium for computed tomography (CT) diagnostic imaging. A total of 90 children (mean age, 24.21 months +/- 13.63 [standard deviation]) underwent spiral CT study after rectal administration of thiopental sodium injection solution. The new dose ranged from 15 to 25 mg/kg with a total dose of 350 mg. The percentage of success and adverse reactions were evaluated. Sedation was successful in 98% of infants and children, with an average time of 8.04 min +/- 6.87 (standard deviation). One case experienced desaturation, two experienced vomiting, 14 had rectal defecation, and two experienced hyperactivity. No prolonged sedation was observed. Rectal administration of thiopental sodium for pediatric CT imaging is safe and effective, even for the hyperextended position, with the new reduced dose of the drug. This procedure can easily be done in the CT department under the supervision of the radiologist.

  3. Assessing the stock market volatility for different sectors in Malaysia by using standard deviation and EWMA methods

    NASA Astrophysics Data System (ADS)

    Saad, Shakila; Ahmad, Noryati; Jaffar, Maheran Mohd

    2017-11-01

Nowadays, the study of volatility, especially in the stock market, has gained much attention from people engaged in the financial and economic sectors. Applications of the volatility concept in financial economics can be seen in option pricing, the estimation of financial derivatives, the hedging of investment risk, and so on. There are various ways to measure the volatility value; for this study, two methods are used: the simple standard deviation and the Exponentially Weighted Moving Average (EWMA). The focus of this study is to measure the volatility of three different sectors of business in Malaysia, called primary, secondary and tertiary, using both methods. The daily and annual volatilities of the different business sectors, based on stock prices for the period 1 January 2014 to December 2014, have been calculated in this study. The results show that the different patterns of the closing stock prices and returns give different volatility values when calculated using the simple method and the EWMA method.
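
    The two estimators compared in the study can be sketched as follows (plain Python, hypothetical prices; the RiskMetrics decay factor λ = 0.94 and the seeding of the EWMA recursion are assumptions, not taken from the paper):

```python
import math

# Daily closing prices (hypothetical).
prices = [100.0, 101.2, 100.5, 102.3, 101.8, 103.0, 102.1, 104.0]
returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]

# Simple (equally weighted) volatility: sample standard deviation of returns.
n = len(returns)
mean_r = sum(returns) / n
simple_vol = math.sqrt(sum((r - mean_r) ** 2 for r in returns) / (n - 1))

# EWMA volatility (RiskMetrics form): var_t = lam*var_{t-1} + (1-lam)*r_{t-1}^2.
lam = 0.94                 # commonly used decay factor; an assumption here
var = returns[0] ** 2      # seed the recursion with the first squared return
for r in returns[1:]:
    var = lam * var + (1.0 - lam) * r ** 2
ewma_vol = math.sqrt(var)

# Annualized figures (assuming ~250 trading days).
print(simple_vol * math.sqrt(250), ewma_vol * math.sqrt(250))
```

    The EWMA weights recent squared returns more heavily, so the two estimators diverge most when recent returns differ from the year's typical pattern.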

  4. Determination of patulin in apple juice by liquid chromatography: collaborative study.

    PubMed

    Brause, A R; Trucksess, M W; Thomas, F S; Page, S W

    1996-01-01

    An AOAC International-International Union of Pure and Applied Chemistry-International Fruit Juice Union (AOAC-IUPAC-IFJU) collaborative study was conducted to evaluate a liquid chromatographic (LC) procedure for determination of patulin in apple juice. Patulin is a mold metabolite found naturally in rotting apples. Patulin is extracted with ethyl acetate, treated with sodium carbonate solution, and determined by reversed-phase LC with UV detection at 254 or 276 nm. Water, water-tetrahydrofuran, or water-acetonitrile was used as mobile phase. Levels determined in spiked test samples were 20, 50, 100, and 200 micrograms/L. A test sample naturally contaminated at 31 micrograms/L was also included. Twenty-two collaborators in 10 countries analyzed 12 test samples of apple juice. Recoveries averaged 96%, with a range of 91-108%. Repeatability relative standard deviations (RSDr) ranged from 10.9 to 53.8%. The reproducibility relative standard deviation (RSDR) ranged from 15.1 to 68.8%. The LC method for determination of patulin in apple juice has been adopted first action by AOAC INTERNATIONAL.
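
    The repeatability (RSDr) and reproducibility (RSDR) statistics quoted above come from a collaborative-study variance decomposition; a hedged sketch in the spirit of the ISO 5725 one-way layout, on hypothetical duplicate results from four labs (not the study's data):

```python
import statistics

# Hypothetical patulin results (micrograms/L) for one test sample,
# duplicate analyses from four collaborating labs.
labs = [[19.0, 21.0], [24.0, 26.0], [17.0, 19.5], [22.0, 20.0]]

grand_mean = statistics.mean(v for lab in labs for v in lab)

# Repeatability variance: pooled within-lab variance.
s_r2 = statistics.mean(statistics.variance(lab) for lab in labs)

# Between-lab variance from the lab means (n = 2 replicates per lab).
lab_means = [statistics.mean(lab) for lab in labs]
s_L2 = max(statistics.variance(lab_means) - s_r2 / 2, 0.0)

# Reproducibility variance = within-lab + between-lab.
s_R2 = s_r2 + s_L2

RSDr = 100 * s_r2 ** 0.5 / grand_mean
RSDR = 100 * s_R2 ** 0.5 / grand_mean
print(round(RSDr, 1), round(RSDR, 1))
```

    RSDR is always at least as large as RSDr, which matches the pattern in the ranges reported above.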

  5. Statistical behavior of post-shock overpressure past grid turbulence

    NASA Astrophysics Data System (ADS)

    Sasoh, Akihiro; Harasaki, Tatsuya; Kitamura, Takuya; Takagi, Daisuke; Ito, Shigeyoshi; Matsuda, Atsushi; Nagata, Kouji; Sakai, Yasuhiko

    2014-09-01

When a shock wave ejected from the exit of a 5.4-mm inner diameter, stainless steel tube propagated through grid turbulence across a distance of 215 mm, which is 5-15 times larger than its integral length scale, and was normally incident onto a flat surface, the peak value of the post-shock overpressure at a shock Mach number of 1.0009 on the flat surface experienced a standard deviation of up to about 9% of its ensemble average. This value was more than 40 times larger than the dynamic pressure fluctuation corresponding to the maximum value of the root-mean-square velocity fluctuation. By varying the shock Mach number and the turbulence intensity, the statistical behavior of the peak overpressure was obtained after at least 500 runs were performed for each condition. The standard deviation of the peak overpressure due to the turbulence was almost proportional to the root-mean-square velocity fluctuation. Although the overpressure modulations at two points 200 mm apart were independent of each other, we observed a weak positive correlation between the peak overpressure difference and the relative arrival time difference.

  6. Correlation of processing and sintering variables with the strength and radiography of silicon nitride

    NASA Technical Reports Server (NTRS)

    Sanders, W. A.; Baaklini, G. Y.

    1986-01-01

A sintered Si3N4-SiO2-Y2O3 composition, NASA 6Y, was developed that reached four-point flexural average strength/standard deviation values of 857/36, 544/33, and 462/59 MPa at room temperature, 1200 °C, and 1370 °C, respectively. These strengths represented improvements of 56, 38, and 21 percent over baseline properties at the three test temperatures. At room temperature the standard deviation was reduced by over a factor of three. These accomplishments were realized by the iterative utilization of conventional x-radiography to characterize structural (density) uniformity as affected by systematic changes in powder processing and sintering parameters. Accompanying the improvement in mechanical properties was a change in the type of flaw causing failure, from a pore to a large columnar beta-Si3N4 grain typically 40 to 80 microns long, 10 to 30 microns wide, and with an aspect ratio of 5:1.

  7. Slant path L- and S-Band tree shadowing measurements

    NASA Technical Reports Server (NTRS)

    Vogel, Wolfhard J.; Torrence, Geoffrey W.

    1994-01-01

    This contribution presents selected results from simultaneous L- and S-Band slant-path fade measurements through a pecan, a cottonwood, and a pine tree employing a tower-mounted transmitter and dual-frequency receiver. A single, circularly-polarized antenna was used at each end of the link. The objective was to provide information for personal communications satellite design on the correlation of tree shadowing between frequencies near 1620 and 2500 MHz. Fades were measured along 10 m lateral distance with 5 cm spacing. Instantaneous fade differences between L- and S-Band exhibited normal distribution with means usually near 0 dB and standard deviations from 5.2 to 7.5 dB. The cottonwood tree was an exception, with 5.4 dB higher average fading at S- than at L-Band. The spatial autocorrelation reduced to near zero with lags of about 10 lambda. The fade slope in dB/MHz is normally distributed with zero mean and standard deviation increasing with fade level.

  8. Slant path L- and S-Band tree shadowing measurements

    NASA Astrophysics Data System (ADS)

    Vogel, Wolfhard J.; Torrence, Geoffrey W.

    1994-08-01

    This contribution presents selected results from simultaneous L- and S-Band slant-path fade measurements through a pecan, a cottonwood, and a pine tree employing a tower-mounted transmitter and dual-frequency receiver. A single, circularly-polarized antenna was used at each end of the link. The objective was to provide information for personal communications satellite design on the correlation of tree shadowing between frequencies near 1620 and 2500 MHz. Fades were measured along 10 m lateral distance with 5 cm spacing. Instantaneous fade differences between L- and S-Band exhibited normal distribution with means usually near 0 dB and standard deviations from 5.2 to 7.5 dB. The cottonwood tree was an exception, with 5.4 dB higher average fading at S- than at L-Band. The spatial autocorrelation reduced to near zero with lags of about 10 lambda. The fade slope in dB/MHz is normally distributed with zero mean and standard deviation increasing with fade level.

  9. Strain accumulation and rotation in western Oregon and southwestern Washington

    USGS Publications Warehouse

    Svarc, J.L.; Savage, J.C.; Prescott, W.H.; Murray, M.H.

    2002-01-01

Velocities of 75 geodetic monuments in western Oregon and southwestern Washington extending from the coast to more than 300 km inland have been determined from GPS surveys over the interval 1992-2000. The average standard deviation in each of the horizontal velocity components is about 1 mm yr-1. The observed velocity field is approximated by a combination of rigid rotation (Euler vector relative to interior North America: 43.40°N ± 0.14°, 119.33°W ± 0.28°, and 0.822° ± 0.057° Myr-1 clockwise; quoted uncertainties are standard deviations), uniform regional strain rate (εEE = -7.4 ± 1.8, εEN = -3.4 ± 1.0, and εNN = -5.0 ± 0.8 nstrain yr-1, extension reckoned positive), and a dislocation model representing subduction of the Juan de Fuca plate beneath North America. Subduction south of 44.5°N was represented by a 40-km-wide locked thrust and subduction north of 44.5°N by a 75-km-wide locked thrust.
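
    The rigid-rotation part of such a model predicts a site velocity v = ω × r from the Euler vector. A sketch using the pole and rate quoted above (the site coordinates and the sign convention for "clockwise" are illustrative assumptions):

```python
import math
import numpy as np

# Euler vector from the study: 43.40°N, 119.33°W, 0.822°/Myr clockwise
# rotation relative to interior North America.
lat_p, lon_p, omega_deg_per_myr = 43.40, -119.33, 0.822

R = 6371.0e3  # mean Earth radius [m]

def unit_vector(lat, lon):
    """Geocentric unit vector for a latitude/longitude in degrees."""
    la, lo = math.radians(lat), math.radians(lon)
    return np.array([math.cos(la) * math.cos(lo),
                     math.cos(la) * math.sin(lo),
                     math.sin(la)])

# Angular velocity in rad/yr; clockwise taken as a negative right-handed rate.
omega = -math.radians(omega_deg_per_myr) / 1.0e6 * unit_vector(lat_p, lon_p)

# Predicted rotation velocity at a hypothetical coastal Oregon site.
site = R * unit_vector(44.6, -124.0)
v = np.cross(omega, site)                    # m/yr
speed_mm_per_yr = float(np.linalg.norm(v)) * 1000.0
print(speed_mm_per_yr)
```

    Because the site lies only a few degrees from the pole, the predicted rotation speed is a few mm/yr, comparable to the observed signal and its ~1 mm/yr uncertainty.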

  10. Teacher expectations, classroom context, and the achievement gap.

    PubMed

    McKown, Clark; Weinstein, Rhona S

    2008-06-01

    In two independent datasets with 1872 elementary-aged children in 83 classrooms, Studies 1 and 2 examined the role of classroom context in moderating the relationship between child ethnicity and teacher expectations. For Study 1 overall and Study 2 mixed-grade classrooms, in ethnically diverse classrooms where students reported high levels of differential teacher treatment (PDT) towards high and low achieving students, teacher expectations of European American and Asian American students were between .75 and 1.00 standard deviations higher than teacher expectations of African American and Latino students with similar records of achievement. In highly diverse low-PDT classrooms in Study 1 and highly diverse low-PDT mixed-grade classrooms in Study 2, teachers held similar expectations for all students with similar records of achievement. Study 3 estimated the contribution of teacher expectations to the year-end ethnic achievement gap in high- and low-bias classrooms. In high-bias classrooms, teacher expectancy effects accounted for an average of .29 and up to .38 standard deviations of the year-end ethnic achievement gap.

  11. SU-E-I-22: A Comprehensive Investigation of Noise Variations Between the GE Discovery CT750 HD and GE LightSpeed VCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bache, S; Loyer, E; Stauduhar, P

    2015-06-15

Purpose: To quantify and compare the noise properties between two GE CT models, the Discovery CT750 HD (aka HD750) and LightSpeed VCT, with the overall goal of assessing the impact in clinical diagnostic practice. Methods: Daily QC data from a fleet of 9 CT scanners currently in clinical use were investigated, 5 HD750 and 4 VCT (over 600 total acquisitions for each scanner). A standard GE QC phantom was scanned daily using two sets of scan parameters with each scanner over 1 year. Water CT number and standard deviation were recorded from the image of the water section of the QC phantom. The standard GE QC scan parameters (Pitch = 0.516, 120 kVp, 0.4 s, 335 mA, Small Body SFOV, 5 mm thickness) and an in-house developed protocol (Axial, 120 kVp, 1.0 s, 240 mA, Head SFOV, 5 mm thickness) were used, with the Standard reconstruction algorithm. Noise was measured as the standard deviation in the center of the water phantom image. Inter-model noise distributions and tube output in mR/mAs were compared to assess any relative differences in noise properties. Results: With the in-house protocols, average noise for the five HD750 scanners was ∼9% higher than for the VCT scanners (5.8 vs 5.3). For the GE QC protocol, average noise with the HD750 scanners was ∼11% higher than with the VCT scanners (4.8 vs 4.3). This discrepancy in noise between the two models was found despite comparable tube output in mR/mAs, with the HD750 scanners having only ∼4% lower output (8.0 vs 8.3 mR/mAs). Conclusion: Using identical scan protocols, average noise in images from the HD750 group was higher than that from the VCT group. This confirms feedback from an institutional radiologist regarding grainier patient images from HD750 scanners. Further investigation is warranted to assess the noise texture and distribution, as well as the clinical impact.

  12. Comparison of Profile Total Ozone from SBUV (v8.6) with GOME-Type and Ground-Based Total Ozone for a 16-Year Period (1996 to 2011)

    NASA Technical Reports Server (NTRS)

    Chiou, E. W.; Bhartia, P. K.; McPeters, R. D.; Loyola, D. G.; Coldewey-Egbers, M.; Fioletov, V. E.; Van Roozendael, M.; Spurr, R.; Lerot, C.; Frith, S. M.

    2014-01-01

    This paper describes a comparison of the variability of total column ozone inferred from three independent multi-year data records, namely (i) Solar Backscatter Ultraviolet Instrument (SBUV) v8.6 profile total ozone, (ii) GTO (GOME-type total ozone), and (iii) ground-based total ozone, covering the 16-year overlap period (March 1996 through June 2011). Analyses are conducted on area-weighted zonal means for 0-30degS, 0-30degN, 50-30degS, and 30-60degN. It has been found that, on average, the differences in monthly zonal mean total ozone vary between -0.3 and 0.8% and are well within 1%. For GTO minus SBUV, the standard deviations and ranges (maximum minus minimum) of the differences in monthly zonal mean total ozone vary between 0.6-0.7% and 2.8-3.8%, respectively, depending on the latitude band. The corresponding standard deviations and ranges for the differences in monthly zonal mean anomalies show values between 0.4-0.6% and 2.2-3.5%. The standard deviations and ranges of the ground-based minus SBUV differences, for both monthly zonal means and anomalies, are larger by a factor of 1.4-2.9 than those for GTO minus SBUV. The ground-based zonal means show larger scattering of monthly data than the satellite-based records; the differences in scattering are significantly reduced if seasonal zonal averages are analyzed instead. The trends of the differences GTO minus SBUV and ground-based minus SBUV are found to vary between -0.04 and 0.1%/yr (-0.1 and 0.3DU/yr). These negligibly small trends provide strong evidence that there are no significant time-dependent differences among these multi-year total ozone data records. Analyses of the annual deviations from the pre-1980 level indicate that, for the 15-year period 1996 to 2010, all three data records show a gradual increase at 30-60degN from -5% in 1996 to -2% in 2010. In contrast, at 50-30degS and 30degS-30degN there has been a leveling off in the 15 years after 1996. The deviations inferred from GTO and SBUV agree within 1%, with a slight increase in the differences over the period 1996-2010.
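
    The comparison statistics used throughout the abstract - the standard deviation and range (maximum minus minimum) of monthly percent differences between two co-located records - can be sketched as follows; the function name and toy inputs are illustrative, not from the paper:

```python
import statistics as st

def diff_stats(record_a, record_b):
    """Std dev and range (max - min) of the monthly percent differences
    100*(a - b)/b between two co-located zonal-mean ozone records."""
    diffs = [100.0 * (a - b) / b for a, b in zip(record_a, record_b)]
    return st.pstdev(diffs), max(diffs) - min(diffs)

# Toy monthly zonal means (DU); the real inputs span 1996-2011.
gto  = [300.0, 303.0, 297.0, 301.0]
sbuv = [300.0, 300.0, 300.0, 300.0]
sd, span = diff_stats(gto, sbuv)   # percent units
```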

  13. Creation of three-dimensional craniofacial standards from CBCT images

    NASA Astrophysics Data System (ADS)

    Subramanyan, Krishna; Palomo, Martin; Hans, Mark

    2006-03-01

    Low-dose three-dimensional Cone Beam Computed Tomography (CBCT) is becoming increasingly popular in the clinical practice of dental medicine. Two-dimensional Bolton Standards of dentofacial development are routinely used to identify deviations from normal craniofacial anatomy. With the advent of three-dimensional CBCT imaging, we propose a set of methods to extend these 2D Bolton Standards to anatomically correct, surface-based 3D standards that allow analysis of morphometric changes seen in the craniofacial complex. To create 3D surface standards, we implemented a series of steps: 1) converting bi-plane 2D tracings into sets of splines; 2) converting the 2D spline curves from bi-plane projections into 3D space curves; 3) creating labeled templates of facial and skeletal shapes; and 4) creating 3D average-surface Bolton standards. We used datasets from patients scanned with a Hitachi MercuRay CBCT scanner, which provides high-resolution, isotropic CT volume images, together with digitized Bolton Standards from ages 3 to 18 years (lateral and frontal male, female, and average tracings), and converted them into facial and skeletal 3D space curves. This new 3D standard will help in assessing shape variations due to aging in the young population and will provide a reference for correcting facial anomalies in dental medicine.
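
    Step 2 - lifting two bi-plane 2D tracings into a single 3D space curve - can be sketched under the assumption that the lateral view supplies (x, z), the frontal view supplies (y, z), and both curves are monotone in the shared vertical coordinate z. The function name and sampling scheme are hypothetical, not the authors' implementation:

```python
import numpy as np

def biplane_to_3d(lateral_xz, frontal_yz, n=100):
    """Merge a lateral (x, z) and a frontal (y, z) curve sample into a
    single 3D space curve by resampling both on a common z parameter."""
    lateral_xz = np.asarray(lateral_xz, dtype=float)
    frontal_yz = np.asarray(frontal_yz, dtype=float)
    # Common z range covered by both projections (z must be increasing).
    z = np.linspace(max(lateral_xz[:, 1].min(), frontal_yz[:, 1].min()),
                    min(lateral_xz[:, 1].max(), frontal_yz[:, 1].max()), n)
    x = np.interp(z, lateral_xz[:, 1], lateral_xz[:, 0])
    y = np.interp(z, frontal_yz[:, 1], frontal_yz[:, 0])
    return np.column_stack([x, y, z])
```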

  14. Patient-specific targeting guides compared with traditional instrumentation for glenoid component placement in shoulder arthroplasty: a multi-surgeon study in 70 arthritic cadaver specimens.

    PubMed

    Throckmorton, Thomas W; Gulotta, Lawrence V; Bonnarens, Frank O; Wright, Stephen A; Hartzell, Jeffrey L; Rozzi, William B; Hurst, Jason M; Frostick, Simon P; Sperling, John W

    2015-06-01

    The purpose of this study was to compare the accuracy of patient-specific guides for total shoulder arthroplasty (TSA) with traditional instrumentation in arthritic cadaver shoulders. We hypothesized that the patient-specific guides would place components more accurately than standard instrumentation. Seventy cadaver shoulders with radiographically confirmed arthritis were randomized in equal groups to 5 surgeons of varying experience levels who were not involved in development of the patient-specific guidance system. Specimens were then randomized to patient-specific guides based on computed tomography scanning, standard instrumentation, and anatomic TSA or reverse TSA. Variances in version or inclination of more than 10° and more than 4 mm in starting point were considered indications of significant component malposition. TSA glenoid components placed with patient-specific guides averaged 5° of deviation from the intended position in version and 3° in inclination; those with standard instrumentation averaged 8° of deviation in version and 7° in inclination. These differences were significant for version (P = .04) and inclination (P = .01). Multivariate analysis of variance to compare the overall accuracy for the entire cohort (TSA and reverse TSA) revealed patient-specific guides to be significantly more accurate (P = .01) for the combined vectors of version and inclination. Patient-specific guides also had fewer instances of significant component malposition than standard instrumentation did. Patient-specific targeting guides were more accurate than traditional instrumentation and had fewer instances of component malposition for glenoid component placement in this multi-surgeon cadaver study of arthritic shoulders. Long-term clinical studies are needed to determine if these improvements produce improved functional outcomes. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  15. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 1: January

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-07-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of January. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  16. The Statistical Differences Between the Gridded Temperature Datasets, and its Implications for Stochastic Modelling

    NASA Astrophysics Data System (ADS)

    Fredriksen, H. B.; Løvsletten, O.; Rypdal, M.; Rypdal, K.

    2014-12-01

    Several research groups around the world collect instrumental temperature data and combine them in different ways to obtain global gridded temperature fields. The three best-known datasets are HadCRUT4, produced by the Climatic Research Unit and the Met Office Hadley Centre in the UK; one produced by NASA GISS; and one produced by NOAA. Recently, Berkeley Earth has also developed a gridded dataset. All four are compared in our analysis. The statistical properties we focus on are the standard deviation and the Hurst exponent. These two parameters are sufficient to describe the temperatures as long-range memory stochastic processes: the standard deviation describes the general fluctuation level, while the Hurst exponent relates the strength of the long-term variability to the strength of the short-term variability. A higher Hurst exponent means that the slow variations are stronger relative to the fast ones, and that the autocovariance function has a heavier tail; hence the Hurst exponent gives us information about the persistence, or memory, of the process. We use these data to show that data averaged over a larger area exhibit higher Hurst exponents and lower variance than data averaged over a smaller area, which provides information about the relationship between temporal and spatial correlations of the temperature fluctuations. Interpolation in space has some similarities with averaging over space, although interpolation is weighted more toward the measurement locations. We demonstrate that the degree of spatial interpolation used can explain some of the differences observed between the variances and memory exponents computed from the various datasets.
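
    The scaling idea behind the Hurst exponent - block-averaged data have lower variance, with the decay rate set by H - can be sketched with the generic aggregated-variance estimator below. This is a textbook estimator under the relation Var(block mean of size m) ~ m^(2H-2), not the authors' code:

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Aggregated-variance estimate of the Hurst exponent H:
    for a long-memory process, Var(block mean of size m) ~ m**(2H - 2),
    so H is recovered from the slope of log-variance vs. log-m."""
    x = np.asarray(x, dtype=float)
    logm, logv = [], []
    for m in block_sizes:
        nblocks = len(x) // m
        means = x[:nblocks * m].reshape(nblocks, m).mean(axis=1)
        logm.append(np.log(m))
        logv.append(np.log(means.var()))
    slope = np.polyfit(logm, logv, 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(0)
h_white = hurst_aggvar(rng.standard_normal(100_000))  # close to 0.5
```

For uncorrelated (white) noise the block-mean variance falls as 1/m, giving a slope of -1 and hence H near 0.5; persistent long-memory series give H above 0.5.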

  17. Meta-analysis of action video game impact on perceptual, attentional, and cognitive skills.

    PubMed

    Bediou, Benoit; Adams, Deanne M; Mayer, Richard E; Tipton, Elizabeth; Green, C Shawn; Bavelier, Daphne

    2018-01-01

    The ubiquity of video games in today's society has led to significant interest in their impact on the brain and behavior and in the possibility of harnessing games for good. The present meta-analyses focus on one specific game genre that has been of particular interest to the scientific community-action video games, and cover the period 2000-2015. To assess the long-lasting impact of action video game play on various domains of cognition, we first consider cross-sectional studies that inform us about the cognitive profile of habitual action video game players, and document a positive average effect of about half a standard deviation (g = 0.55). We then turn to long-term intervention studies that inform us about the possibility of causally inducing changes in cognition via playing action video games, and show a smaller average effect of a third of a standard deviation (g = 0.34). Because only intervention studies using other commercially available video game genres as controls were included, this latter result highlights the fact that not all games equally impact cognition. Moderator analyses indicated that action video game play robustly enhances the domains of top-down attention and spatial cognition, with encouraging signs for perception. Publication bias remains, however, a threat with average effects in the published literature estimated to be 30% larger than in the full literature. As a result, we encourage the field to conduct larger cohort studies and more intervention studies, especially those with more than 30 hours of training. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
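
    The effect sizes quoted (g = 0.55, g = 0.34) are Hedges' g, a standardized mean difference with a small-sample bias correction. A minimal sketch with illustrative summary statistics (the numbers below are made up, not from the meta-analysis):

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Hedges' g: Cohen's d scaled by the small-sample correction J."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)   # bias-correction factor
    return j * d

# Illustrative: one group scores half a pooled SD above the other.
g = hedges_g(105.0, 100.0, 10.0, 10.0, 50, 50)   # slightly below 0.5
```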

  18. WE-H-BRC-05: Catastrophic Error Metrics for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, S; Molloy, J

    Purpose: Intuitive evaluation of complex radiotherapy treatments is impractical, while data transfer anomalies create the potential for catastrophic treatment delivery errors. Contrary to prevailing wisdom, logical scrutiny can be applied to patient-specific machine settings. Such tests can be automated, applied at the point of treatment delivery and can be dissociated from prior states of the treatment plan, potentially revealing errors introduced early in the process. Methods: Analytical metrics were formulated for conventional and intensity modulated RT (IMRT) treatments. These were designed to assess consistency between monitor unit settings, wedge values, prescription dose and leaf positioning (IMRT). Institutional metric averages for 218 clinical plans were stratified over multiple anatomical sites. Treatment delivery errors were simulated using a commercial treatment planning system and metric behavior assessed via receiver-operator-characteristic (ROC) analysis. A positive result was returned if the erred plan metric value exceeded a given number of standard deviations, e.g. 2. The finding was declared true positive if the dosimetric impact exceeded 25%. ROC curves were generated over a range of metric standard deviations. Results: Data for the conventional treatment metric indicated standard deviations of 3%, 12%, 11%, 8%, and 5% for brain, pelvis, abdomen, lung and breast sites, respectively. Optimum error declaration thresholds yielded true positive rates (TPR) between 0.7 and 1, and false positive rates (FPR) between 0 and 0.2. Two proposed IMRT metrics possessed standard deviations of 23% and 37%. The superior metric returned TPR and FPR of 0.7 and 0.2, respectively, when both leaf position and MUs were modelled. Isolation to only leaf position errors yielded TPR and FPR values of 0.9 and 0.1. Conclusion: Logical tests can reveal treatment delivery errors and prevent large, catastrophic errors. Analytical metrics are able to identify errors in monitor units, wedging and leaf positions with favorable sensitivity and specificity. Supported in part by Varian.
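
    The threshold test the abstract describes - flag a plan when its metric deviates from the institutional mean by more than k standard deviations, and count the flag as a true positive when the simulated dosimetric impact exceeds 25% - can be sketched as a single ROC operating point. All names and data below are illustrative:

```python
import statistics as st

def roc_point(metrics, impacts, baseline, k, impact_threshold=0.25):
    """One ROC operating point for an error-declaration threshold of
    k standard deviations around the institutional baseline mean."""
    mu, sd = st.mean(baseline), st.stdev(baseline)
    tp = fp = fn = tn = 0
    for value, impact in zip(metrics, impacts):
        flagged = abs(value - mu) > k * sd       # metric exceeds threshold
        severe = impact > impact_threshold       # dosimetric impact > 25%
        if flagged and severe:
            tp += 1
        elif flagged:
            fp += 1
        elif severe:
            fn += 1
        else:
            tn += 1
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr

baseline = [1.00, 1.10, 0.90, 1.00, 1.05, 0.95]  # error-free plan metrics
tpr, fpr = roc_point([1.50, 1.00], [0.40, 0.00], baseline, k=2.0)
```

Sweeping k and plotting (fpr, tpr) pairs traces out the ROC curve used to pick the optimum declaration threshold.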

  19. Comparing Standard Deviation Effects across Contexts

    ERIC Educational Resources Information Center

    Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.

    2017-01-01

    Studies using tests scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…

  20. Quantifying nonergodicity in nonautonomous dissipative dynamical systems: An application to climate change

    NASA Astrophysics Data System (ADS)

    Drótos, Gábor; Bódai, Tamás; Tél, Tamás

    2016-08-01

    In nonautonomous dynamical systems, like in climate dynamics, an ensemble of trajectories initiated in the remote past defines a unique probability distribution, the natural measure of a snapshot attractor, for any instant of time, but this distribution typically changes in time. In cases with an aperiodic driving, temporal averages taken along a single trajectory would differ from the corresponding ensemble averages even in the infinite-time limit: ergodicity does not hold. It is worth considering this difference, which we call the nonergodic mismatch, by taking time windows of finite length for temporal averaging. We point out that the probability distribution of the nonergodic mismatch is qualitatively different in ergodic and nonergodic cases: its average is zero and typically nonzero, respectively. A main conclusion is that the difference of the average from zero, which we call the bias, is a useful measure of nonergodicity, for any window length. In contrast, the standard deviation of the nonergodic mismatch, which characterizes the spread between different realizations, exhibits a power-law decrease with increasing window length in both ergodic and nonergodic cases, and this implies that temporal and ensemble averages differ in dynamical systems with finite window lengths. It is the average modulus of the nonergodic mismatch, which we call the ergodicity deficit, that represents the expected deviation from fulfilling the equality of temporal and ensemble averages. As an important finding, we demonstrate that the ergodicity deficit cannot be reduced arbitrarily in nonergodic systems. We illustrate via a conceptual climate model that the nonergodic framework may be useful in Earth system dynamics, within which we propose the measure of nonergodicity, i.e., the bias, as an order-parameter-like quantifier of climate change.
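
    The three quantifiers defined above - the bias (average mismatch), the spread (its standard deviation), and the ergodicity deficit (its average modulus) - can be sketched for a finite ensemble. Here the mismatch is taken as each member's time average over the window minus the ensemble average at the window's final instant, which is one plausible reading of the definitions, not the authors' code:

```python
import numpy as np

def nonergodicity_measures(ensemble, window):
    """Bias, spread, and ergodicity deficit of the nonergodic mismatch.
    ensemble: array of shape (n_members, n_times)."""
    seg = ensemble[:, -window:]
    time_avgs = seg.mean(axis=1)           # temporal average per member
    ens_avg = ensemble[:, -1].mean()       # ensemble average, final instant
    mismatch = time_avgs - ens_avg
    bias = mismatch.mean()                 # nonzero signals nonergodicity
    spread = mismatch.std()                # shrinks with window length
    deficit = np.abs(mismatch).mean()      # expected |time avg - ens avg|
    return bias, spread, deficit

# A drifting (trend-driven, nonergodic-like) toy ensemble: every member
# follows the same linear trend, so the bias is nonzero.
trend = np.tile(np.arange(4.0), (3, 1))
bias, spread, deficit = nonergodicity_measures(trend, window=4)
```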

  1. Special electronic distance meter calibration for precise engineering surveying industrial applications

    NASA Astrophysics Data System (ADS)

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf

    2015-05-01

    All surveying instruments and their measurements suffer from errors. To refine the results, it is necessary either to use procedures that restrict the influence of instrument errors on the measured values or to apply numerical corrections. In precise engineering surveying for industrial applications, the accuracy of distances, usually measured over relatively short ranges, is a key parameter limiting the accuracy of the derived values (coordinates, etc.). To determine the size of the systematic and random errors of measured distances, tests were made with the idea of suppressing the random error by averaging repeated measurements, and of reducing the influence of systematic errors by identifying their absolute size on an absolute baseline realized in the geodetic laboratory at the Faculty of Civil Engineering, CTU in Prague. Sixteen concrete pillars with forced centerings were set up, and the absolute distances between the points were determined with a standard deviation of 0.02 mm using a Leica Absolute Tracker AT401. For any distance measured by the calibrated instruments (up to the length of the testing baseline, i.e. 38.6 m), the error correction of the distance meter can now be determined in two ways: first by interpolation on the raw data, or second by using a correction function derived via FFT. The quality of this calibration and correction procedure was tested experimentally on three instruments (Trimble S6 HP, Topcon GPT-7501, Trimble M3) against the Leica Absolute Tracker AT401. The correction procedure reduced the standard deviation of the measured distances significantly, to less than 0.6 mm. For the Topcon GPT-7501 the nominal standard deviation is 2 mm; 2.8 mm was achieved without corrections and 0.55 mm after corrections. For the Trimble M3 the nominal standard deviation is 3 mm; 1.1 mm was achieved without corrections and 0.58 mm after corrections. Finally, for the Trimble S6 the nominal standard deviation is 1 mm; 1.2 mm was achieved without corrections and 0.51 mm after corrections. The proposed calibration and correction procedure is, in our opinion, very suitable for increasing the accuracy of electronic distance measurement and allows a common surveying instrument to achieve uncommonly high precision.
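
    The first correction option (interpolation on the raw baseline data) amounts to subtracting an interpolated instrument error from each measured distance. A minimal sketch with a made-up calibration table; a real table would hold the errors observed against the AT401 baseline up to 38.6 m:

```python
import numpy as np

def correct_distance(measured, cal_distances, cal_errors):
    """Subtract the distance-meter error interpolated from baseline
    calibration pairs (true distance, observed error)."""
    return measured - np.interp(measured, cal_distances, cal_errors)

# Illustrative calibration table (metres, metres).
cal_d = [0.0, 10.0, 20.0, 38.6]
cal_e = [0.0, 0.002, 0.001, 0.0]
corrected = correct_distance(5.0, cal_d, cal_e)   # 5.0 - 0.001 = 4.999
```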

  2. Relationship between platelet count and hemodialysis membranes

    PubMed Central

    Nasr, Rabih; Saifan, Chadi; Barakat, Iskandar; Azzi, Yorg Al; Naboush, Ali; Saad, Marc; Sayegh, Suzanne El

    2013-01-01

    Background One factor associated with poor outcomes in hemodialysis patients is exposure to a foreign membrane. Older membranes are very bioincompatible and increase complement activation, cause leukocytosis by activating circulating factors, which sequesters leukocytes in the lungs, and activates platelets. Recently, newer membranes have been developed that were designed to be more biocompatible. We tested if the different “optiflux” hemodialysis membranes had different effects on platelet levels. Methods Ninety-nine maintenance hemodialysis patients with no known systemic or hematologic diseases affecting their platelets had blood drawn immediately prior to, 90 minutes into, and immediately following their first hemodialysis session of the week. All patients were dialyzed using a Fresenius Medical Care Optiflux polysulfone membrane F160, F180, or F200 (polysulfone synthetic dialyzer membranes, 1.6 m2, 1.8 m2, and 2.0 m2 surface area, respectively, electron beam sterilized). Platelet counts were measured from each sample by analysis using a CBC analyzer. Results The average age of the patients was 62.7 years; 36 were female and 63 were male. The mean platelet count pre, mid, and post dialysis was 193 (standard deviation ±74.86), 191 (standard deviation ±74.67), and 197 (standard deviation ±79.34) thousand/mm3, respectively, with no statistical differences. Conclusion Newer membranes have no significant effect on platelet count. This suggests that they are, in fact, more biocompatible than their predecessors and may explain their association with increased survival. PMID:23983482

  3. Discovery of Finely Structured Dynamic Solar Corona Observed in the Hi-C Telescope

    NASA Technical Reports Server (NTRS)

    Winebarger, A.; Cirtain, J.; Golub, L.; DeLuca, E.; Savage, S.; Alexander, C.; Schuler, T.

    2014-01-01

    In the summer of 2012, the High-resolution Coronal Imager (Hi-C) flew aboard a NASA sounding rocket and collected the highest spatial resolution images ever obtained of the solar corona. One of the goals of the Hi-C flight was to characterize the substructure of the solar corona. We therefore examine how the intensity scales from AIA resolution to Hi-C resolution. For each low-resolution pixel, we calculate the standard deviation in the contributing high-resolution pixel intensities and compare that to the expected standard deviation calculated from the noise. If these numbers are approximately equal, the corona can be assumed to be smoothly varying, i.e. have no evidence of substructure in the Hi-C image to within Hi-C's ability to measure it given its throughput and readout noise. A standard deviation much larger than the noise value indicates the presence of substructure. We calculate these values for each low-resolution pixel for each frame of the Hi-C data. On average, 70 percent of the pixels in each Hi-C image show no evidence of substructure. The locations where substructure is prevalent are the moss regions and regions of sheared magnetic field. We also find that the level of substructure varies significantly over the roughly 160 s of the Hi-C data analyzed here. This result indicates that the finely structured corona is concentrated in regions of heating and is highly time dependent.
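
    The per-pixel test described here - compare the standard deviation of the contributing high-resolution intensities with the expected noise level - can be sketched as follows. The block size and the threshold factor k are illustrative assumptions; the abstract does not state them:

```python
import numpy as np

def substructure_mask(hi_res, noise_sd, factor=4, k=3.0):
    """For each low-resolution pixel (a factor-by-factor block of
    high-resolution pixels), flag substructure when the std dev of the
    contributing intensities exceeds k times the expected noise level."""
    h, w = hi_res.shape
    blocks = hi_res[:h - h % factor, :w - w % factor]
    blocks = blocks.reshape(h // factor, factor, w // factor, factor)
    block_sd = blocks.std(axis=(1, 3))    # std dev within each block
    return block_sd > k * noise_sd

img = np.zeros((8, 8))
img[:4, :4] = np.arange(16.0).reshape(4, 4)   # one structured block
mask = substructure_mask(img, noise_sd=0.1)   # flags only that block
```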

  4. DISCOVERY OF FINELY STRUCTURED DYNAMIC SOLAR CORONA OBSERVED IN THE Hi-C TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winebarger, Amy R.; Cirtain, Jonathan; Savage, Sabrina

    In the Summer of 2012, the High-resolution Coronal Imager (Hi-C) flew on board a NASA sounding rocket and collected the highest spatial resolution images ever obtained of the solar corona. One of the goals of the Hi-C flight was to characterize the substructure of the solar corona. We therefore examine how the intensity scales from AIA resolution to Hi-C resolution. For each low-resolution pixel, we calculate the standard deviation in the contributing high-resolution pixel intensities and compare that to the expected standard deviation calculated from the noise. If these numbers are approximately equal, the corona can be assumed to be smoothly varying, i.e., have no evidence of substructure in the Hi-C image to within Hi-C's ability to measure it given its throughput and readout noise. A standard deviation much larger than the noise value indicates the presence of substructure. We calculate these values for each low-resolution pixel for each frame of the Hi-C data. On average, 70% of the pixels in each Hi-C image show no evidence of substructure. The locations where substructure is prevalent are the moss regions and regions of sheared magnetic field. We also find that the level of substructure varies significantly over the roughly 160 s of the Hi-C data analyzed here. This result indicates that the finely structured corona is concentrated in regions of heating and is highly time dependent.

  5. The gait standard deviation, a single measure of kinematic variability.

    PubMed

    Sangeux, Morgan; Passmore, Elyse; Graham, H Kerr; Tirosh, Oren

    2016-05-01

    Measurement of gait kinematic variability provides relevant clinical information in certain conditions affecting the neuromotor control of movement. In this article, we present a measure of overall gait kinematic variability, GaitSD, based on a combination of waveform standard deviations. The waveform standard deviation is the common numerator in established indices of variability such as Kadaba's coefficient of multiple correlation or Winter's waveform coefficient of variation. Gait data were collected on typically developing children aged 6-17 years. A large number of strides was captured for each child, on average 45 (SD: 11) for kinematics and 19 (SD: 5) for kinetics. We used a bootstrap procedure to determine the precision of GaitSD as a function of the number of strides processed. We compared the within-subject (stride-to-stride) variability with the between-subject variability of the normative pattern. Finally, we investigated the correlation between age and gait kinematic, kinetic and spatio-temporal variability. In typically developing children, the relative precision of GaitSD was 10% as soon as 6 strides were captured. As a comparison, spatio-temporal parameters required 30 strides to reach the same relative precision. The ratio of stride-to-stride to normative pattern variability was smaller in kinematic variables (the smallest for pelvic tilt, 28%) than in kinetic and spatio-temporal variables (the largest for normalised stride length, 95%). GaitSD had a strong, negative correlation with age. We show that gait consistency may stabilise only at, or after, skeletal maturity. Copyright © 2016 Elsevier B.V. All rights reserved.
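
    For a single gait variable, the waveform standard deviation and its combination into one number can be sketched as below. This is a hedged, GaitSD-style pooling (stride-to-stride SD at each point of the cycle, combined by root-mean-square); the paper additionally pools several kinematic variables:

```python
import numpy as np

def gait_sd(strides):
    """Single-number waveform variability for one gait variable.
    strides: array of shape (n_strides, n_points), each row one stride
    resampled over the gait cycle."""
    point_sd = np.std(strides, axis=0, ddof=1)    # SD across strides
    return float(np.sqrt(np.mean(point_sd ** 2))) # RMS over the cycle
```

Usage on two toy "strides" of constant 0 and 2 degrees gives the expected per-point SD of sqrt(2).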

  6. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 7: July

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-07-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of July. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  7. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 10: October

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-07-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of October. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  8. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 3: March

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-11-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analysis produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of March. Included are global analyses of: (1) Mean Temperature Standard Deviation; (2) Mean Geopotential Height Standard Deviation; (3) Mean Density Standard Deviation; (4) Height and Vector Standard Deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean Dew Point Standard Deviation for levels 1000 through 30 mb; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  9. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 2: February

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-09-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of February. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  10. Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 4: April

    NASA Astrophysics Data System (ADS)

    Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.

    1989-07-01

    The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of April. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.

  11. Precision analysis for standard deviation measurements of immobile single fluorescent molecule images.

    PubMed

    DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M

    2010-03-29

    Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image as a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision over a large range of experimental parameters.
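
    The measured quantity itself - the standard deviation of a single-molecule intensity profile - can be computed from the intensity-weighted second central moment. A minimal 1D sketch on a noiseless, background-free profile (the real analysis handles photon and background noise):

```python
import numpy as np

def profile_sd(intensity, pixel_size=1.0):
    """Standard deviation of a background-free 1D intensity profile,
    via its intensity-weighted second central moment."""
    x = np.arange(len(intensity)) * pixel_size
    w = intensity / intensity.sum()
    mu = (w * x).sum()                       # intensity-weighted centroid
    return float(np.sqrt((w * (x - mu) ** 2).sum()))

# A noiseless Gaussian profile of width 2 pixels, centred at pixel 10.
x = np.arange(21)
profile = np.exp(-(x - 10.0) ** 2 / (2 * 2.0 ** 2))
sd = profile_sd(profile)                     # close to 2.0
```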

  12. The study of trace metal absorption using stable isotopes and mass spectrometry

    NASA Astrophysics Data System (ADS)

    Fennessey, P. V.; Lloyd-Kindstrand, L.; Hambidge, K. M.

    1991-12-01

    The absorption and excretion of zinc stable isotopes have been followed in more than 120 human subjects. The isotope enrichment determinations were made using a standard VG 7070E HF mass spectrometer. A fast atom bombardment (FAB) gun was used to form the ions from a dry residue on a pure silver probe tip. Isotope ratio measurements were found to have a precision of better than 2% (relative standard deviation) and required a sample size of 1-5 μg. The average true absorption of zinc was found to be 73 ± 12% (2σ) when the metal was taken in a fasting state. This absorption figure was corrected for tracer that had been absorbed and secreted into the gastrointestinal (GI) tract over the time course of the study. The average time for a majority of the stable isotope tracer to pass through the GI tract was 4.7 ± 1.9 (2σ) days.

  13. Birth weight standardized to gestational age and intelligence in young adulthood: a register-based birth cohort study of male siblings.

    PubMed

    Eriksen, Willy; Sundet, Jon M; Tambs, Kristian

    2010-09-01

    The authors aimed to determine the relation between birth-weight variations within the normal range and intelligence in young adulthood. A historical birth cohort study was conducted. Data from the Medical Birth Register of Norway were linked with register data from the Norwegian National Conscript Service. The sample comprised 52,408 sibships of full brothers who were born singletons at 37-41 completed weeks' gestation during 1967-1984 in Norway and were intelligence-tested at the time of mandatory military conscription. Generalized estimating equations were used to fit population-averaged panel data models. The analyses showed that in men with birth weights within the 10th-90th percentile range, a within-family difference of 1 standard deviation in birth weight standardized to gestational age was associated with a within-family difference of 0.07 standard deviation (99% confidence interval: 0.03, 0.09) in intelligence score, after adjustment for a range of background factors. There was no significant between-family association after adjustment for background factors. In Norwegian males, normal variations in intrauterine growth are associated with differences in intelligence in young adulthood. This association is probably not due to confounding by familial and parental characteristics.

  14. Serrated kiln sticks and top load substantially reduce warp in southern pine studs dried at 240°F

    Treesearch

    Peter Koch

    1974-01-01

    Sharply toothed aluminum kiln sticks pressed into 2 by 4's cut from veneer cores, with a clamping force of 50 to 200 pounds per stick-pair per stud, significantly reduced warp from that observed in matched studs stacked on smooth sticks with a top load of 10 pounds per stick-pair per stud. When dried in 24 hours to an average MC of 8.1 percent (standard deviation...

  15. Wind speed statistics for Goldstone, California, anemometer sites

    NASA Technical Reports Server (NTRS)

    Berg, M.; Levy, R.; Mcginness, H.; Strain, D.

    1981-01-01

    An exploratory wind survey at an antenna complex was summarized statistically for application to future windmill designs. Data were collected at six locations from a total of 10 anemometers. Statistics include means, standard deviations, cubes, pattern factors, correlation coefficients, and exponents for power law profile of wind speed. Curves presented include: mean monthly wind speeds, moving averages, and diurnal variation patterns. It is concluded that three of the locations have sufficiently strong winds to justify consideration for windmill sites.

  17. Inflation Accounting Methods and their Effectiveness

    DTIC Science & Technology

    1992-06-01

    security is measured by the standard deviation of its returns in the past periods and is reflected in the security's market price. The Capital Asset Pricing... purchasing power should be limited to items which are used by an average consumer. Economists tend to perceive the general price level as the cost of living... accounting. Two common measures of business performance are income and rate of return on capital. Since depreciation charges for long-lived assets do

  18. Impact of traffic oscillations on freeway crash occurrences.

    PubMed

    Zheng, Zuduo; Ahn, Soyoung; Monsere, Christopher M

    2010-03-01

    Traffic oscillations are typical features of congested traffic flow that are characterized by recurring decelerations followed by accelerations (stop-and-go driving). The negative environmental impacts of these oscillations are widely accepted, but their impact on traffic safety has been debated. This paper describes the impact of freeway traffic oscillations on traffic safety. This study employs a matched case-control design using high-resolution traffic and crash data from a freeway segment. Traffic conditions prior to each crash were taken as cases, while traffic conditions during the same periods on days without crashes were taken as controls. These were also matched by presence of congestion, geometry and weather. A total of 82 cases and about 80,000 candidate controls were extracted from more than three years of data from 2004 to 2007. Conditional logistic regression models were developed based on the case-control samples. To verify consistency in the results, 20 different sets of controls were randomly extracted from the candidate pool for varying control-case ratios. The results reveal that the standard deviation of speed (thus, oscillations) is a significant variable, with an average odds ratio of about 1.08. This implies that the likelihood of a (rear-end) crash increases by about 8% with an additional unit increase in the standard deviation of speed. The average traffic states prior to crashes were less significant than the speed variations in congestion. Published by Elsevier Ltd.
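
The odds-ratio interpretation above can be checked with one line of arithmetic: an odds ratio OR per unit of speed standard deviation compounds as OR**k over a k-unit increase. A sketch using the reported value of about 1.08:

```python
# Hypothetical illustration of odds-ratio compounding (OR ~ 1.08 per unit
# increase in the standard deviation of speed, as reported above).
OR = 1.08
increase_per_unit = OR - 1       # ~8% higher crash odds per unit increase
increase_five_units = OR**5 - 1  # compounded over a five-unit increase
```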

  19. Model and parametric uncertainty in source-based kinematic models of earthquake ground motion

    USGS Publications Warehouse

    Hartzell, Stephen; Frankel, Arthur; Liu, Pengcheng; Zeng, Yuehua; Rahman, Shariftur

    2011-01-01

    Four independent ground-motion simulation codes are used to model the strong ground motion for three earthquakes: 1994 Mw 6.7 Northridge, 1989 Mw 6.9 Loma Prieta, and 1999 Mw 7.5 Izmit. These 12 sets of synthetics are used to make estimates of the variability in ground-motion predictions. In addition, ground-motion predictions over a grid of sites are used to estimate parametric uncertainty for changes in rupture velocity. We find that the combined model uncertainty and random variability of the simulations is in the same range as the variability of regional empirical ground-motion data sets. The majority of the standard deviations lie between 0.5 and 0.7 natural-log units for response spectra and 0.5 and 0.8 for Fourier spectra. The estimate of model epistemic uncertainty, based on the different model predictions, lies between 0.2 and 0.4, which is about one-half of the estimates for the standard deviation of the combined model uncertainty and random variability. Parametric uncertainty, based on variation of just the average rupture velocity, is shown to be consistent in amplitude with previous estimates, showing percentage changes in ground motion from 50% to 300% when rupture velocity changes from 2.5 to 2.9 km/s. In addition, there is some evidence that mean biases can be reduced by averaging ground-motion estimates from different methods.
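
The natural-log standard deviations quoted above convert to multiplicative factors on ground motion via exp(sigma). A small sketch using the 0.5 and 0.8 bounds from the abstract:

```python
import math

# A standard deviation in natural-log units corresponds to a multiplicative
# spread in ground motion: one sigma spans a factor of exp(sigma).
def sigma_ln_to_factor(sigma_ln):
    return math.exp(sigma_ln)

factor_low = sigma_ln_to_factor(0.5)   # ~1.65x spread at one sigma
factor_high = sigma_ln_to_factor(0.8)  # ~2.23x spread at one sigma
```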

  20. Determination of chloroacetanilide herbicide metabolites in water using high-performance liquid chromatography-diode array detection and high-performance liquid chromatography/mass spectrometry

    USGS Publications Warehouse

    Hostetler, K.A.; Thurman, E.M.

    2000-01-01

    Analytical methods using high-performance liquid chromatography-diode array detection (HPLC-DAD) and high-performance liquid chromatography/mass spectrometry (HPLC/MS) were developed for the analysis of the following chloroacetanilide herbicide metabolites in water: alachlor ethanesulfonic acid (ESA); alachlor oxanilic acid; acetochlor ESA; acetochlor oxanilic acid; metolachlor ESA; and metolachlor oxanilic acid. Good precision and accuracy were demonstrated for both the HPLC-DAD and HPLC/MS methods in reagent water, surface water, and ground water. The average HPLC-DAD recoveries of the chloroacetanilide herbicide metabolites from water samples spiked at 0.25, 0.5 and 2.0 µg/l ranged from 84 to 112%, with relative standard deviations of 18% or less. The average HPLC/MS recoveries of the metabolites from water samples spiked at 0.05, 0.2 and 2.0 µg/l ranged from 81 to 118%, with relative standard deviations of 20% or less. The limit of quantitation (LOQ) for all metabolites using the HPLC-DAD method was 0.20 µg/l, whereas the LOQ using the HPLC/MS method was 0.05 µg/l. These metabolite-determination methods are valuable for acquiring information about water quality and the fate and transport of the parent chloroacetanilide herbicides in water. Copyright (C) 2000 Elsevier Science B.V.

  1. Performance in physical examination on the USMLE Step 2 Clinical Skills examination.

    PubMed

    Peitzman, Steven J; Cuddy, Monica M

    2015-02-01

    To provide descriptive information about history-taking (HX) and physical examination (PE) performance for U.S. medical students as documented by standardized patients (SPs) during the Step 2 Clinical Skills (CS) component of the United States Medical Licensing Examination. The authors examined two hypotheses: (1) Students perform worse in PE compared with HX, and (2) for PE, students perform worse in the musculoskeletal system and neurology compared with other clinical domains. The sample included 121,767 student-SP encounters based on 29,442 examinees from U.S. medical schools who took Step 2 CS for the first time in 2011. The encounters comprised 107 clinical presentations, each categorized into one of five clinical domains: cardiovascular, gastrointestinal, musculoskeletal, neurological, and respiratory. The authors compared mean percent-correct scores for HX and PE via a one-tailed paired-samples t test and examined mean score differences by clinical domain using analysis of variance techniques. Average PE scores (59.6%) were significantly lower than average HX scores (78.1%). The range of scores for PE (51.4%-72.7%) was larger than for HX (74.4%-81.0%), and the standard deviation for PE scores (28.3) was twice as large as the HX standard deviation (14.7). PE performance was significantly weaker for musculoskeletal and neurological encounters compared with other encounters. U.S. medical students perform worse on PE than HX; PE performance was weakest in musculoskeletal and neurology clinical domains. Findings may reflect imbalances in U.S. medical education, but more research is needed to fully understand the relationships among PE instruction, assessment, and proficiency.

  2. The Assessment and Potential Implications of the Myocardial Performance Index Post Exercise in an at Risk Population.

    PubMed

    Ruisi, Michael; Levine, Michael; Finkielstein, Dennis

    2013-12-01

    The myocardial performance index (MPI), first described by Chuwa Tei in 1995, is a relatively new echocardiographic variable used for assessment of overall cardiac function. Previous studies have demonstrated the MPI to be a summary representation of both left ventricular systolic and diastolic function, with prognostic value in patients with coronary artery disease as well as symptomatic heart failure. Ninety patients with either established coronary artery disease (CAD) or CAD risk factors underwent routine treadmill exercise stress testing with two-dimensional Doppler echocardiography using the standard Bruce protocol. Both resting and stress MPI values were measured for all 90 patients. Using a normal MPI cutoff of ≤ 0.47, the prevalence of an abnormal resting MPI was 72/90 (80%), and the prevalence of an abnormal stress MPI was 48/90 (53.33%). The average resting MPI for the cohort was 0.636 with a standard deviation of 0.182; the average stress MPI was 0.530 with a standard deviation of 0.250. The P value from a one-tailed dependent t-test was < 0.05. We postulate that these findings reflect that the MPI (Tei) index assessed during exercise may be a sensitive indicator of occult coronary disease in an at-risk group, independent of wall motion assessment.
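
The comparison above is a one-tailed dependent (paired) t-test. A sketch with synthetic stand-in data drawn to match the reported summary statistics (n = 90, resting MPI 0.636 ± 0.182, stress MPI 0.530 ± 0.250); the simulated cohort is hypothetical, not the study data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the cohort (n = 90), matching the abstract's
# summary statistics: resting MPI ~ 0.636 (SD 0.182), stress ~ 0.530 (SD 0.250).
n = 90
rest = rng.normal(0.636, 0.182, n)
stress = rng.normal(0.530, 0.250, n)

# Paired (dependent) t statistic computed on the per-patient differences.
diff = rest - stress
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))
# Compare t_stat with the one-tailed critical value (~1.66 for 89 d.o.f.,
# alpha = 0.05) to decide significance.
```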

  3. Development of a benchmark factor to detect wrinkles in bending parts

    NASA Astrophysics Data System (ADS)

    Engel, Bernd; Zehner, Bernd-Uwe; Mathes, Christian; Kuhnhen, Christopher

    2013-12-01

    The rotary draw bending process finds special use in the bending of parts with small bending radii. Because the forming zone is supported during the bending process, semi-finished products with small wall thicknesses can be bent. One typical quality characteristic is the emergence of corrugations and wrinkles at the inside arc. Presently, the standard for the evaluation of wrinkles is insufficient: the wrinkles' distribution along the longitudinal axis of the tube is reduced to a single average value [1], and the individual wrinkles themselves are not evaluated. Because an adequate basis of assessment is lacking, coordination problems between customers and suppliers occur; these result from the imprecision caused by the lack of quantitative evaluability of the geometric deviations at the inside arc. The benchmark factor for the inside arc presented in this article is an approach to holistically evaluate these geometric deviations. The classification of geometric deviations is carried out according to the area of the geometric characteristics and the respective flank angles.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tittiranonda, P.; Burastero, S.; Shih, M.

    This study presents an evaluation of the Apple Adjustable Keyboard based on subjective preference and observed joint angles during typing. Thirty-five keyboard users were asked to use the Apple Adjustable Keyboard for 7-14 days and rate its various characteristics. Our findings suggest that the most preferred opening angles range from 11-20°. The mean ulnar deviation on the Apple Adjustable Keyboard is 11°, compared to 16° on the standard keyboard. The mean extension decreased from 24° to 16° when using the adjustable keyboard. When asked to subjectively rate the adjustable keyboard in comparison to the standard, the average subject felt that the Apple Adjustable Keyboard was more comfortable and easier to use than the standard flat keyboard.

  5. Plume particle collection and sizing from static firing of solid rocket motors

    NASA Technical Reports Server (NTRS)

    Sambamurthi, Jay K.

    1995-01-01

    A unique dart system has been designed and built at the NASA Marshall Space Flight Center to collect aluminum oxide plume particles from the plumes of large-scale solid rocket motors, such as the space shuttle RSRM. The capability of this system to collect clean samples from both the vertically fired MNASA (18.3% scaled version of the RSRM) motors and the horizontally fired RSRM motor has been demonstrated. The particle mass-averaged diameters, d43, measured from the samples for the different motors ranged from 8 to 11 µm and were independent of the dart collection surface and the motor burn time. The measured results agreed well with those calculated using the industry-standard Hermsen's correlation, within the standard deviation of the correlation. For each of the samples analyzed from both MNASA and RSRM motors, the distribution of the cumulative mass fraction of the plume oxide particles as a function of particle diameter was best described by a monomodal log-normal distribution with a standard deviation of 0.13-0.15. This distribution agreed well with the theoretical prediction by Salita using the OD3P code for the RSRM motor at the nozzle exit plane.

  6. Passive correlation ranging of a geostationary satellite using DVB-S payload signals.

    NASA Astrophysics Data System (ADS)

    Shakun, Leonid; Shulga, Alexandr; Sybiryakova, Yevgeniya; Bushuev, Felix; Kaliuzhnyi, Mykola; Bezrukovs, Vladislavs; Moskalenko, Sergiy; Kulishenko, Vladislav; Balagura, Oleg

    2016-07-01

    Passive correlation ranging (PaCoRa) for geostationary satellites is now considered an alternative to tone-ranging (https://artes.esa.int/search/node/PaCoRa). The PaCoRa method has been employed at the Research Institute "Nikolaev astronomical observatory" since the first experiment in August 2011, with two stations spatially separated by 150 km. PaCoRa has been considered as an independent method for tracking the future Ukrainian geostationary satellite "Lybid'". The radio engineering complex (RC) for passive ranging now consists of five spatially separated stations receiving digital satellite television and a data processing center located in Mykolaiv. The stations are located in Kyiv, Kharkiv, Mukacheve, Mykolaiv (Ukraine) and in Ventspils (Latvia). Each station has identical equipment, which allows synchronous recording of fragments of the DVB-S signal from the quadrature detector output of a satellite television receiver. The fragments are recorded every second, with station synchronization provided by GPS receivers. Samples of the complex signal obtained in this way are archived and sent to the data processing center over the Internet, where the time differences of arrival (TDOA) for pairs of stations are determined by correlation processing of the received signals. The TDOA values measured every second are used for orbit determination (OD) of the satellite. The results of orbit determination of the geostationary telecommunication satellite "Eutelsat-13B" (13° East), obtained during about four months of observations in 2015, are presented in the report. The TDOA and OD accuracies are also given. The single-measurement error (1 sigma) of the TDOA is about 8.7 ns for all pairs of stations.
Standard deviations and average values of the residuals between the observed TDOA and the TDOA computed using the orbit elements obtained from optical measurements are estimated for the pairs Kharkiv-Mykolaiv and Mukacheve-Mykolaiv. The standard deviations do not exceed 10 ns for both pairs, and the average values are +10 ns and -106 ns, respectively. We discuss the residuals between the observed TDOA and TDOA estimates calculated from fitted models of satellite motion: the SGP4/SDP4 model and a model based on numerical integration of the equations of motion, taking into account the geopotential and the perturbations from the Moon and the Sun. The residuals from the SGP4/SDP4 model show periodic deviations due to the inaccuracy of that model; as a result, the estimated standard deviation of the satellite position is about 60 m at the epoch of the SGP4/SDP4 orbit elements. The residuals for the numerical model over an interval of one day show no low-frequency deviations; in this case, the estimated standard deviation of the satellite position is about 12 m at the epoch of the numerical orbit elements. Keywords: DVB-S, geostationary satellite, orbit determination, passive ranging.
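
Correlation processing of this kind can be sketched in a few lines: the lag that maximizes the cross-correlation of two stations' recordings estimates the TDOA. The signal, noise level, sample rate, and delay below are all invented for illustration; this is not the complex's actual processing chain.

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 1_000_000.0          # assumed sample rate of the recorded fragments, Hz
true_delay_samples = 37   # simulated propagation-time difference

# A common noise-like waveform (standing in for the DVB-S signal) seen by two
# stations, one copy delayed and both corrupted by independent receiver noise.
signal = rng.standard_normal(4096)
station_a = signal + 0.1 * rng.standard_normal(4096)
station_b = np.roll(signal, true_delay_samples) + 0.1 * rng.standard_normal(4096)

# Correlation processing: the lag maximizing the cross-correlation estimates
# the time difference of arrival (TDOA) of station_b relative to station_a.
xcorr = np.correlate(station_b, station_a, mode='full')
lags = np.arange(-len(station_a) + 1, len(station_a))
tdoa_samples = lags[np.argmax(xcorr)]
tdoa_seconds = tdoa_samples / fs
```

Sub-sample (nanosecond-level) precision, as reported above, would additionally require interpolating the correlation peak.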

  7. Updated U.S. population standard for the Veterans RAND 12-item Health Survey (VR-12).

    PubMed

    Selim, Alfredo J; Rogers, William; Fleishman, John A; Qian, Shirley X; Fincke, Benjamin G; Rothendler, James A; Kazis, Lewis E

    2009-02-01

    The purpose of this project was to develop an updated U.S. population standard for the Veterans RAND 12-item Health Survey (VR-12). We used a well-defined and nationally representative sample of the U.S. population from 52,425 responses to the Medical Expenditure Panel Survey (MEPS) collected between 2000 and 2002. We applied modified regression estimates to update the non-proprietary 1990 scoring algorithms. We applied the updated standard to the Medicare Health Outcomes Survey (HOS) to compute the VR-12 physical (PCS(MEPS standard)) and mental (MCS(MEPS standard)) component summaries based on the MEPS. We compared these scores to PCS and MCS based on the 1990 U.S. population standard. Using the updated U.S. population standard, the average VR-12 PCS(MEPS standard) and MCS(MEPS standard) scores in the Medicare HOS were 39.82 (standard deviation [SD] = 12.2) and 50.08 (SD = 11.4), respectively. For the same Medicare HOS, the average PCS and MCS scores based on the 1990 standard were 1.40 points higher and 0.99 points lower than the VR-12 PCS and MCS, respectively. Changes in the U.S. population between 1990 and today make the old standard obsolete for the VR-12; the updated standard developed here is widely available to serve as a contemporary standard for future health-related quality of life (HRQoL) assessments.

  8. Feasibility of Coherent and Incoherent Backscatter Experiments from the AMPS Laboratory. Technical Section

    NASA Technical Reports Server (NTRS)

    Mozer, F. S.

    1976-01-01

    A computer program simulated the spectrum which resulted when a radar signal was transmitted into the ionosphere for a finite time and received for an equal finite interval. The spectrum derived from this signal is statistical in nature because the signal is scattered from the ionosphere, which is statistical in nature. Many estimates of any property of the ionosphere can be made. Their average value will approach the average property of the ionosphere which is being measured. Due to the statistical nature of the spectrum itself, the estimators will vary about this average. The square root of the variance about this average is called the standard deviation, an estimate of the error which exists in any particular radar measurement. In order to determine the feasibility of the space shuttle radar, the magnitude of these errors for measurements of physical interest must be understood.
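
The estimator logic described above can be sketched with simulated data: repeated noisy measurements scatter about the true value, their standard deviation estimates the single-measurement error, and the error of their average shrinks as 1/sqrt(n). All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in for repeated radar estimates of one ionospheric
# property: each single measurement is the true value plus statistical noise.
true_value = 250.0          # arbitrary units, illustrative only
single_shot_noise = 40.0

estimates = true_value + single_shot_noise * rng.standard_normal(5000)

# The estimates scatter about the true value; their standard deviation is the
# error of any one measurement, and the error of the average of n measurements
# shrinks as 1/sqrt(n).
single_error = estimates.std(ddof=1)
error_of_mean = single_error / np.sqrt(len(estimates))
```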

  9. Redshift drift in an inhomogeneous universe: averaging and the backreaction conjecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koksbang, S.M.; Hannestad, S., E-mail: koksbang@phys.au.dk, E-mail: sth@phys.au.dk

    2016-01-01

    An expression for the average redshift drift in a statistically homogeneous and isotropic dust universe is given. The expression takes the same form as the expression for the redshift drift in FLRW models. It is used for a proof-of-principle study of the effects of backreaction on redshift drift measurements by combining the expression with two-region models. The study shows that backreaction can lead to positive redshift drift at low redshifts, exemplifying that a positive redshift drift at low redshifts does not require dark energy. Moreover, the study illustrates that models without a dark energy component can have an average redshiftmore » drift observationally indistinguishable from that of the standard model according to the currently expected precision of ELT measurements. In an appendix, spherically symmetric solutions to Einstein's equations with inhomogeneous dark energy and matter are used to study deviations from the average redshift drift and effects of local voids.« less

  10. Transboundary atmospheric lead pollution.

    PubMed

    Erel, Yigal; Axelrod, Tamar; Veron, Alain; Mahrer, Yitzak; Katsafados, Petros; Dayan, Uri

    2002-08-01

    A high-temporal resolution collection technique was applied to refine aerosol sampling in Jerusalem, Israel. Using stable lead isotopes, lead concentrations, synoptic data, and atmospheric modeling, we demonstrate that lead detected in the atmosphere of Jerusalem is not only anthropogenic lead of local origin but also lead emitted in other countries. Fifty-seven percent of the collected samples contained a nontrivial fraction of foreign atmospheric lead and had 206Pb/207Pb values which deviated from the local petrol-lead value (206Pb/207Pb = 1.113) by more than two standard deviations (0.016). Foreign 206Pb/207Pb values were recorded in Jerusalem on several occasions. The synoptic conditions on these dates and reported values of the isotopic composition of lead emitted in various countries around Israel suggest that the foreign lead was transported to Jerusalem from Egypt, Turkey, and East Europe. The average concentration of foreign atmospheric lead in Jerusalem was 23 ± 17 ng/m3, similar to the average concentration of local atmospheric lead, 21 ± 18 ng/m3. Hence, the load of foreign atmospheric lead is similar to the load of local atmospheric lead in Jerusalem.
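
The screening rule described above can be sketched as a two-standard-deviation test, reading the quoted 0.016 as the width of the 2-sigma band about the local petrol-lead value (an interpretation of the abstract's wording); the example ratios are hypothetical.

```python
# Screening rule sketch: a sample is flagged as containing foreign lead when
# its 206Pb/207Pb ratio deviates from the local petrol-lead value (1.113) by
# more than the quoted two-standard-deviation band (0.016) -- an assumed
# reading of the abstract.
LOCAL_RATIO = 1.113
TWO_SIGMA = 0.016

def is_foreign(ratio_206_207):
    """True if the measured isotope ratio lies outside the local 2-sigma band."""
    return abs(ratio_206_207 - LOCAL_RATIO) > TWO_SIGMA

# Hypothetical measured ratios, for illustration only.
flags = [is_foreign(r) for r in (1.113, 1.105, 1.140, 1.098)]
```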

  11. Back in the saddle: large-deviation statistics of the cosmic log-density field

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Codis, S.; Pichon, C.; Bernardeau, F.; Reimberg, P.

    2016-08-01

    We present a first principle approach to obtain analytical predictions for spherically averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few per cent compared to the numerical integration, regardless of the density under consideration and in excellent agreement with N-body simulations for a wide range of densities. This formalism should prove valuable for accurately probing the quasi-linear scales of low-redshift surveys for arbitrary primordial power spectra.

  12. Implementation of small field radiotherapy dosimetry for a spinal metastasis case

    NASA Astrophysics Data System (ADS)

    Rofikoh, Wibowo, W. E.; Pawiro, S. A.

    2017-07-01

    The main objective of this study was to determine the dose profile of small-field radiotherapy in a spinal metastasis case using source-axis-distance (SAD) techniques. In addition, we evaluated and compared the dose planning of stereotactic body radiation therapy (SBRT) and conventional techniques against measurements with Exradin A16 and Gafchromic EBT3 film dosimeters. The results showed that EBT3 film had the highest precision and accuracy, with an average standard deviation of ±1.7 and a maximum discrepancy of 2.6%. The average Full Width at Half Maximum (FWHM) and its largest deviation for the small field size of 0.8 x 0.8 cm2 were 0.82 cm and 16.3%, respectively, while they were around 2.36 cm and 3% for the field size of 2.4 x 2.4 cm2. The ratio of penumbra width to collimation was around 37.1% for the field size of 0.8 x 0.8 cm2 and 12.4% for the field size of 2.4 x 2.4 cm2.
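
For a Gaussian beam profile, the FWHM quoted above relates to the profile's standard deviation through the general identity FWHM = 2*sqrt(2 ln 2)*sigma ≈ 2.355*sigma (a textbook relation, not a value from this study):

```python
import math

# General Gaussian identity (not study data): FWHM = 2*sqrt(2*ln 2) * sigma.
def fwhm_from_sigma(sigma):
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

fwhm = fwhm_from_sigma(1.0)   # ~2.355 for unit sigma
```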

  13. Perturbed effects at radiation physics

    NASA Astrophysics Data System (ADS)

    Külahcı, Fatih; Şen, Zekâi

    2013-09-01

    Perturbation methodology is applied in order to assess the behavior of the linear attenuation coefficient, mass attenuation coefficient, and cross-section with random components in the basic variables, such as the radiation amounts frequently used in radiation physics and chemistry. Additionally, the layer attenuation coefficient (LAC) and perturbed LAC (PLAC) are proposed for different contact materials. Perturbation methodology provides the opportunity to obtain results with random deviations from the average behavior of each variable that enters the whole mathematical expression. The basic photon intensity variation expression, the inverse exponential power law (Beer-Lambert's law), is adopted for the exposition of the perturbation method. Perturbed results are presented not only in terms of the mean but also the standard deviation and the correlation coefficients. Such perturbation expressions allow one to assess small random variability in the basic variables.
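
A minimal Monte Carlo sketch of the perturbation idea: let the linear attenuation coefficient carry a random component and propagate Beer-Lambert's law to obtain the mean and standard deviation of the transmitted intensity. The numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Beer-Lambert's law I = I0 * exp(-mu * x) with a randomly perturbed
# attenuation coefficient mu. All values below are illustrative assumptions.
I0 = 1.0                       # incident photon intensity
x = 2.0                        # absorber thickness, cm
mu_mean, mu_sd = 0.20, 0.02    # linear attenuation coefficient, 1/cm

mu = rng.normal(mu_mean, mu_sd, 100_000)
I = I0 * np.exp(-mu * x)

# The perturbed result is reported as a mean plus a standard deviation,
# mirroring how the paper presents its perturbed quantities.
mean_I = I.mean()
sd_I = I.std(ddof=1)
```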

  14. [Study on the reproducibility of ACTH concentrations in plasma of horses with and without equine Cushing syndrome].

    PubMed

    Gehlen, Heidrun; Bradaric, Zrinkja

    2013-01-01

    The evaluation of plasma ACTH and the dexamethasone suppression test are considered the methods of choice to evaluate the course of therapy of pituitary pars intermedia dysfunction (PPID). Sampling protocols as well as vacutainers for analysis differ between the laboratories. To evaluate the reproducibility of plasma ACTH measurement between four different laboratories (A, B, C, D) in Germany as well as within the laboratories themselves, ten horses with previously diagnosed PPID and four healthy horses were sampled and analyzed. Each laboratory received two differently labeled samples from each horse, which had been drawn at the same time (blinded samples). Sampling was performed in the morning at the same time. The sampling vacutainers (with and without addition of coagulation and proteinase inhibitors) and postage of the samples were handled according to laboratory standards. In one laboratory, the influence of the time of centrifugation (immediately after taking blood versus after one hour) was determined. The samples were processed and analyzed according to laboratory protocols. Determination of ACTH levels was performed using a chemiluminescence immunoassay. In total, 132 blood samples were analyzed. Duplicate blood samples of the same horse showed a standard deviation ranging from ±6 to ±27 pg/ml within the laboratories (mean 19.29 pg/ml); the coefficient of variation for repeatability was 13.48%. Blood samples of the same horse yielded ACTH levels of 121 pg/ml in the first sample and < 5 pg/ml in the second. The standard deviation of the measured ACTH values between the laboratories was ±26.4 pg/ml (mean 27.44 pg/ml); the coefficient of variation for reproducibility was 18.36%. In a 20-year-old gelding, the lowest ACTH value was 60.9 pg/ml, whereas the highest measured value was 108 pg/ml.
Immediate centrifugation of blood samples resulted in significantly higher ACTH values, by an average of 11.6 pg/ml. The additional use of proteinase inhibitors (aprotinin) showed no influence on ACTH levels in this study.

  15. Exploring Students' Conceptions of the Standard Deviation

    ERIC Educational Resources Information Center

    delMas, Robert; Liu, Yan

    2005-01-01

    This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…

  16. 7 CFR 801.4 - Tolerances for dockage testers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...

  17. 7 CFR 801.4 - Tolerances for dockage testers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...

  18. 7 CFR 801.4 - Tolerances for dockage testers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...

  19. 7 CFR 801.4 - Tolerances for dockage testers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...

  20. Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation

    ERIC Educational Resources Information Center

    Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann

    2017-01-01

    This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…

  1. 7 CFR 801.4 - Tolerances for dockage testers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...

  2. 7 CFR 801.6 - Tolerances for moisture meters.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat Mid ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat High ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat...

  3. Measurement of stream channel habitat using sonar

    USGS Publications Warehouse

    Flug, Marshall; Seitz, Heather; Scott, John

    1998-01-01

    An efficient and low-cost technique using a sonar system was evaluated for describing channel geometry and quantifying inundated area in a large river. The boat-mounted portable sonar equipment was used to record water depths and river width measurements for direct storage on a laptop computer. The field data collected from repeated traverses at a cross-section were evaluated to determine the precision of the system and field technique. Results from validation at two different sites showed average sample standard deviations (S.D.s) of 0.12 m for these complete cross-sections, with coefficients of variation of 10%. Validation using only the mid-channel river cross-section data yielded an average sample S.D. of 0.05 m, with a coefficient of variation below 5%, at a stable and gauged river site using only measurements of water depths greater than 0.6 m. Accuracy of the sonar system was evaluated by comparison to traditionally surveyed transect data from a regularly gauged site. We observed an average mean squared deviation of 46.0 cm², considering only that portion of the cross-section inundated by more than 0.6 m of water. Our procedure proved to be a reliable, accurate, safe, quick, and economical method to record river depths, discharges, bed conditions, and substratum composition necessary for stream habitat studies.

  4. Statistical evaluation of an inductively coupled plasma atomic emission spectrometric method for routine water quality testing

    USGS Publications Warehouse

    Garbarino, J.R.; Jones, B.E.; Stein, G.P.

    1985-01-01

    In an interlaboratory test, inductively coupled plasma atomic emission spectrometry (ICP-AES) was compared with flame atomic absorption spectrometry and molecular absorption spectrophotometry for the determination of 17 major and trace elements in 100 filtered natural water samples. No unacceptable biases were detected. The analysis precision of ICP-AES was found to be equal to or better than alternative methods. Known-addition recovery experiments demonstrated that the ICP-AES determinations are accurate to between plus or minus 2 and plus or minus 10 percent; four-fifths of the tests yielded average recoveries of 95-105 percent, with an average relative standard deviation of about 5 percent.

  5. Method of validating measurement data of a process parameter from a plurality of individual sensor inputs

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1998-01-01

    A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input including a preset tolerance against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input including a present tolerance against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor as flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from each of all the sensors are compared against the last validated measurement and the value from the sensor input that deviates the least from the last valid measurement is displayed.
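
    The two-pass scheme described above can be reduced to a short sketch. The function below is an illustrative simplification, not the patented implementation: the per-sensor "preset tolerance" is collapsed into a single symmetric bound, and all numeric values are hypothetical.

```python
def validate_measurement(inputs, tol, last_valid):
    """Two-pass validation of redundant sensor inputs (sketch).

    inputs: sensor readings from one scan
    tol: deviation tolerance applied against the pass average
    last_valid: last validated measurement, used as fallback on fault
    Returns (value, status) where status is 'valid' or 'fault'.
    """
    # First pass: average all inputs and flag as suspect any input that
    # deviates from that average by more than the tolerance.
    first_avg = sum(inputs) / len(inputs)
    good = [x for x in inputs if abs(x - first_avg) <= tol]
    # A validation fault occurs if fewer than two inputs are good.
    if len(good) < 2:
        # Fall back to the input that deviates least from the last
        # validated measurement.
        return min(inputs, key=lambda x: abs(x - last_valid)), 'fault'
    # Second pass: average the good inputs and deviation-check them again.
    second_avg = sum(good) / len(good)
    if all(abs(x - second_avg) <= tol for x in good):
        return second_avg, 'valid'
    return min(inputs, key=lambda x: abs(x - last_valid)), 'fault'
```

    With one outlier among four readings, the second-pass average excludes it; with too few agreeing inputs, the fallback to the last validated value fires.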

  6. Comparative 187Re-187Os systematics of chondrites: Implications regarding early solar system processes

    USGS Publications Warehouse

    Walker, R.J.; Horan, M.F.; Morgan, J.W.; Becker, H.; Grossman, J.N.; Rubin, A.E.

    2002-01-01

    A suite of 47 carbonaceous, enstatite, and ordinary chondrites are examined for Re-Os isotopic systematics. There are significant differences in the 187Re/188Os and 187Os/188Os ratios of carbonaceous chondrites compared with ordinary and enstatite chondrites. The average 187Re/188Os for carbonaceous chondrites is 0.392 ± 0.015 (excluding the CK chondrite, Karoonda), compared with 0.422 ± 0.025 and 0.421 ± 0.013 for ordinary and enstatite chondrites (1σ standard deviations). These ratios, recast into elemental Re/Os ratios, are as follows: 0.0814 ± 0.0031, 0.0876 ± 0.0052 and 0.0874 ± 0.0027 respectively. Correspondingly, the 187Os/188Os ratios of carbonaceous chondrites average 0.1262 ± 0.0006 (excluding Karoonda), and ordinary and enstatite chondrites average 0.1283 ± 0.0017 and 0.1281 ± 0.0004, respectively (1σ standard deviations). The new results indicate that the Re/Os ratios of meteorites within each group are, in general, quite uniform. The minimal overlap between the isotopic compositions of ordinary and enstatite chondrites vs. carbonaceous chondrites indicates long-term differences in Re/Os for these materials, most likely reflecting chemical fractionation early in solar system history. A majority of the chondrites do not plot within analytical uncertainties of a 4.56-Ga reference isochron. Most of the deviations from the isochron are consistent with minor, relatively recent redistribution of Re and/or Os on a scale of millimeters to centimeters. Some instances of the redistribution may be attributed to terrestrial weathering; others are most likely the result of aqueous alteration or shock events on the parent body within the past 2 Ga. The 187Os/188Os ratio of Earth's primitive upper mantle has been estimated to be 0.1296 ± 8. 
If this composition was set via addition of a late veneer of planetesimals after core formation, the composition suggests the veneer was dominated by materials that had Re/Os ratios most similar to ordinary and enstatite chondrites. © 2002 Elsevier Science Ltd.

  7. SU-C-BRD-06: Results From a 5 Patient in Vivo Rectal Wall Dosimetry Study Using Plastic Scintillation Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wootton, L; Kudchadker, R; Lee, A

    Purpose: To evaluate the performance characteristics of plastic scintillation detectors (PSDs) in an in vivo environment for external beam radiation, and to establish the usefulness and ease of implementation of a PSD based in vivo dosimetry system for routine clinical use. Methods: A five patient IRB approved in vivo dosimetry study was performed. Five patients with prostate cancer were enrolled and PSDs were used to monitor rectal wall dose and verify the delivered dose for approximately two fractions each week over the course of their treatment (approximately fourteen fractions), resulting in a total of 142 in vivo measurements. A set of two PSDs was fabricated for each patient. At each monitored fraction the PSDs were attached to the anterior surface of an endorectal balloon used to immobilize the patient's prostate during treatment. A CT scan was acquired with a CT-on-rails linear accelerator to localize the detectors and to calculate the dose expected to be delivered to the detectors. Each PSD acquired data in 10 second intervals for the duration of the treatment. The deviation between expected and measured cumulative dose was calculated for each detector for each fraction, and averaged over each patient and the patient population as a whole. Results: The average difference between expected dose and measured dose ranged from -3.3% to 3.3% for individual patients, with standard deviations between 5.6% and 7.1% for four of the patients. The average difference for the entire population was -0.4% with a standard deviation of 2.8%. The detectors were well tolerated by the patients and the system did not interrupt the clinical workflow. Conclusion: PSDs perform well as in vivo dosimeters, exhibiting good accuracy and precision. This, combined with the practicability of using such a system, positions the PSD as a strong candidate for clinical in vivo dosimetry in the future. 
This work was supported in part by the National Cancer Institute through an R01 grant (CA120198-01A2) and by the American Legion Auxiliary through the American Auxiliary Fellowship in Cancer Research.

  8. Production of NO2 from Photolysis of Peroxyacetyl Nitrate

    NASA Technical Reports Server (NTRS)

    Mazely, Troy L.; Friedl, Randall R.; Sander, Stanley P.

    1965-01-01

    Peroxyacetyl nitrate (PAN) vapor was photolyzed at 248 nm, and the NO2 photoproduct was detected by laser-induced fluorescence. The quantum yield for the production of NO2 from PAN photolysis was determined by comparison to HNO3 photolysis data taken under identical experimental conditions. The average of data collected over a range of total pressures, precursor concentrations, and buffer gases was 0.83 +/- 0.09 for the NO2 quantum yield, where the statistical uncertainty is 2 standard deviations.

  9. Assessing the Effectiveness of the Early Aberration Reporting System (EARS) for Early Event Detection of the H1N1 (Swine Flu) Virus

    DTIC Science & Technology

    2010-09-01

    Given that the sample standard deviation is based on the previous 7–9 days worth of data, it is no wonder that a 3 sigma threshold fails to signal...
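
    For context, a C1-style threshold of the kind EARS applies can be sketched in a few lines: flag a day whose count exceeds the mean plus k standard deviations of a short trailing baseline. The 7-day window and k = 3 below are illustrative choices matching the 7–9 day / 3 sigma figures in the excerpt, not the exact EARS parameters.

```python
import statistics

def ears_c1_flags(counts, baseline=7, k=3.0):
    """For each day after the baseline period, compare the day's count
    with mean + k*SD of the preceding `baseline` days (C1-style sketch)."""
    flags = []
    for t in range(baseline, len(counts)):
        window = counts[t - baseline:t]
        mu = statistics.mean(window)
        sd = statistics.stdev(window)
        flags.append(counts[t] > mu + k * sd)
    return flags
```

    The sketch also illustrates the excerpt's criticism: with so short a baseline, a single large day inflates the trailing standard deviation and raises the threshold for the days that follow.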

  10. Thermal sensing of cryogenic wind tunnel model surfaces Evaluation of silicon diodes

    NASA Technical Reports Server (NTRS)

    Daryabeigi, K.; Ash, R. L.; Dillon-Townes, L. A.

    1986-01-01

    Different sensors and installation techniques for surface temperature measurement of cryogenic wind tunnel models were investigated. Silicon diodes were selected for further consideration because of their good inherent accuracy. Their average absolute temperature deviation in comparison tests with standard platinum resistance thermometers was found to be 0.2 K in the range from 125 to 273 K. Subsurface temperature measurement was selected as the installation technique in order to minimize aerodynamic interference. Temperature distortion caused by an embedded silicon diode was studied numerically.

  11. Thermal sensing of cryogenic wind tunnel model surfaces - Evaluation of silicon diodes

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Ash, Robert L.; Dillon-Townes, Lawrence A.

    1986-01-01

    Different sensors and installation techniques for surface temperature measurement of cryogenic wind tunnel models were investigated. Silicon diodes were selected for further consideration because of their good inherent accuracy. Their average absolute temperature deviation in comparison tests with standard platinum resistance thermometers was found to be 0.2 K in the range from 125 to 273 K. Subsurface temperature measurement was selected as the installation technique in order to minimize aerodynamic interference. Temperature distortion caused by an embedded silicon diode was studied numerically.

  12. Viability of Commercially Available Bleach for Water Treatment in Developing Countries

    PubMed Central

    2009-01-01

    Treating household water with low-cost, widely available commercial bleach is recommended by some organizations to improve water quality and reduce disease in developing countries. I analyzed the chlorine concentration of 32 bleaches from 12 developing countries; the average error between advertised and measured concentration was 35% (range = −45% to 100%; standard deviation = 40%). Because of disparities between advertised and actual concentration, the use of commercial bleach for water treatment in developing countries is not recommended without ongoing quality control testing. PMID:19762657

  13. Viability of commercially available bleach for water treatment in developing countries.

    PubMed

    Lantagne, Daniele S

    2009-11-01

    Treating household water with low-cost, widely available commercial bleach is recommended by some organizations to improve water quality and reduce disease in developing countries. I analyzed the chlorine concentration of 32 bleaches from 12 developing countries; the average error between advertised and measured concentration was 35% (range = −45% to 100%; standard deviation = 40%). Because of disparities between advertised and actual concentration, the use of commercial bleach for water treatment in developing countries is not recommended without ongoing quality control testing.

  14. Is Survival Better at Hospitals With Higher “End-of-Life” Treatment Intensity?

    PubMed Central

    Barnato, Amber E.; Chang, Chung-Chou H.; Farrell, Max H.; Lave, Judith R.; Roberts, Mark S.; Angus, Derek C.

    2013-01-01

    Background Concern regarding wide variations in spending and intensive care unit use for patients at the end of life hinges on the assumption that such treatment offers little or no survival benefit. Objective To explore the relationship between hospital “end-of-life” (EOL) treatment intensity and postadmission survival. Research Design Retrospective cohort analysis of Pennsylvania Health Care Cost Containment Council discharge data April 2001 to March 2005 linked to vital statistics data through September 2005 using hospital-level correlation, admission-level marginal structural logistic regression, and pooled logistic regression to approximate a Cox survival model. Subjects A total of 1,021,909 patients ≥65 years old, incurring 2,216,815 admissions in 169 Pennsylvania acute care hospitals. Measures EOL treatment intensity (a summed index of standardized intensive care unit and life-sustaining treatment use among patients with a high predicted probability of dying [PPD] at admission) and 30- and 180-day postadmission mortality. Results There was a nonlinear negative relationship between hospital EOL treatment intensity and 30-day mortality among all admissions, although patients with higher PPD derived the greatest benefit. Compared with admission at an average intensity hospital, admission to a hospital 1 standard deviation below versus 1 standard deviation above average intensity resulted in an adjusted odds ratio of mortality for admissions at low PPD of 1.06 (1.04–1.08) versus 0.97 (0.96–0.99); average PPD: 1.06 (1.04–1.09) versus 0.97 (0.96–0.99); and high PPD: 1.09 (1.07–1.11) versus 0.97 (0.95– 0.99), respectively. By 180 days, the benefits to intensity attenuated (low PPD: 1.03 [1.01–1.04] vs. 1.00 [0.98–1.01]; average PPD: 1.03 [1.02–1.05] vs. 1.00 [0.98–1.01]; and high PPD: 1.06 [1.04–1.09] vs. 1.00 [0.98–1.02]), respectively. Conclusions Admission to higher EOL treatment intensity hospitals is associated with small gains in postadmission survival. 
The marginal returns to intensity diminish for admission to hospitals above average EOL treatment intensity and wane with time. PMID:20057328

  15. Visualizing the Sample Standard Deviation

    ERIC Educational Resources Information Center

    Sarkar, Jyotirmoy; Rashid, Mamunur

    2017-01-01

    The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
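
    The pairwise interpretation is easy to verify numerically. A minimal sketch (not the authors' code): for n observations, the mean of the squared half-deviations ((x_i − x_j)/2)² over all pairs equals s²/2, so the sample SD is the square root of twice that mean.

```python
import math
from itertools import combinations

def sample_sd(xs):
    # Conventional sample SD: square root of the mean squared deviation
    # from the sample mean, with the usual n-1 denominator.
    n = len(xs)
    m = sum(xs) / n
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))

def pairwise_sd(xs):
    # Pairwise interpretation: square root of twice the mean square of
    # all pairwise half-deviations (x_i - x_j)/2.
    halves = [((a - b) / 2) ** 2 for a, b in combinations(xs, 2)]
    return math.sqrt(2 * sum(halves) / len(halves))
```

    Both functions agree on any sample, since the sum of squared pairwise differences equals n(n−1) times the sample variance.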

  16. [Challenges in building a surgical obesity center].

    PubMed

    Fischer, L; El Zein, Z; Bruckner, T; Hünnemeyer, K; Rudofsky, G; Reichenberger, M; Schommer, K; Gutt, C N; Büchler, M W; Müller-Stich, B P

    2014-04-01

    It is estimated that approximately 1 million adults in Germany suffer from grade III obesity. The aim of this article is to describe the challenges faced when building a surgical obesity center. The inflow of patients as well as the personnel and infrastructure of the interdisciplinary Diabetes and Obesity Center in Heidelberg were analyzed. The distribution of continuous data was described by mean values and standard deviation and analyzed using analysis of variance. The interdisciplinary Diabetes and Obesity Center in Heidelberg was founded in 2006 and offers conservative therapeutic treatment and all currently available operative procedures. For every operative intervention carried out, an average of 1.7 expert reports and 0.3 counter-expertise reports were necessary. The time period from the initial presentation of patients in the department of surgery to an operation was on average 12.8 months (standard deviation, SD ±4.5 months). The 47 patients for whom remuneration for treatment was initially refused had an average body mass index (BMI) of 49.2 kg/m(2), and 39 of these had at least one comorbidity requiring treatment. Of the 45 patients for whom the reason for the refusal of treatment costs was given as a lack of conservative treatment, 30 had undertaken a medically supervised attempt at losing weight over at least 6 months. Additionally, 19 of these patients could document participation in a course at a rehabilitation center, a Xenical® or Reduktil® therapy, or completion of the Optifast® program. For the 20 patients who supposedly lacked a psychosomatic evaluation, an adequate psychosomatic evaluation had in fact been carried out in all cases. The establishment of a surgical obesity center can take several years. An essential prerequisite for success appears to be constructive, targeted cooperation with the health insurance companies.

  17. Calibrated Color and Albedo Maps of Mercury

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.; Lucey, P. G.

    1996-03-01

    In order to determine the albedo and color of the mercurian surface, we are completing calibrated mosaics of Mariner 10 image data. A set of clear filter mosaics is being compiled in such a way as to maximize the signal-to-noise ratio of the data and to allow for a quantitative measure of the precision of the data on a pixel-by-pixel basis. Three major imaging sequences of Mercury were acquired by Mariner 10: incoming first encounter (centered at 20S, 2E), outgoing first encounter (centered at 20N, 175E), and southern hemisphere second encounter (centered at 40S, 100E). For each sequence we are making separate mosaics for each camera (A and B) in order to have independent measurements. For each mosaic, regions of overlap from frame-to-frame are being averaged and the attendant standard deviations are being calculated. Due to the highly redundant nature of the data, each pixel in each mosaic will be an average calculated from 1-10 images. Each mosaic will have a corresponding standard deviation and n (number of measurements) map. A final mosaic will be created by averaging the six independent mosaics. This procedure lessens the effects of random noise and calibration residuals. From these data an albedo map will be produced using an improved photometric function for the Moon. A similar procedure is being followed for the lower resolution color sequences (ultraviolet, blue, orange, ultraviolet polarized). These data will be calibrated to absolute units through comparison of Mariner 10 images acquired of the Moon and Jupiter. Spectral interpretation of these new color and albedo maps will be presented with an emphasis on comparison with the Moon.
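
    The per-pixel bookkeeping described above (average, standard-deviation, and n maps built from 1-10 overlapping frames) can be sketched with a toy representation. The dict-of-pixels layout below is purely illustrative, not the Mariner 10 data format, and the SD uses the population (n-denominator) formula.

```python
import math

def accumulate(stack):
    """Combine overlapping frames into average, standard-deviation, and
    n (count) maps, pixel by pixel. `stack` is a list of frames; each
    frame maps (row, col) -> value, so frames may cover different pixels."""
    n, s, ss = {}, {}, {}
    for frame in stack:
        for pix, v in frame.items():
            n[pix] = n.get(pix, 0) + 1          # number of measurements
            s[pix] = s.get(pix, 0.0) + v        # running sum
            ss[pix] = ss.get(pix, 0.0) + v * v  # running sum of squares
    avg = {p: s[p] / n[p] for p in n}
    # Population SD from the running sums; max(..., 0.0) guards against
    # tiny negative values from floating-point round-off.
    sd = {p: math.sqrt(max(ss[p] / n[p] - avg[p] ** 2, 0.0)) for p in n}
    return avg, sd, n
```

    Keeping only the running sum, sum of squares, and count means frames can be folded in one at a time without storing the whole stack.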

  18. Characteristics of nocturnal coastal boundary layer in Ahtopol based on averaged SODAR profiles

    NASA Astrophysics Data System (ADS)

    Barantiev, Damyan; Batchvarova, Ekaterina; Novitzky, Mikhail

    2014-05-01

    Ground-based remote sensing instruments allow studying the wind regime and the turbulent characteristics of the atmosphere with height, yielding new knowledge and solving practical problems such as air quality assessment, evaluation of mesoscale models with high-resolution data, characterization of the exchange processes between the surface and the atmosphere, climate comfort conditions, and the risk of extreme events. A very important parameter in such studies is the height of the atmospheric boundary layer. Acoustic remote sensing data of the coastal atmospheric boundary layer were explored based on over 4 years of continuous measurements at the meteorological observatory of Ahtopol (Bulgarian Southern Black Sea Coast) under a Bulgarian–Russian scientific agreement. Profiles of 12 parameters from a mid-range acoustic sounding instrument (SCINTEC MFAS) were derived and averaged up to about 600 m after filtering based on wind direction (land- or sea-type night flows). From the whole investigated period of 1454 days of 10-minute-resolution SODAR data, 2296 profiles represented nighttime marine air masses and 1975 profiles represented the nighttime flow from land during the months May to September. Averaged profiles of the 12 SODAR output parameters, with differing data availability with height, were analyzed for both cases. A marine boundary-layer height of about 300 m is identified in the profiles of the standard deviation of vertical wind speed (σw), Turbulent Kinetic Energy (TKE) and eddy dissipation rate (EDR). A nocturnal boundary-layer height of about 420 m was identified from the profiles of the same parameters under flow from land. In addition, the Buoyancy Production (BP = σw³/z) profiles were calculated from the standard deviation of the vertical wind speed and the height z above ground.

  19. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    PubMed

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection (DPSD). With an additional parameter, for the probability of false (lure) recollection the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.

  20. Development of a frequency regulation duty-cycle for standardized energy storage performance testing

    DOE PAGES

    Rosewater, David; Ferreira, Summer

    2016-05-25

    The US DOE Protocol for uniformly measuring and expressing the performance of energy storage systems, first developed in 2012 through inclusive working group activities, provides standardized methodologies for evaluating an energy storage system’s ability to supply specific services to electrical grids. This article elaborates on the data and decisions behind the duty-cycle used for frequency regulation in this protocol. Analysis of a year of publicly available frequency regulation control signal data from a utility was considered in developing the representative signal for this use case. Moreover, this showed that signal standard deviation can be used as a metric for aggressiveness or rigor. From these data, we select representative 2 h long signals that exhibit nearly all of the dynamics of actual usage under two distinct regimens, one for average use and the other for highly aggressive use. Our results were combined into a 24-h duty-cycle comprised of average and aggressive segments. The benefits and drawbacks of the selected duty-cycle are discussed along with its potential implications for the energy storage industry.

  1. Down-Looking Interferometer Study II, Volume I,

    DTIC Science & Technology

    1980-03-01

    [Equations (2) and (3) garbled in source scan.] T′rm is the "reference spectrum", an estimate of the actual spectrum. According to Eq. (2), Z is the standard deviation of the observed contrast spectral radiance ΔN divided by the effective rms system noise.

  2. 40 CFR 61.207 - Radium-226 sampling and measurement procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... B, Method 114. (3) Calculate the mean, x̄₁, and the standard deviation, s₁, of the n₁ radium-226... owner or operator of a phosphogypsum stack shall report the mean, standard deviation, 95th percentile..., Method 114. (4) Recalculate the mean and standard deviation of the entire set of n₂ radium-226...

  3. Climate change and the detection of trends in annual runoff

    USGS Publications Warehouse

    McCabe, G.J.; Wolock, D.M.

    1997-01-01

    This study examines the statistical likelihood of detecting a trend in annual runoff given an assumed change in mean annual runoff, the underlying year-to-year variability in runoff, and serial correlation of annual runoff. Means, standard deviations, and lag-1 serial correlations of annual runoff were computed for 585 stream gages in the conterminous United States, and these statistics were used to compute the probability of detecting a prescribed trend in annual runoff. Assuming a linear 20% change in mean annual runoff over a 100 yr period and a significance level of 95%, the average probability of detecting a significant trend was 28% among the 585 stream gages. The largest probability of detecting a trend was in the northwestern U.S., the Great Lakes region, the northeastern U.S., the Appalachian Mountains, and parts of the northern Rocky Mountains. The smallest probability of trend detection was in the central and southwestern U.S., and in Florida. Low probabilities of trend detection were associated with low ratios of mean annual runoff to the standard deviation of annual runoff and with high lag-1 serial correlation in the data.
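
    The study's detection probabilities were derived from the stream-gage statistics, but the idea can be illustrated with a small Monte Carlo sketch: generate synthetic annual-runoff records with a linear 20% change in the mean over 100 years, apply an ordinary least-squares trend test, and count how often it fires. This sketch uses iid normal noise only (the paper also accounts for lag-1 serial correlation, omitted here for brevity), and all numeric values are hypothetical.

```python
import math
import random

def detect_trend(series, crit=1.96):
    """OLS slope test: True when |t| for the fitted slope exceeds the
    (approximate, large-sample) critical value."""
    n = len(series)
    xs = list(range(n))
    mx = sum(xs) / n
    my = sum(series) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, series))
    b = sxy / sxx                      # fitted slope
    a = my - b * mx                    # fitted intercept
    resid = [y - (a + b * x) for x, y in zip(xs, series)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return abs(b / se) > crit

def detection_probability(mean, sd, change_frac=0.2, years=100,
                          trials=500, seed=1):
    """Fraction of synthetic records (linear change_frac*mean change over
    `years`, iid normal year-to-year noise) in which the trend test fires."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        series = [mean * (1 + change_frac * t / years) + rng.gauss(0, sd)
                  for t in range(years)]
        hits += detect_trend(series)
    return hits / trials
```

    Running this with a high ratio of mean annual runoff to its standard deviation gives near-certain detection, while a low ratio gives a much smaller probability, mirroring the paper's regional pattern.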

  4. An investigation into the impact of question structure on the performance of first year physics undergraduate students at the University of Cambridge

    NASA Astrophysics Data System (ADS)

    Gibson, Valerie; Jardine-Wright, Lisa; Bateman, Elizabeth

    2015-07-01

    We describe a study of the impact of exam question structure on the performance of first year Natural Sciences physics undergraduates from the University of Cambridge. The results show conclusively that a student’s performance improves when questions are scaffolded compared with university style questions. In a group of 77 female students we observe that the average exam mark increases by 13.4% for scaffolded questions, which corresponds to a 4.9 standard deviation effect. The equivalent observation for 236 male students is 9% (5.5 standard deviations). We also observe a correlation between exam performance and A2-level marks for UK students, and that students who receive their school education overseas, in a mixed gender environment, or at an independent school are more likely to receive a first class mark in the exam. These results suggest a mis-match between the problem-solving skills and assessment procedures between school and first year university and will provide key input into the future teaching and assessment of first year undergraduate physics students.

  5. Microwave-assisted rapid preparation of monodisperse superhydrophilic resin microspheres as adsorbent for triazines in fruit juices.

    PubMed

    Zhou, Tianyu; Ding, Jie; Wang, Qiang; Xu, Yuan; Wang, Bo; Zhao, Li; Ding, Hong; Chen, Yanhua; Ding, Lan

    2018-03-01

    Monodisperse superhydrophilic melamine formaldehyde resorcinol resin (MFR) microspheres were prepared in 90 min at 85 °C via a microwave-assisted method with a yield of 60.6%. The obtained MFR microspheres exhibited a narrow size distribution with an average particle size of about 2.5 µm. The MFR microspheres were used as adsorbents to extract triazines from juices, followed by high performance liquid chromatography tandem mass spectrometry. Various factors affecting the extraction efficiency were investigated. Under the optimized conditions, the method exhibited excellent linearity in the range of 1-250 µg L⁻¹ (R² ≥ 0.9994) and low detection limits (0.3-0.65 µg L⁻¹). The relative standard deviations of intra- and inter-day analyses ranged from 3% to 7% and from 2% to 7%, respectively. The method was applied to determine six triazines in three juice samples. At the spiked level of 3 µg L⁻¹, the recoveries were in the range of 90-99% with relative standard deviations ≤ 8%. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Formation of low resistance ohmic contacts in GaN-based high electron mobility transistors with BCl3 surface plasma treatment

    NASA Astrophysics Data System (ADS)

    Fujishima, Tatsuya; Joglekar, Sameer; Piedra, Daniel; Lee, Hyung-Seok; Zhang, Yuhao; Uedono, Akira; Palacios, Tomás

    2013-08-01

    A BCl3 surface plasma treatment technique to reduce the resistance and to increase the uniformity of ohmic contacts in AlGaN/GaN high electron mobility transistors with a GaN cap layer has been established. This BCl3 plasma treatment was performed by an inductively coupled plasma reactive ion etching system under conditions that prevented any recess etching. The average contact resistances without plasma treatment, with SiCl4, and with BCl3 plasma treatment were 0.34, 0.41, and 0.17 Ω mm, respectively. Also, the standard deviation of the ohmic contact resistance with BCl3 plasma treatment was decreased. This decrease in the standard deviation of contact resistance can be explained by analyzing the surface condition of GaN with x-ray photoelectron spectroscopy and positron annihilation spectroscopy. We found that the proposed BCl3 plasma treatment technique can not only remove surface oxide but also introduce surface donor states that contribute to lower the ohmic contact resistance.

  7. Monte Carlo studies of thermalization of electron-hole pairs in spin-polarized degenerate electron gas in monolayer graphene

    NASA Astrophysics Data System (ADS)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2018-02-01

    The Monte Carlo method is applied to the study of relaxation of excited electron-hole (e-h) pairs in graphene. The presence of a background of spin-polarized electrons, with a density high enough to impose degeneracy conditions, is assumed. Into such a system, a number of e-h pairs with spin polarization parallel or antiparallel to the background is injected. Two stages of relaxation, thermalization and cooling, are clearly distinguished when the average particle energy ⟨E⟩ and its standard deviation σ_E are examined. At the very beginning of the thermalization phase, holes lose energy to electrons, and after this process is substantially completed, the particle distributions reorganize to take a Fermi-Dirac shape. To describe the evolution of ⟨E⟩ and σ_E during thermalization, we define characteristic times τ_th and the values at the end of thermalization, E_th and σ_th. The dependence of these parameters on various conditions, such as temperature and background density, is presented. It is shown that among the considered parameters, only the standard deviation of the electron energy allows the different cases of relative spin polarization of the background and excited electrons to be distinguished.

  8. Variability estimation of urban wastewater biodegradable fractions by respirometry.

    PubMed

    Lagarde, Fabienne; Tusseau-Vuillemin, Marie-Hélène; Lessard, Paul; Héduit, Alain; Dutrop, François; Mouchel, Jean-Marie

    2005-11-01

    This paper presents a methodology for assessing the variability of biodegradable chemical oxygen demand (COD) fractions in urban wastewaters. Thirteen raw wastewater samples from combined and separate sewers feeding the same plant were characterised, and two optimisation procedures were applied in order to evaluate the variability in biodegradable fractions and related kinetic parameters. Through an overall optimisation on all the samples, a unique kinetic parameter set was obtained with a three-substrate model including an adsorption stage. This method required powerful numerical treatment, but reduced the identifiability problem compared with the usual sample-to-sample optimisation. The results showed that the fractionation of samples collected in the combined sewer was much more variable (standard deviation of 70% of the mean values) than the fractionation of the separate sewer samples, and the slowly biodegradable COD fraction was the most significant fraction (45% of the total COD on average). Because these samples were collected under various rain conditions, the standard deviations obtained here on the combined sewer biodegradable fractions could be used as a first estimation of the variability of this type of sewer system.

  9. Sparse feature learning for instrument identification: Effects of sampling and pooling methods.

    PubMed

    Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu

    2016-05-01

    Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both proposed sampling methods. To summarize the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47,000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are explored, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
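The three pooling strategies compared in the abstract all reduce a frames-by-dimensions matrix of feature activations to a single clip-level vector. A minimal sketch (the array shapes and toy data are our assumptions, not the paper's setup):

```python
import numpy as np

def pool_features(activations, method="std"):
    """Aggregate frame-level feature activations (frames x dims) into one clip-level vector."""
    if method == "max":
        return activations.max(axis=0)    # max-pooling: strongest activation per feature
    if method == "avg":
        return activations.mean(axis=0)   # average-pooling: mean activation per feature
    if method == "std":
        return activations.std(axis=0)    # standard deviation pooling: per-feature spread
    raise ValueError(f"unknown pooling method: {method}")

rng = np.random.default_rng(0)
frames = rng.random((100, 16))            # e.g. 100 frames of 16 learned features
clip_vector = pool_features(frames, "std")
```

Standard deviation pooling captures how much each learned feature fluctuates over the clip, which is information that max- and average-pooling discard.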

  10. Pricing and hedging derivative securities with neural networks: Bayesian regularization, early stopping, and bagging.

    PubMed

    Gençay, R; Qi, M

    2001-01-01

    We study the effectiveness of cross validation, Bayesian regularization, early stopping, and bagging to mitigate overfitting and improve generalization for pricing and hedging derivative securities with daily S&P 500 index call options from January 1988 to December 1993. Our results indicate that Bayesian regularization can generate significantly smaller pricing and delta-hedging errors than the baseline neural-network (NN) model and the Black-Scholes model for some years. While early stopping does not affect the pricing errors, it significantly reduces the hedging error (HE) in four of the six years we investigated. Although computationally the most demanding, bagging seems to provide the most accurate pricing and delta hedging. Furthermore, the standard deviation of the mean squared prediction error (MSPE) of bagging is far less than that of the baseline model in all six years, and the standard deviation of the average HE of bagging is far less than that of the baseline model in five out of six years. We conclude that these techniques should be used at least in cases where no appropriate hints are available.

  11. Current organ allocation disadvantages kidney alone recipients over combined organ recipients.

    PubMed

    Martin, Michael S; Hagan, Michael E; Granger, Darla K

    2016-03-01

    The United Network for Organ Sharing began including the Kidney Donor Profile Index (KDPI) March 26, 2012 and began a new allocation scheme December 1, 2014. Kidney donors from our organ procurement organization from March 2012 to December 2014 were reviewed. The KDPIs of all 919 kidney only transplants were compared with all 102 kidney/extrarenal transplants. The average KDPI for kidney alone allografts was 47 (range 1 to 100) (standard deviation = 25.83) vs 27 for kidney/extrarenal kidneys (range 1 to 82) (standard deviation = 20.16) (P < .001, t test). Multivariate analysis including in- vs out-of-state recipient, donor body mass index, and donation after cardiac death vs brain-dead donor showed significantly lower KDPI for kidney/extrarenal transplants. Kidney/extrarenal organs have decreased graft survival compared with kidneys transplanted alone. In this sample, 21% of lower KDPI kidneys were allocated as kidney/extrarenal organs. This disadvantages those waiting for a kidney alone. Attention to the outcomes of kidneys transplanted with extrarenal organs is needed. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Using Acoustic Structure Quantification During B-Mode Sonography for Evaluation of Hashimoto Thyroiditis.

    PubMed

    Rhee, Sun Jung; Hong, Hyun Sook; Kim, Chul-Hee; Lee, Eun Hye; Cha, Jang Gyu; Jeong, Sun Hye

    2015-12-01

    This study aimed to evaluate the usefulness of Acoustic Structure Quantification (ASQ; Toshiba Medical Systems Corporation, Nasushiobara, Japan) values in the diagnosis of Hashimoto thyroiditis using B-mode sonography and to identify a cutoff ASQ level that differentiates Hashimoto thyroiditis from normal thyroid tissue. A total of 186 thyroid lobes with Hashimoto thyroiditis and normal thyroid glands underwent sonography with ASQ imaging. The quantitative results were reported in an echo amplitude analysis (Cm(2)) histogram with average, mode, ratio, standard deviation, blue mode, and blue average values. Receiver operating characteristic curve analysis was performed to assess the diagnostic ability of the ASQ values in differentiating Hashimoto thyroiditis from normal thyroid tissue. Intraclass correlation coefficients of the ASQ values were obtained between 2 observers. Of the 186 thyroid lobes, 103 (55%) had Hashimoto thyroiditis, and 83 (45%) were normal. There was a significant difference between the ASQ values of Hashimoto thyroiditis glands and those of normal glands (P < .001). The ASQ values in patients with Hashimoto thyroiditis were significantly greater than those in patients with normal thyroid glands. The areas under the receiver operating characteristic curves for the ratio, blue average, average, blue mode, mode, and standard deviation were: 0.936, 0.902, 0.893, 0.855, 0.846, and 0.842, respectively. The ratio cutoff value of 0.27 offered the best diagnostic performance, with sensitivity of 87.38% and specificity of 95.18%. The intraclass correlation coefficients ranged from 0.86 to 0.94, which indicated substantial agreement between the observers. Acoustic Structure Quantification is a useful and promising sonographic method for diagnosing Hashimoto thyroiditis. Not only could it be a helpful tool for quantifying thyroid echogenicity, but it also would be useful for diagnosis of Hashimoto thyroiditis. 
© 2015 by the American Institute of Ultrasound in Medicine.

  13. Validation of XCO2 derived from SWIR spectra of GOSAT TANSO-FTS with aircraft measurement data

    NASA Astrophysics Data System (ADS)

    Inoue, M.; Morino, I.; Uchino, O.; Miyamoto, Y.; Yoshida, Y.; Yokota, T.; Machida, T.; Sawa, Y.; Matsueda, H.; Sweeney, C.; Tans, P. P.; Andrews, A. E.; Biraud, S. C.; Tanaka, T.; Kawakami, S.; Patra, P. K.

    2013-10-01

    Column-averaged dry air mole fractions of carbon dioxide (XCO2) retrieved from Greenhouse gases Observing SATellite (GOSAT) Short-Wavelength InfraRed (SWIR) observations were validated with aircraft measurements by the Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) project, the National Oceanic and Atmospheric Administration (NOAA), the US Department of Energy (DOE), the National Institute for Environmental Studies (NIES), the HIAPER Pole-to-Pole Observations (HIPPO) program, and the GOSAT validation aircraft observation campaign over Japan. To calculate XCO2 based on aircraft measurements (aircraft-based XCO2), tower measurements and model outputs were used for additional information near the surface and above the tropopause, respectively. Before validation, we investigated the impacts of GOSAT SWIR column averaging kernels (CAKs) and the shape of a priori profiles on the aircraft-based XCO2 calculation. The differences between aircraft-based XCO2 with and without the application of the GOSAT CAK were evaluated to be less than ±0.4 ppm at most, and less than ±0.1 ppm on average. We therefore concluded that the GOSAT CAK has only a minor effect on the aircraft-based XCO2 calculation in terms of the overall uncertainty of GOSAT XCO2. We compared GOSAT data retrieved within ±2° or ±5° latitude/longitude boxes centered at each aircraft measurement site to aircraft-based data measured on a GOSAT overpass day. The results indicated that GOSAT XCO2 over land agreed with aircraft-based XCO2 apart from a bias of -0.68 ppm (-0.99 ppm) with a standard deviation of 2.56 ppm (2.51 ppm), whereas the average difference between GOSAT XCO2 over ocean and aircraft-based XCO2 was -1.82 ppm (-2.27 ppm) with a standard deviation of 1.04 ppm (1.79 ppm) for the ±2° (±5°) boxes.
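Applying a retrieval's column averaging kernel to an in-situ profile follows the standard Rodgers and Connor comparison formula. A minimal sketch, assuming layer pressure weights that sum to one (the variable names are ours, not GOSAT product fields):

```python
import numpy as np

def smoothed_column_average(pressure_weights, averaging_kernel, apriori_ppm, aircraft_ppm):
    """Column average of an aircraft profile as the satellite retrieval would see it:
    a priori column plus kernel-weighted departures of the measured profile from the a priori."""
    h = np.asarray(pressure_weights, dtype=float)   # per-layer pressure weights, sum to 1
    a = np.asarray(averaging_kernel, dtype=float)   # column averaging kernel per layer
    xa = np.asarray(apriori_ppm, dtype=float)       # a priori CO2 profile (ppm)
    xm = np.asarray(aircraft_ppm, dtype=float)      # aircraft-measured CO2 profile (ppm)
    return np.sum(h * xa) + np.sum(h * a * (xm - xa))
```

When the kernel is unity at every layer the result reduces to the plain pressure-weighted column of the aircraft profile, which is why the abstract finds the CAK correction to be small (below ±0.4 ppm).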

  14. Predicting Accommodative Response Using Paraxial Schematic Eye Models

    PubMed Central

    Ramasubramanian, Viswanathan; Glasser, Adrian

    2016-01-01

    Purpose Prior ultrasound biomicroscopy (UBM) studies showed that accommodative optical response (AOR) can be predicted from accommodative biometric changes in a young and a pre-presbyopic population from linear relationships between accommodative optical and biometric changes, with a standard deviation of less than 0.55D. Here, paraxial schematic eyes (SE) were constructed from measured accommodative ocular biometry parameters to see if predictions are improved. Methods Measured ocular biometry (OCT, A-scan and UBM) parameters from 24 young and 24 pre-presbyopic subjects were used to construct paraxial SEs for each individual subject (individual SEs) for three different lens equivalent refractive index methods. Refraction and AOR calculated from the individual SEs were compared with Grand Seiko (GS) autorefractor measured refraction and AOR. Refraction and AOR were also calculated from individual SEs constructed using the average population accommodative change in UBM measured parameters (average SEs). Results Schematic eye calculated and GS measured AOR were linearly related (young subjects: slope = 0.77; r2 = 0.86; pre-presbyopic subjects: slope = 0.64; r2 = 0.55). The mean difference in AOR (GS - individual SEs) for the young subjects was −0.27D and for the pre-presbyopic subjects was 0.33D. For individual SEs, the mean ± SD of the absolute differences in AOR between the GS and SEs was 0.50 ± 0.39D for the young subjects and 0.50 ± 0.37D for the pre-presbyopic subjects. For average SEs, the mean ± SD of the absolute differences in AOR between the GS and the SEs was 0.77 ± 0.88D for the young subjects and 0.51 ± 0.49D for the pre-presbyopic subjects. Conclusions Individual paraxial SEs predict AOR, on average, with a standard deviation of 0.50D in young and pre-presbyopic subject populations. Although this prediction is only marginally better than from individual linear regressions, it does consider all the ocular biometric parameters. PMID:27092928

  15. Validation of XCH4 derived from SWIR spectra of GOSAT TANSO-FTS with aircraft measurement data

    NASA Astrophysics Data System (ADS)

    Inoue, M.; Morino, I.; Uchino, O.; Miyamoto, Y.; Saeki, T.; Yoshida, Y.; Yokota, T.; Sweeney, C.; Tans, P. P.; Biraud, S. C.; Machida, T.; Pittman, J. V.; Kort, E. A.; Tanaka, T.; Kawakami, S.; Sawa, Y.; Tsuboi, K.; Matsueda, H.

    2014-09-01

    Column-averaged dry-air mole fractions of methane (XCH4), retrieved from Greenhouse gases Observing SATellite (GOSAT) short-wavelength infrared (SWIR) spectra, were validated by using aircraft measurement data from the National Oceanic and Atmospheric Administration (NOAA), the US Department of Energy (DOE), the National Institute for Environmental Studies (NIES), the HIAPER Pole-to-Pole Observations (HIPPO) program, and the GOSAT validation aircraft observation campaign over Japan. In the calculation of XCH4 from aircraft measurements (aircraft-based XCH4), other satellite data were used for the CH4 profiles above the tropopause. We proposed a data-screening scheme for aircraft-based XCH4 for reliable validation of GOSAT XCH4. Further, we examined the impact of GOSAT SWIR column averaging kernels (CAK) on the aircraft-based XCH4 calculation and found that the difference between aircraft-based XCH4 with and without the application of the GOSAT CAK was less than ±9 ppb at maximum, with an average difference of -0.5 ppb. We compared GOSAT XCH4 Ver. 02.00 data retrieved within ±2° or ±5° latitude-longitude boxes centered at each aircraft measurement site with aircraft-based XCH4 measured on a GOSAT overpass day. In general, GOSAT XCH4 was in good agreement with aircraft-based XCH4. However, over land, the GOSAT data showed a positive bias of 1.5 ppb (2.0 ppb) with a standard deviation of 14.9 ppb (16.0 ppb) within the ±2° (±5°) boxes, and over ocean, the average bias was 4.1 ppb (6.5 ppb) with a standard deviation of 9.4 ppb (8.8 ppb) within the ±2° (±5°) boxes. In addition, we obtained similar results when we used an aircraft-based XCH4 time series obtained by curve fitting with temporal interpolation for comparison with GOSAT data.

  16. SU-F-BRA-01: A Procedure for the Fast Semi-Automatic Localization of Catheters Using An Electromagnetic Tracker (EMT) for Image-Guided Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, A; Viswanathan, A; Cormack, R

    2015-06-15

    Purpose: To evaluate the feasibility of brachytherapy catheter localization through use of an EMT and a 3D image set. Methods: A 15-catheter phantom mimicking an interstitial implantation was built and CT-scanned. Baseline catheter reconstruction was performed manually. An EMT was used to acquire the catheter coordinates in the EMT frame of reference. N user-identified catheter tips, without catheter number associations, were used to establish registration with the CT frame of reference. Two algorithms were investigated: brute-force registration (BFR), in which all possible permutations of the N identified tips with the EMT tips were evaluated; and signature-based registration (SBR), in which a distance matrix was used to generate a list of matching signatures describing possible N-point matches with the registration points. Digitization error (average of the distance between corresponding EMT and baseline dwell positions; average, standard deviation, and worst-case scenario over all possible registration-point selections) and algorithm inefficiency (maximum number of rigid registrations required to find the matching fusion for all possible selections of registration points) were calculated. Results: Digitization errors on average <2 mm were observed for N ≥ 5, with standard deviation <2 mm for N ≥ 6, and worst-case scenario error <2 mm for N ≥ 11. Algorithm inefficiencies were: N = 5, 32,760 (BFR) and 9,900 (SBR); N = 6, 360,360 (BFR) and 21,660 (SBR); N = 11, 5.45×10^10 (BFR) and 12 (SBR). Conclusion: A procedure was proposed for catheter reconstruction using EMT that requires only user identification of catheter tips, without catheter number assignment. Digitization errors <2 mm were observed on average with 5 or more registration points, and in any scenario with 11 or more points. Inefficiency for N = 11 was 9 orders of magnitude lower for SBR than for BFR. Funding: Kaye Family Award.
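Each candidate assignment of user-identified tips to EMT tips can be scored with a least-squares rigid registration; a common choice is the Kabsch (SVD) algorithm, sketched below. This illustrates the registration step only, not the authors' implementation:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch): rotation R and translation t mapping src -> dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)              # cross-covariance of centered point sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def rms_error(src, dst, R, t):
    """RMS residual of the registration, the quantity a brute-force search would minimize."""
    return np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1)))
```

Brute-force registration would call `rigid_register` once per permutation of candidate tip assignments and keep the assignment with the lowest RMS residual, which is why its cost grows factorially with N while the signature-based pruning stays cheap.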

  17. Climatology of Neutral vertical winds in the midlatitude thermosphere

    NASA Astrophysics Data System (ADS)

    Kerr, R.; Kapali, S.; Riccobono, J.; Migliozzi, M. A.; Noto, J.; Brum, C. G. M.; Garcia, R.

    2017-12-01

    More than one thousand measurements of neutral vertical winds, relative to an assumed average of 0 m/s during a nighttime period, have been made at Arecibo Observatory and the Millstone Hill Optical Facility since 2012, using imaging Fabry-Perot interferometers. These instruments, tuned to the 630 nm OI emission, are carefully calibrated for instrumental frequency drift using frequency stabilized lasers, allowing isolation of Doppler motion in the zenith with 1-2 m/s accuracy. As one example of the results, relative vertical winds at Arecibo during quiet geomagnetic conditions near winter solstice 2016, range ±70 m/s and have a one standard deviation statistical variability of ±34 m/s. This compares with a ±53 m/s deviation from the average meridional wind, and a ±56 m/s deviation from the average zonal wind measured during the same period. Vertical neutral wind velocities for all periods range from roughly 30% - 60% of the horizontal velocity domain at Arecibo. At Millstone Hill, the vertical velocities relative to horizontal velocities are similar, but slightly smaller. The midnight temperature maximum at Arecibo is usually correlated with a surge in the upward wind, and vertical wind excursions of more than 80 m/s are common during magnetic storms at both sites. Until this compilation of vertical wind climatology, vertical motions of the neutral atmosphere outside of the auroral zone have generally been assumed to be very small compared to horizontal transport. In fact, excursions from small vertical velocities in the mid-latitude thermosphere near the F2 ionospheric peak are common, and are not isolated events associated with unsettled geomagnetic conditions or other special dynamic conditions.

  18. Flexner 2.0-Longitudinal Study of Student Participation in a Campus-Wide General Pathology Course for Graduate Students at The University of Arizona.

    PubMed

    Briehl, Margaret M; Nelson, Mark A; Krupinski, Elizabeth A; Erps, Kristine A; Holcomb, Michael J; Weinstein, John B; Weinstein, Ronald S

    2016-01-01

    Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, "Mechanisms of Human Disease." Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master's: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises.

  19. Flexner 2.0—Longitudinal Study of Student Participation in a Campus-Wide General Pathology Course for Graduate Students at The University of Arizona

    PubMed Central

    Briehl, Margaret M.; Nelson, Mark A.; Krupinski, Elizabeth A.; Erps, Kristine A.; Holcomb, Michael J.; Weinstein, John B.

    2016-01-01

    Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, “Mechanisms of Human Disease.” Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master’s: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises. PMID:28725783

  20. Activation of N-methyl-d-aspartate receptors reduces heart rate variability and facilitates atrial fibrillation in rats.

    PubMed

    Shi, Shaobo; Liu, Tao; Wang, Dandan; Zhang, Yan; Liang, Jinjun; Yang, Bo; Hu, Dan

    2017-07-01

    The goal of this study was to assess the effects of N-methyl-d-aspartate (NMDA) receptor activation on heart rate variability (HRV) and susceptibility to atrial fibrillation (AF). Rats were randomized for treatment with saline, NMDA (agonist of NMDA receptors), or NMDA plus MK-801 (antagonist of NMDA receptors) for 2 weeks. Heart rate variability was evaluated by using implantable electrocardiogram telemeters. Atrial fibrillation susceptibility was assessed with programmed stimulation in isolated hearts. Compared with the controls, the NMDA-treated rats displayed a decrease in the standard deviation of normal RR intervals, the standard deviation of the average RR intervals, the mean of the 5-min standard deviations of RR intervals, the root mean square of successive differences, and high frequency (HF), and an increase in low frequency (LF) and the LF/HF ratio (all P < 0.01). Additionally, the NMDA-treated rats showed prolonged activation latency and a reduced effective refractory period (all P < 0.01). Importantly, AF was induced in all NMDA-treated rats. Meanwhile, atrial fibrosis developed, connexin40 was downregulated, and metalloproteinase 9 was upregulated in the NMDA-treated rats (all P < 0.01). Most of the above alterations were mitigated by co-administration of MK-801. These results indicate that NMDA receptor activation reduces HRV and enhances AF inducibility, with cardiac autonomic imbalance, atrial fibrosis, and degradation of gap junction protein identified as potential mechanistic contributors. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.
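The time-domain HRV measures listed (SD of all RR intervals, SD of the segment-average RR, mean of the per-segment SDs, and root mean square of successive differences) can all be computed directly from an RR-interval series. A minimal sketch on synthetic data; the segment handling and the ~200 ms rat RR mean are our assumptions:

```python
import numpy as np

def hrv_time_domain(rr_ms, segment_s=300):
    """Standard time-domain HRV indices from an RR-interval series in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                              # SD of all RR intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))         # RMS of successive differences
    # split the series into consecutive ~segment_s-second segments (5 min by default)
    total_s = np.sum(rr) / 1000.0
    edges = np.searchsorted(np.cumsum(rr) / 1000.0,
                            np.arange(segment_s, total_s, segment_s))
    segments = [s for s in np.split(rr, edges) if len(s) > 1]
    sdann = np.std([s.mean() for s in segments], ddof=1)      # SD of segment-average RR
    sdnn_index = np.mean([s.std(ddof=1) for s in segments])   # mean of segment SDs
    return {"SDNN": sdnn, "RMSSD": rmssd, "SDANN": sdann, "SDNN_index": sdnn_index}

rng = np.random.default_rng(0)
rr = rng.normal(200.0, 10.0, 5000)   # synthetic rat RR intervals (ms), illustrative only
indices = hrv_time_domain(rr)
```

In the study's terms, a parasympathetic withdrawal shows up as decreases in SDNN, the SDNN index, and RMSSD, exactly the pattern reported for the NMDA-treated rats.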

  1. Evaluation of Two New Indices of Blood Pressure Variability Using Postural Change in Older Fallers.

    PubMed

    Goh, Choon-Hian; Ng, Siew-Cheok; Kamaruzzaman, Shahrul B; Chin, Ai-Vyrn; Poi, Philip J H; Chee, Kok Han; Imran, Z Abidin; Tan, Maw Pin

    2016-05-01

    To evaluate the utility of blood pressure variability (BPV) calculated using previously published and newly introduced indices, with falls and age as comparator variables. While postural hypotension has long been considered a risk factor for falls, there is currently no documented evidence on the relationship between BPV and falls. A case-control study involving 25 fallers and 25 nonfallers was conducted. Systolic (SBPV) and diastolic blood pressure variability (DBPV) were assessed using 5 indices: standard deviation (SD), standard deviation of the most stable continuous 120 beats (staSD), average real variability (ARV), root mean square of real variability (RMSRV), and standard deviation of real variability (SDRV). Continuous beat-to-beat blood pressure was recorded during 10 minutes of supine rest and 3 minutes of standing. Standing SBPV was significantly higher than supine SBPV using 4 indices in both groups. The standing-to-supine BPV ratio (SSR) was then computed for each subject (staSD, ARV, RMSRV, and SDRV). The standing-to-supine ratio for SBPV was significantly higher among fallers compared to nonfallers using RMSRV and SDRV (P = 0.034 and P = 0.025). Using linear discriminant analysis (LDA), 3 indices (ARV, RMSRV, and SDRV) of SSR SBPV provided accuracies of 61.6%, 61.2%, and 60.0% for the prediction of falls, which is comparable with the timed up and go test (TUG), 64.4%. This study suggests that SSR SBPV using RMSRV and SDRV is a potential predictor of falls among older patients and deserves further evaluation in larger prospective studies.
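The "real variability" indices are all built from successive beat-to-beat differences; ARV is the established average real variability, and we assume RMSRV and SDRV are defined analogously as the root mean square and the standard deviation of those differences (the abstract does not spell out the formulas):

```python
import numpy as np

def bpv_indices(sbp):
    """Beat-to-beat blood pressure variability indices (definitions assumed as noted above)."""
    x = np.asarray(sbp, dtype=float)
    d = np.diff(x)                           # successive beat-to-beat differences
    return {
        "SD":    x.std(ddof=1),              # standard deviation of all beats
        "ARV":   np.mean(np.abs(d)),         # average real variability
        "RMSRV": np.sqrt(np.mean(d ** 2)),   # root mean square of real variability
        "SDRV":  d.std(ddof=1),              # standard deviation of real variability
    }

def standing_to_supine_ratio(standing_sbp, supine_sbp, index="RMSRV"):
    """SSR: the same index computed standing, divided by its supine value."""
    return bpv_indices(standing_sbp)[index] / bpv_indices(supine_sbp)[index]
```

The SSR normalizes each subject's standing variability by their own supine baseline, which is what lets it separate fallers from nonfallers despite large between-subject differences in absolute BPV.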

  2. Site-specific 13C content by quantitative isotopic 13C nuclear magnetic resonance spectrometry: a pilot inter-laboratory study.

    PubMed

    Chaintreau, Alain; Fieber, Wolfgang; Sommer, Horst; Gilbert, Alexis; Yamada, Keita; Yoshida, Naohiro; Pagelot, Alain; Moskau, Detlef; Moreno, Aitor; Schleucher, Jürgen; Reniero, Fabiano; Holland, Margaret; Guillou, Claude; Silvestre, Virginie; Akoka, Serge; Remaud, Gérald S

    2013-07-25

    Isotopic (13)C NMR spectrometry, which can measure intra-molecular (13)C composition, is in growing demand because of the new information provided by the (13)C site-specific content of a given molecule. A systematic evaluation of instrumental behaviour is important if isotopic (13)C NMR is to be envisaged as a routine tool. This paper describes the first collaborative study of intra-molecular (13)C composition by NMR. The main goals of the ring test were to establish intra- and inter-variability of the spectrometer response. Eight instruments with different configurations were retained for the exercise on the basis of a qualification test. Reproducibility of isotopic (13)C NMR at natural abundance was then assessed on vanillin from three different origins associated with specific δ(13)Ci profiles. The standard deviation was, on average, between 0.9 and 1.2‰ for intra-variability. The highest standard deviation for inter-variability was 2.1‰. This is significantly higher than the internal precision but could be considered good for a first ring test of a new analytical method. The standard deviation of δ(13)Ci in vanillin was not homogeneous over the eight carbons, with no trend for either the carbon position or the configuration of the spectrometer. However, since the repeatability for each instrument was satisfactory, correction factors for each carbon in vanillin could be calculated to harmonize the results. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. On the Distribution of Protein Refractive Index Increments

    PubMed Central

    Zhao, Huaying; Brown, Patrick H.; Schuck, Peter

    2011-01-01

    The protein refractive index increment, dn/dc, is an important parameter underlying the concentration determination and the biophysical characterization of proteins and protein complexes in many techniques. In this study, we examine the widely used assumption that most proteins have dn/dc values in a very narrow range, and reappraise the prediction of dn/dc of unmodified proteins based on their amino acid composition. Applying this approach in large scale to the entire set of known and predicted human proteins, we obtain, for the first time, to our knowledge, an estimate of the full distribution of protein dn/dc values. The distribution is close to Gaussian with a mean of 0.190 ml/g (for unmodified proteins at 589 nm) and a standard deviation of 0.003 ml/g. However, small proteins <10 kDa exhibit a larger spread, and almost 3000 proteins have values deviating by more than two standard deviations from the mean. Due to the widespread availability of protein sequences and the potential for outliers, the compositional prediction should be convenient and provide greater accuracy than an average consensus value for all proteins. We discuss how this approach should be particularly valuable for certain protein classes where a high dn/dc is coincidental to structural features, or may be functionally relevant such as in proteins of the eye. PMID:21539801
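The compositional prediction described here amounts to a mass-fraction-weighted average of residue-specific refractive index increments. A sketch under that assumption; the per-residue dn/dc values below are illustrative placeholders, not the published table:

```python
# Residue-level dn/dc values (ml/g) -- HYPOTHETICAL placeholders for illustration only.
RESIDUE_DNDC = {"G": 0.175, "A": 0.167, "W": 0.227, "Y": 0.240, "K": 0.181}
# Residue (amino acid minus water) masses in g/mol.
RESIDUE_MASS = {"G": 57.05, "A": 71.08, "W": 186.21, "Y": 163.18, "K": 128.17}

def predict_dndc(sequence):
    """Predict protein dn/dc as the mass-fraction-weighted mean of residue increments."""
    masses = [RESIDUE_MASS[aa] for aa in sequence]
    total_mass = sum(masses)
    weighted = sum(RESIDUE_DNDC[aa] * m for aa, m in zip(sequence, masses))
    return weighted / total_mass
```

Because the prediction is a weighted mean, sequences rich in high-increment aromatic residues (e.g. Trp, Tyr) land above the population average, which is consistent with the outliers the paper highlights.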

  4. On the distribution of protein refractive index increments.

    PubMed

    Zhao, Huaying; Brown, Patrick H; Schuck, Peter

    2011-05-04

    The protein refractive index increment, dn/dc, is an important parameter underlying the concentration determination and the biophysical characterization of proteins and protein complexes in many techniques. In this study, we examine the widely used assumption that most proteins have dn/dc values in a very narrow range, and reappraise the prediction of dn/dc of unmodified proteins based on their amino acid composition. Applying this approach in large scale to the entire set of known and predicted human proteins, we obtain, for the first time, to our knowledge, an estimate of the full distribution of protein dn/dc values. The distribution is close to Gaussian with a mean of 0.190 ml/g (for unmodified proteins at 589 nm) and a standard deviation of 0.003 ml/g. However, small proteins <10 kDa exhibit a larger spread, and almost 3000 proteins have values deviating by more than two standard deviations from the mean. Due to the widespread availability of protein sequences and the potential for outliers, the compositional prediction should be convenient and provide greater accuracy than an average consensus value for all proteins. We discuss how this approach should be particularly valuable for certain protein classes where a high dn/dc is coincidental to structural features, or may be functionally relevant such as in proteins of the eye. Copyright © 2011 Biophysical Society. Published by Elsevier Inc. All rights reserved.
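
The compositional prediction discussed in this record is, in essence, a mass-weighted average of residue-specific refractive index increments over the protein sequence. A minimal sketch, with placeholder per-residue values that are not the published table:

```python
# Illustrative per-residue refractive index increments (ml/g) and residue
# masses (Da); the numbers below are placeholders, not the published values.
residue_dndc = {"G": 0.175, "A": 0.167, "W": 0.277, "Y": 0.240, "K": 0.181}
residue_mass = {"G": 57.05, "A": 71.08, "W": 186.21, "Y": 163.18, "K": 128.17}

def predict_dndc(sequence: str) -> float:
    """Mass-weighted average of residue dn/dc values over the sequence."""
    total_mass = sum(residue_mass[r] for r in sequence)
    weighted = sum(residue_mass[r] * residue_dndc[r] for r in sequence)
    return weighted / total_mass

print(round(predict_dndc("GAWYKA"), 3))
```

Residues with aromatic side chains (e.g. Trp, Tyr) pull the average up, which is why proteins rich in them are among the outliers the authors discuss.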

  5. Increasing market efficiency in the stock markets

    NASA Astrophysics Data System (ADS)

    Yang, Jae-Suk; Kwak, Wooseop; Kaizoji, Taisei; Kim, In-Mook

    2008-01-01

    We study the temporal evolution of three stock markets: the Standard and Poor's 500 index, the Nikkei 225 Stock Average, and the Korea Composite Stock Price Index. We observe that the probability density function of the log-return has a fat tail, but the tail index has been increasing continuously in recent years. We have also found that the variance of the autocorrelation function, the scaling exponent of the standard deviation, and the statistical complexity decrease, while the entropy density increases, over time. We introduce a modified microscopic spin model and simulate it to confirm these increasing and decreasing tendencies in the statistical quantities. These findings indicate that the three stock markets are becoming more efficient.
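
One of the quantities tracked above, the scaling exponent of the standard deviation, can be illustrated with a short sketch: for i.i.d. log-returns (the efficient-market benchmark) the standard deviation of aggregated returns grows as the square root of the time scale, so the log-log slope is 0.5. Deviations from 0.5 indicate persistence or anti-persistence.

```python
import numpy as np

rng = np.random.default_rng(6)
r = rng.normal(0.0, 1.0, 2 ** 14)    # i.i.d. log-returns: efficient benchmark

# Standard deviation of aggregated returns at increasing time scales; the
# log-log slope is the scaling exponent (0.5 for an uncorrelated random walk).
scales = np.array([1, 2, 4, 8, 16, 32])
sds = [r[: len(r) // s * s].reshape(-1, s).sum(axis=1).std() for s in scales]
slope = np.polyfit(np.log(scales), np.log(sds), 1)[0]
print(f"scaling exponent ~ {slope:.2f}")
```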

  6. Twenty-Four-Hour Blood Pressure Monitoring to Predict and Assess Impact of Renal Denervation: The DENERHTN Study (Renal Denervation for Hypertension).

    PubMed

    Gosse, Philippe; Cremer, Antoine; Pereira, Helena; Bobrie, Guillaume; Chatellier, Gilles; Chamontin, Bernard; Courand, Pierre-Yves; Delsart, Pascal; Denolle, Thierry; Dourmap, Caroline; Ferrari, Emile; Girerd, Xavier; Michel Halimi, Jean; Herpin, Daniel; Lantelme, Pierre; Monge, Matthieu; Mounier-Vehier, Claire; Mourad, Jean-Jacques; Ormezzano, Olivier; Ribstein, Jean; Rossignol, Patrick; Sapoval, Marc; Vaïsse, Bernard; Zannad, Faiez; Azizi, Michel

    2017-03-01

    The DENERHTN trial (Renal Denervation for Hypertension) confirmed the blood pressure (BP) lowering efficacy of renal denervation added to a standardized stepped-care antihypertensive treatment for resistant hypertension at 6 months. We report here the effect of denervation on 24-hour BP and its variability and look for parameters that predicted the BP response. Patients with resistant hypertension were randomly assigned to denervation plus stepped-care treatment or treatment alone (control). Average and standard deviation of 24-hour, daytime, and nighttime BP and the smoothness index were calculated on recordings performed at randomization and 6 months. Responders were defined as a 6-month 24-hour systolic BP reduction ≥20 mm Hg. Analyses were performed on the per-protocol population. The significantly greater BP reduction in the denervation group was associated with a higher smoothness index ( P =0.02). Variability of 24-hour, daytime, and nighttime BP did not change significantly from baseline to 6 months in either group. The number of responders was greater in the denervation (20/44, 45.5%) than in the control group (11/53, 20.8%; P =0.01). In the discriminant analysis, baseline average nighttime systolic BP and standard deviation were significant predictors of the systolic BP response in the denervation group only, allowing adequate responder classification of 70% of the patients. Our results show that denervation lowers ambulatory BP homogeneously over 24 hours in patients with resistant hypertension and suggest that nighttime systolic BP and variability are predictors of the BP response to denervation. URL: https://www.clinicaltrials.gov. Unique identifier: NCT01570777. © 2017 American Heart Association, Inc.
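
The smoothness index mentioned above is commonly computed as the mean of the hourly treatment-induced BP reductions divided by their standard deviation, so a homogeneous 24-hour effect yields a high index. A minimal sketch on synthetic hourly profiles (the definition is the usual one from the ambulatory BP literature; all numbers are invented):

```python
import numpy as np

def smoothness_index(baseline: np.ndarray, followup: np.ndarray) -> float:
    """Smoothness index for 24-h ambulatory BP: mean of the hourly BP
    reductions divided by their standard deviation."""
    delta = baseline - followup            # hourly treatment-induced changes
    return delta.mean() / delta.std(ddof=1)

hours = np.arange(24)
rng = np.random.default_rng(1)
baseline = 160 + 10 * np.sin(hours / 24 * 2 * np.pi) + rng.normal(0, 3, 24)
followup = baseline - 20 + rng.normal(0, 5, 24)   # homogeneous ~20 mm Hg drop

si = smoothness_index(baseline, followup)
print(f"smoothness index {si:.2f}")
```

A treatment whose effect fluctuates strongly across the day would yield a smaller index for the same average reduction.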

  7. A QSPR model for prediction of diffusion coefficient of non-electrolyte organic compounds in air at ambient condition.

    PubMed

    Mirkhani, Seyyed Alireza; Gharagheizi, Farhad; Sattari, Mehdi

    2012-03-01

    Evaluation of diffusion coefficients of pure compounds in air is of great interest for many diverse industrial and air quality control applications. In this communication, a QSPR method is applied to predict the molecular diffusivity of chemical compounds in air at 298.15 K and atmospheric pressure. Four thousand five hundred and seventy-nine organic compounds from a broad spectrum of chemical families have been investigated to propose a comprehensive and predictive model. The final model is derived by Genetic Function Approximation (GFA) and contains five descriptors. Using this dedicated model, we obtain satisfactory results quantified by the following statistics: Squared Correlation Coefficient=0.9723, Standard Deviation Error=0.003 and Average Absolute Relative Deviation=0.3% for the predicted properties relative to existing experimental values. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. [Biomechanical significance of the acetabular roof and its reaction to mechanical injury].

    PubMed

    Domazet, N; Starović, D; Nedeljković, R

    1999-01-01

    The introduction of morphometry into the quantitative analysis of the bone system and functional adaptation of acetabulum to mechanical damages and injuries enabled a relatively simple and acceptable examination of morphological acetabular changes in patients with damaged hip joints. Measurements of the depth and form of acetabulum can be done by radiological methods, computerized tomography and ultrasound (1-9). The aim of the study was to obtain data on the behaviour of the acetabular roof, the so-called "eyebrow", by morphometric analyses during different mechanical injuries. Clinical studies of the effect of different loads on the acetabular roof were carried out in 741 patients. Radiographic findings of 400 men and 341 women were analysed. The control group was composed of 148 patients with normal hip joints. Average age of the patients was 54.7 years and that of control subjects 52.0 years. Data processing was done for all examined patients. On the basis of our measurements the average size of the female "eyebrow" ranged from 24.8 mm to 31.5 mm with a standard deviation of 0.93, and in men from 29.4 mm to 40.3 mm with a standard deviation of 1.54. The average size in the whole population was 32.1 mm with a standard deviation of 15.61. Statistical analyses revealed a significant correlation between age and "eyebrow" size in men (r = 0.124; p < 0.05); the relationship was inverse (Graph 1). However, in female patients the correlation coefficient was not statistically significant (r = 0.060; p > 0.05). The examination of the size of the collodiaphysial angle and the length of the "eyebrow" revealed that "eyebrow" length was in inverse proportion to the size of the collodiaphysial angle (r = 0.113; p < 0.05). The average "eyebrow" length in relation to the size of the collodiaphysial angle ranged from 21.3 mm to 35.2 mm with a standard deviation of 1.60.
There was no statistically significant correlation between the "eyebrow" size and Wiberg's angle in male (r = 0.049; p > 0.05) and female (r = 0.005; p > 0.05) patients. The "eyebrow" length was proportionally dependent on the size of the shortened extremity in all examined subjects. This dependence was statistically significant both in female (r = 0.208; p < 0.05) and male (r = 0.193; p < 0.05) patients. The study revealed that fossa acetabuli was directed forward, downward and laterally. The size, form and cross-section of acetabulum changed during different loads. Dimensions and morphological changes in acetabulum showed some, though unimportant, changes in comparison with the control group. These findings are graphically presented in Figure 5 and numerically in Tables 1 and 2. The study of the spatial orientation of hip joints confirmed that fossa acetabuli was directed forward, downward and laterally; this was in accordance with the results of other authors (1, 7, 9, 15, 18). There was a statistically significant difference in relation to the "eyebrow" size between patients and normal subjects (t = 3.88; p < 0.05). The average difference of "eyebrow" size was 6.892 mm. A larger "eyebrow" was found in patients with a normally loaded hip. There was also a significant difference in "eyebrow" size between patients and healthy female subjects (t = 4.605; p < 0.05). A larger "eyebrow" of 8.79 mm was found in female subjects with a normally loaded hip. On the basis of our study it can be concluded that the findings related to changes in the acetabular roof, the so-called "eyebrow", are important in the diagnosis, follow-up and therapy of the pathogenetic processes of these disorders.

  9. Preconditioning of Interplanetary Space Due to Transient CME Disturbances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temmer, M.; Reiss, M. A.; Hofmeister, S. J.

    Interplanetary space is characteristically structured mainly by high-speed solar wind streams emanating from coronal holes and transient disturbances such as coronal mass ejections (CMEs). While high-speed solar wind streams pose a continuous outflow, CMEs abruptly disrupt the rather steady structure, causing large deviations from the quiet solar wind conditions. For the first time, we give a quantification of the duration of disturbed conditions (preconditioning) for interplanetary space caused by CMEs. To this aim, we investigate the plasma speed component of the solar wind and the impact of in situ detected interplanetary CMEs (ICMEs), compared to different background solar wind models (ESWF, WSA, persistence model) for the time range 2011–2015. We quantify in terms of standard error measures the deviations between modeled background solar wind speed and observed solar wind speed. Using the mean absolute error, we obtain an average deviation for quiet solar activity within a range of 75.1–83.1 km s−1. Compared to this baseline level, periods within the ICME interval showed an increase of 18%–32% above the expected background, and the period of two days after the ICME displayed an increase of 9%–24%. We obtain a total duration of enhanced deviations over about three and up to six days after the ICME start, which is much longer than the average duration of an ICME disturbance itself (∼1.3 days), concluding that interplanetary space needs ∼2–5 days to recover from the impact of ICMEs. The obtained results have strong implications for studying CME propagation behavior and also for space weather forecasting.

  10. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods were investigated: linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis (PARAFAC) to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set, and the standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. Upon applying the interpolation techniques to the experimental data, however, most of the methods produced relative peak areas that were not statistically different from each other, although performance was improved relative to the PARAFAC results obtained from the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
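
To give a feel for why Gaussian fitting can outperform simpler schemes on an undersampled peak, the sketch below samples a Gaussian first-dimension peak coarsely, reconstructs it by linear interpolation and by Gaussian fitting (here via a log-parabola fit, one simple way to fit a Gaussian; the study's exact fitting procedure is not reproduced), and compares the recovered peak areas:

```python
import numpy as np

# Undersampled first-dimension peak: a Gaussian sampled every 20 s, as when
# dimension 1 is only sampled once per second-dimension cycle in LCxLC.
def peak(t):
    return 5.0 * np.exp(-0.5 * ((t - 97.0) / 8.0) ** 2)

t_fine = np.linspace(0.0, 200.0, 2001)   # fine grid, step 0.1 s
dt = t_fine[1] - t_fine[0]
t_samp = np.arange(0.0, 201.0, 20.0)
y_samp = peak(t_samp)

# Linear interpolation onto the fine grid.
y_lin = np.interp(t_fine, t_samp, y_samp)

# Gaussian fitting: a Gaussian is a parabola in log space, so fit log(y)
# with a quadratic over the informative samples, then exponentiate.
mask = y_samp > 1e-6
a, b, c = np.polyfit(t_samp[mask], np.log(y_samp[mask]), 2)
y_gau = np.exp(a * t_fine**2 + b * t_fine + c)

area_true = peak(t_fine).sum() * dt
err_lin = abs(y_lin.sum() * dt - area_true) / area_true
err_gau = abs(y_gau.sum() * dt - area_true) / area_true
print(f"linear area error {err_lin:.2%}, Gaussian-fit area error {err_gau:.2%}")
```

Because the model matches the peak shape, the fit recovers the area almost exactly even from very few samples, while linear interpolation clips the peak apex.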

  11. Measuring real-time streamflow using emerging technologies: Radar, hydroacoustics, and the probability concept

    NASA Astrophysics Data System (ADS)

    Fulton, John; Ostrowski, Joseph

    2008-07-01

    Forecasting streamflow during extreme hydrologic events such as floods can be problematic. This is particularly true when flow is unsteady, and river forecasts rely on models that require uniform-flow rating curves to route water from one forecast point to another. As a result, alternative methods for measuring streamflow are needed to properly route flood waves and account for inertial and pressure forces in natural channels dominated by nonuniform-flow conditions such as mild water surface slopes, backwater, tributary inflows, and reservoir operations. The objective of the demonstration was to use emerging technologies to measure instantaneous streamflow in open channels at two existing US Geological Survey streamflow-gaging stations in Pennsylvania. Surface-water and instream-point velocities were measured using hand-held radar and hydroacoustics. Streamflow was computed using the probability concept, which requires velocity data from a single vertical containing the maximum instream velocity. The percent difference in streamflow at the Susquehanna River at Bloomsburg, PA ranged from 0% to 8% with an average difference of 4% and standard deviation of 8.81 m3/s. The percent difference in streamflow at Chartiers Creek at Carnegie, PA ranged from 0% to 11% with an average difference of 5% and standard deviation of 0.28 m3/s. New generation equipment is being tested and developed to advance the use of radar-derived surface-water velocity and instantaneous streamflow to facilitate the collection and transmission of real-time streamflow that can be used to parameterize hydraulic routing models.

  12. Moving to Opportunity and Mental Health: Exploring the Spatial Context of Neighborhood Effects

    PubMed Central

    Arcaya, Mariana C; Diez Roux, Ana V.

    2016-01-01

    Studies of housing mobility and neighborhood effects on health often treat neighborhoods as if they were isolated islands. This paper argues that conceptualizing neighborhoods as part of the wider spatial context within which they are embedded may be key in advancing our understanding of the role of local context in the life of urban dwellers. Analyses are based on mental health and neighborhood context measurements taken on over 3,000 low-income families who participated in the Moving to Opportunity for Fair Housing Demonstration Program (MTO), a large field experiment in five major U.S. cities. Results from analyses of two survey waves combined with Census data at different geographic scales indicate that assignment to MTO's experimental condition of neighborhood poverty <10% significantly decreased average exposure to immediate and surrounding neighborhood disadvantage by 97% and 59% of a standard deviation, respectively, relative to the control group. Escaping concentrated disadvantage in either the immediate neighborhood or the surrounding neighborhood, but not both, was insufficient to make a difference for mental health. Instead, the results suggest that improving both the immediate and surrounding neighborhoods significantly benefits mental health. Compared to remaining in concentrated disadvantage in the immediate and surrounding neighborhood, escaping concentrated disadvantage in both the immediate and surrounding neighborhood on average over the study duration as a result of the intervention predicts an increase of 25% of a standard deviation in the composite mental health scores. PMID:27337349

  13. Apparent diffusion coefficient histogram shape analysis for monitoring early response in patients with advanced cervical cancers undergoing concurrent chemo-radiotherapy.

    PubMed

    Meng, Jie; Zhu, Lijing; Zhu, Li; Wang, Huanhuan; Liu, Song; Yan, Jing; Liu, Baorui; Guan, Yue; Ge, Yun; He, Jian; Zhou, Zhengyang; Yang, Xiaofeng

    2016-10-22

    To explore the role of apparent diffusion coefficient (ADC) histogram shape related parameters in the early assessment of treatment response during the concurrent chemo-radiotherapy (CCRT) course of advanced cervical cancers. This prospective study was approved by the local ethics committee and informed consent was obtained from all patients. Thirty-two patients with advanced cervical squamous cell carcinomas underwent diffusion weighted magnetic resonance imaging (b values, 0 and 800 s/mm2) before CCRT, at the end of the 2nd and 4th week during CCRT and immediately after CCRT completion. Whole lesion ADC histogram analysis generated several histogram shape related parameters including skewness, kurtosis, s-sDav, width, standard deviation, as well as first-order entropy and second-order entropies. The averaged ADC histograms of the 32 patients were generated to visually observe dynamic changes of the histogram shape following CCRT. All parameters except width and standard deviation showed significant changes during CCRT (all P < 0.05), and their variation trends fell into four different patterns. Skewness and kurtosis both showed a high early decline rate (43.10 %, 48.29 %) at the end of the 2nd week of CCRT. All entropies kept decreasing significantly from 2 weeks after CCRT initiation. The shape of the averaged ADC histogram also changed markedly following CCRT. ADC histogram shape analysis holds potential for monitoring early tumor response in patients with advanced cervical cancers undergoing CCRT.
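
The histogram shape parameters named above (skewness, kurtosis, first-order entropy) can be computed directly from a whole-lesion ADC map. A minimal numpy sketch on synthetic data (the two distributions are invented stand-ins for pre- and post-treatment lesions; s-sDav and the second-order entropies are omitted):

```python
import numpy as np

def histogram_shape(adc: np.ndarray, bins: int = 64):
    """Histogram-shape parameters used to track tumour response:
    skewness, excess kurtosis, and first-order entropy (bits)."""
    x = adc.ravel()
    mu, sd = x.mean(), x.std()
    skewness = ((x - mu) ** 3).mean() / sd**3
    kurtosis = ((x - mu) ** 4).mean() / sd**4 - 3.0
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    entropy = -(p * np.log2(p)).sum()
    return skewness, kurtosis, entropy

rng = np.random.default_rng(2)
pre = rng.gamma(2.0, 0.4, 5000) + 0.6    # skewed pre-treatment ADC (x1e-3 mm2/s)
post = rng.normal(1.9, 0.25, 5000)       # more symmetric distribution after CCRT
for name, adc in [("pre", pre), ("post", post)]:
    s, k, h = histogram_shape(adc)
    print(f"{name}: skewness {s:.2f}, kurtosis {k:.2f}, entropy {h:.2f}")
```

A drop in skewness and kurtosis between the two maps mirrors the early decline the study reports at the end of the 2nd week.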

  14. Measuring real-time streamflow using emerging technologies: Radar, hydroacoustics, and the probability concept

    USGS Publications Warehouse

    Fulton, J.; Ostrowski, J.

    2008-01-01

    Forecasting streamflow during extreme hydrologic events such as floods can be problematic. This is particularly true when flow is unsteady, and river forecasts rely on models that require uniform-flow rating curves to route water from one forecast point to another. As a result, alternative methods for measuring streamflow are needed to properly route flood waves and account for inertial and pressure forces in natural channels dominated by nonuniform-flow conditions such as mild water surface slopes, backwater, tributary inflows, and reservoir operations. The objective of the demonstration was to use emerging technologies to measure instantaneous streamflow in open channels at two existing US Geological Survey streamflow-gaging stations in Pennsylvania. Surface-water and instream-point velocities were measured using hand-held radar and hydroacoustics. Streamflow was computed using the probability concept, which requires velocity data from a single vertical containing the maximum instream velocity. The percent difference in streamflow at the Susquehanna River at Bloomsburg, PA ranged from 0% to 8% with an average difference of 4% and standard deviation of 8.81 m3/s. The percent difference in streamflow at Chartiers Creek at Carnegie, PA ranged from 0% to 11% with an average difference of 5% and standard deviation of 0.28 m3/s. New generation equipment is being tested and developed to advance the use of radar-derived surface-water velocity and instantaneous streamflow to facilitate the collection and transmission of real-time streamflow that can be used to parameterize hydraulic routing models.

  15. Teleconference versus face-to-face scientific peer review of grant application: effects on review outcomes.

    PubMed

    Gallo, Stephen A; Carpenter, Afton S; Glisson, Scott R

    2013-01-01

    Teleconferencing as a setting for scientific peer review is an attractive option for funding agencies, given the substantial environmental and cost savings. Despite this, there is a paucity of published data validating teleconference-based peer review compared to the face-to-face process. Our aim was to conduct a retrospective analysis of scientific peer review data to investigate whether review setting has an effect on review process and outcome measures. We analyzed reviewer scoring data from a research program that had recently modified the review setting from face-to-face to a teleconference format with minimal changes to the overall review procedures. This analysis included approximately 1600 applications over a 4-year period: two years of face-to-face panel meetings compared to two years of teleconference meetings. The average overall scientific merit scores, score distribution, standard deviations and reviewer inter-rater reliability statistics were measured, as well as reviewer demographics and length of time discussing applications. The data indicate that few differences are evident between face-to-face and teleconference settings with regard to average overall scientific merit score, scoring distribution, standard deviation, reviewer demographics or inter-rater reliability. However, some difference was found in the discussion time. These findings suggest that most review outcome measures are unaffected by review setting, which would support the trend of using teleconference reviews rather than face-to-face meetings. However, further studies are needed to assess any correlations among discussion time, application funding and the productivity of funded research projects.

  16. Color Retinal Image Enhancement Based on Luminosity and Contrast Adjustment.

    PubMed

    Zhou, Mei; Jin, Kai; Wang, Shaoze; Ye, Juan; Qian, Dahong

    2018-03-01

    Many common eye diseases and cardiovascular diseases can be diagnosed through retinal imaging. However, due to uneven illumination, image blurring, and low contrast, retinal images with poor quality are not useful for diagnosis, especially in automated image analyzing systems. Here, we propose a new image enhancement method to improve color retinal image luminosity and contrast. A luminance gain matrix, which is obtained by gamma correction of the value channel in the HSV (hue, saturation, and value) color space, is used to enhance the R, G, and B (red, green and blue) channels, respectively. Contrast is then enhanced in the luminosity channel of L * a * b * color space by CLAHE (contrast-limited adaptive histogram equalization). Image enhancement by the proposed method is compared to other methods by evaluating quality scores of the enhanced images. The performance of the method is mainly validated on a dataset of 961 poor-quality retinal images. Quality assessment (range 0-1) of image enhancement of this poor dataset indicated that our method improved color retinal image quality from an average of 0.0404 (standard deviation 0.0291) up to an average of 0.4565 (standard deviation 0.1000). The proposed method is shown to achieve superior image enhancement compared to contrast enhancement in other color spaces or by other related methods, while simultaneously preserving image naturalness. This method of color retinal image enhancement may be employed to assist ophthalmologists in more efficient screening of retinal diseases and in development of improved automated image analysis for clinical diagnosis.
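
The luminosity step described above amounts to building a gain matrix from a gamma-corrected value channel and applying the same gain to all three channels, which brightens the image without shifting hue or saturation. A simplified numpy sketch (the CLAHE contrast step in L*a*b* space is omitted, and the gamma value is an assumption):

```python
import numpy as np

def enhance_luminosity(rgb: np.ndarray, gamma: float = 0.6) -> np.ndarray:
    """Gamma-correct the HSV value channel (V = max of R, G, B) and apply
    the resulting gain matrix to all three channels."""
    v = rgb.max(axis=2)                        # value channel in [0, 1]
    v_gamma = np.power(v, gamma)               # brightened luminance (gamma < 1)
    gain = np.divide(v_gamma, v, out=np.ones_like(v), where=v > 0)
    return np.clip(rgb * gain[..., None], 0.0, 1.0)

# Dim synthetic patch standing in for an underexposed retinal image.
rng = np.random.default_rng(3)
img = rng.uniform(0.05, 0.3, (32, 32, 3))
out = enhance_luminosity(img)
print(round(img.max(axis=2).mean(), 3), round(out.max(axis=2).mean(), 3))
```

Because each pixel's channels are scaled by a common factor, their ratios (hence hue and saturation) are preserved while V is remapped to V^gamma.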

  17. Effectiveness of various innovative learning methods in health science classrooms: a meta-analysis.

    PubMed

    Kalaian, Sema A; Kasim, Rafa M

    2017-12-01

    This study reports the results of a meta-analysis of the available literature on the effectiveness of various forms of innovative small-group learning methods on student achievement in undergraduate college health science classrooms. The results of the analysis revealed that most of the primary studies supported the effectiveness of the small-group learning methods in improving students' academic achievement, with an overall weighted average effect size of 0.59 in standard deviation units favoring small-group learning methods. The subgroup analysis showed that the various forms of innovative and reform-based small-group learning interventions appeared to be significantly more effective for students in higher levels of college classes (sophomore, junior, and senior levels), students in other countries (non-U.S.) worldwide, students in groups of four or less, and students who chose their own group. The random-effects meta-regression results revealed that the effect sizes were influenced significantly by the instructional duration of the primary studies: studies with longer hours of instruction yielded higher effect sizes; on average, every 1-h increase in instruction predicted an increase in effect size of 0.009 standard deviation units, which is considered a small effect. These results may help health science and nursing educators by providing guidance in identifying the conditions under which various forms of innovative small-group learning pedagogies are collectively more effective than traditional lecture-based teaching instruction.
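
The overall weighted average effect size reported above is the kind of quantity an inverse-variance pooling produces. A minimal fixed-effect sketch with invented per-study effects (the meta-analysis itself used random-effects models, which add a between-study variance term):

```python
import numpy as np

# Hypothetical per-study standardized mean differences and their variances.
d = np.array([0.70, 0.45, 0.80, 0.50])
var = np.array([0.040, 0.020, 0.050, 0.030])

w = 1.0 / var                          # inverse-variance weights
d_bar = (w * d).sum() / w.sum()        # fixed-effect pooled effect size
se = (1.0 / w.sum()) ** 0.5            # standard error of the pooled effect
print(f"pooled effect {d_bar:.2f} +/- {1.96 * se:.2f} (95% CI half-width)")
```

More precise studies (smaller variances) pull the pooled estimate toward their own effects.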

  18. How reliable is apparent age at death on cadavers?

    PubMed

    Amadasi, Alberto; Merusi, Nicolò; Cattaneo, Cristina

    2015-07-01

    The assessment of age at death for identification purposes is a frequent and tough challenge for forensic pathologists and anthropologists. Too frequently, visual assessment of age is performed on well-preserved corpses, a method considered subjective and full of pitfalls, but whose level of inadequacy no one has yet tested or proven. This study consisted of the visual estimation of the age of 100 cadavers performed by a total of 37 observers among those usually attending the dissection room. Cadavers were of Caucasian ethnicity, well preserved, belonging to individuals who died of natural death. All the evaluations were performed prior to autopsy. Observers assessed the age with ranges of 5 and 10 years, indicating also the body part they mainly observed for each case. Globally, the 5-year range had an accuracy of 35%, increasing to 69% with the 10-year range. The highest accuracy was in the 31-60 age category (74.7% with the 10-year range), and the skin seemed to be the most reliable age parameter (71.5% accuracy when observed), while the face was considered most frequently, in 92.4% of cases. A simple formula with the general "mean of averages" in the range given by the observers and related standard deviations was then developed; the average values with standard deviations of 4.62 lead to age estimates with ranges of some 20 years that seem fairly reliable and suitable, sometimes in alignment with classic anthropological methods, in the age estimation of well-preserved corpses.
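
The "mean of averages" formula described above can be made concrete: average the midpoints of the observers' ranges, then attach the pooled standard deviation (4.62 years in the study, so a ±2 SD spread of roughly 18-19 years, i.e. "some 20 years"). A small sketch with invented observer ranges for one cadaver:

```python
import numpy as np

# Hypothetical 10-year age ranges given by several observers for one cadaver.
ranges = [(45, 55), (50, 60), (40, 50), (50, 60), (45, 55)]
midpoints = np.array([(lo + hi) / 2 for lo, hi in ranges])

estimate = midpoints.mean()      # "mean of averages" across observers
sd_obs = midpoints.std(ddof=1)   # spread of this cadaver's observer guesses

# Interval using the study's pooled SD of 4.62 years, spanning ~18.5 years.
lo, hi = estimate - 2 * 4.62, estimate + 2 * 4.62
print(f"estimate {estimate:.1f} y, interval {lo:.1f}-{hi:.1f} y")
```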

  19. X-Pol Potential: An Electronic Structure-Based Force Field for Molecular Dynamics Simulation of a Solvated Protein in Water.

    PubMed

    Xie, Wangshen; Orozco, Modesto; Truhlar, Donald G; Gao, Jiali

    2009-02-17

    A recently proposed electronic structure-based force field called the explicit polarization (X-Pol) potential is used to study many-body electronic polarization effects in a protein, in particular by carrying out a molecular dynamics (MD) simulation of bovine pancreatic trypsin inhibitor (BPTI) in water with periodic boundary conditions. The primary unit cell is cubic with dimensions ~54 × 54 × 54 Å(3), and the total number of atoms in this cell is 14281. An approximate electronic wave function, consisting of 29026 basis functions for the entire system, is variationally optimized to give the minimum Born-Oppenheimer energy at every MD step; this allows the efficient evaluation of the required analytic forces for the dynamics. Intramolecular and intermolecular polarization and intramolecular charge transfer effects are examined and are found to be significant; for example, 17 out of 58 backbone carbonyls differ from neutrality on average by more than 0.1 electron, and the average charge on the six alanines varies from -0.05 to +0.09. The instantaneous excess charges vary even more widely; the backbone carbonyls have standard deviations in their fluctuating net charges from 0.03 to 0.05, and more than half of the residues have excess charges whose standard deviation exceeds 0.05. We conclude that the new-generation X-Pol force field permits the inclusion of time-dependent quantum mechanical polarization and charge transfer effects in much larger systems than was previously possible.

  20. Design, implementation and accuracy of a prototype for medical augmented reality.

    PubMed

    Pandya, Abhilash; Siadat, Mohammad-Reza; Auner, Greg

    2005-01-01

    This paper is focused on prototype development and accuracy evaluation of a medical Augmented Reality (AR) system. The accuracy of such a system is of critical importance for medical use, and is hence considered in detail. We analyze the individual error contributions and the system accuracy of the prototype. A passive articulated arm is used to track a calibrated end-effector-mounted video camera. The live video view is superimposed in real time with the synchronized graphical view of CT-derived segmented object(s) of interest within a phantom skull. The AR accuracy mostly depends on the accuracy of the tracking technology, the registration procedure, the camera calibration, and the image scanning device (e.g., a CT or MRI scanner). The accuracy of the Microscribe arm was measured to be 0.87 mm. After mounting the camera on the tracking device, the AR accuracy was measured to be 2.74 mm on average (standard deviation = 0.81 mm). After using data from a 2-mm-thick CT scan, the AR error remained essentially the same at an average of 2.75 mm (standard deviation = 1.19 mm). For neurosurgery, the acceptable error is approximately 2-3 mm, and our prototype approaches these accuracy requirements. The accuracy could be increased with a higher-fidelity tracking system and improved calibration and object registration. The design and methods of this prototype device can be extrapolated to current medical robotics (due to the kinematic similarity) and neuronavigation systems.
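
The record above analyzes individual error contributions to the overall AR accuracy. One common way to combine independent error sources is in quadrature (root sum of squares); the sketch below uses the study's 0.87 mm arm error together with assumed magnitudes for the other sources, purely as an illustration of the budgeting arithmetic, not the paper's actual decomposition:

```python
import math

# Hypothetical independent error sources of an AR overlay (mm). Only the
# 0.87 mm tracking-arm figure comes from the study; the rest are assumptions.
errors = {"tracking arm": 0.87, "camera calibration": 1.5, "registration": 2.1}

# Independent zero-mean errors combine in quadrature (root sum of squares),
# which is smaller than the pessimistic linear sum of the magnitudes.
total = math.sqrt(sum(e ** 2 for e in errors.values()))
print(f"combined error {total:.2f} mm")
```

This is why improving the single largest source (here registration) moves the total far more than polishing the smaller ones.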

  1. ESTIMATION OF EFFECTIVE SHEAR STRESS WORKING ON FLAT SHEET MEMBRANE USING FLUIDIZED MEDIA IN MBRs

    NASA Astrophysics Data System (ADS)

    Zaw, Hlwan Moe; Li, Tairi; Nagaoka, Hiroshi; Mishima, Iori

    This study was aimed at estimating the effective shear stress working on a flat sheet membrane with the addition of fluidized media in MBRs. In laboratory-scale aeration tanks both with and without fluidized media, shear stress variations on the membrane surface and water-phase velocity variations were measured and MBR operation was conducted. To evaluate the effective shear stress working on the membrane surface to mitigate membrane fouling, simulation of the trans-membrane pressure increase was conducted. It was shown that the time-averaged absolute value of shear stress was smaller in the reactor with fluidized media than without. However, due to strong turbulence in the reactor with fluidized media caused by interaction between the water phase and the media, and also due to the direct interaction between the membrane surface and the fluidized media, the standard deviation of shear stress on the membrane surface was larger in the reactor with fluidized media than without. Histograms of the shear stress variation data fitted well to normal distribution curves, and the mean plus three times the standard deviation was defined as the maximum shear stress value. By applying the defined maximum shear stress to a membrane fouling model, the trans-membrane pressure curve in the MBR experiment was simulated well, indicating that the maximum shear stress, not the time-averaged shear stress, can be regarded as the effective shear stress to prevent membrane fouling in submerged flat-sheet MBRs.
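
The "mean plus three times the standard deviation" definition of the maximum (effective) shear stress is easy to reproduce on a normally distributed signal; under normality about 99.87% of samples fall below that threshold. A sketch on simulated shear-stress data (the distribution parameters are invented, not measured values):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated shear-stress time series on the membrane surface (Pa): roughly
# normal fluctuations around a small time-averaged value, as observed in
# the reactor with fluidized media.
tau = rng.normal(0.4, 0.9, 20000)

tau_mean = tau.mean()
tau_sd = tau.std(ddof=1)
tau_max = tau_mean + 3.0 * tau_sd    # defined maximum (effective) shear stress

# Under a normal distribution, ~99.87% of samples fall below mean + 3 SD.
frac_below = (tau < tau_max).mean()
print(f"tau_max {tau_max:.2f} Pa, fraction below {frac_below:.4f}")
```

This is why a reactor with a small mean but large fluctuations can still have the larger effective shear stress.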

  2. Geochemical fingerprinting and source discrimination in soils at the continental scale

    NASA Astrophysics Data System (ADS)

    Negrel, Philippe; Sadeghi, Martiya; Ladenberger, Anna; Birke, Manfred; Reimann, Clemens

    2014-05-01

    Agricultural soil (Ap-horizon, 0-20 cm) samples were collected from a large part of Europe (33 countries, 5.6 million km²) at an average density of 1 sample site per 2500 km². The resulting 2108 soil samples were air dried, sieved to <2 mm, milled and analysed for their major and trace element concentrations by wavelength dispersive X-ray fluorescence spectrometry (WD-XRF). The main goal of this study is to provide a global view of element mobility and source rocks at the continental scale, either by reference to crustal evolution or to normalized patterns of element mobility during weathering processes. The survey area includes several sedimentary basins with different geological history, developed in different climate zones and landscapes and with different land use. In order to normalize the chemical composition of soils, mean values and standard deviations of the selected elements have been checked against values for the upper continental crust (UCC). Some elements turned out to be enriched relative to the UCC (Al, P, Zr, Pb) whereas others, like Mg, Na, Sr and Pb, were depleted with regard to the variation represented by the standard deviation. These UCC-normalized patterns were then examined for the selected elements. The mean values of Rb, K, Y, Ti, Al, Si, Zr, Ce and Fe are very close to the UCC model even if the standard deviation suggests slight enrichment or depletion, and Zr shows the best fit with the UCC model using both mean value and standard deviation. Lead and Cr are enriched in European soils when compared to UCC, but their standard deviation values show very large variations, particularly towards very low values, which can be interpreted as a lithological effect. Element variability was further explored using indicator elements. 
Soil data have been converted into Al-normalized enrichment factors, and Na was used as the normalizing element for studying provenance, taking into account the main lithologies of the UCC. This latter normalization highlighted variations related to the soluble or insoluble behavior of some elements (K and Rb versus Ti, Al, Si, V, Y, Zr, Ba, and La, respectively), their reactivity (Fe, Mn, Zn), and their association with carbonates (Ca and Sr) and with phosphates (P and Ce). The maps of normalized composition revealed some problems with the use of classical element ratios due to genetic differences in the composition of parent material, reflected, for example, in large differences in titanium content between bedrock and soil throughout Europe.
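    The Al-normalized enrichment factor used in this kind of normalization has a standard form; a minimal sketch (reference UCC concentrations must be supplied from a published compilation):

```python
def enrichment_factor(c_el, c_al, ucc_el, ucc_al):
    """Al-normalized enrichment factor relative to upper continental crust:
    EF = (C_el / C_Al)_sample / (C_el / C_Al)_UCC.
    EF ~ 1 suggests crustal abundance; EF >> 1 enrichment; EF << 1 depletion."""
    return (c_el / c_al) / (ucc_el / ucc_al)
```

    The same function applies to Na normalization for provenance work by passing Na concentrations in place of Al.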

  3. SU-E-T-558: Assessing the Effect of Inter-Fractional Motion in Esophageal Sparing Plans.

    PubMed

    Williamson, R; Bluett, J; Niedzielski, J; Liao, Z; Gomez, D; Court, L

    2012-06-01

    To compare esophageal dose distributions in esophageal sparing IMRT plans with predicted dose distributions that include the effect of inter-fraction motion. Seven lung cancer patients were used, each with a standard and an esophageal sparing plan (74 Gy in 2 Gy fractions). The average maximum dose to the esophagus was 8351 cGy and 7758 cGy for the standard and sparing plans, respectively. The average length of esophagus for which the total circumference was treated above 60 Gy (LETT60) was 9.4 cm in the standard plans and 5.8 cm in the sparing plans. To simulate inter-fractional motion, a three-dimensional rigid shift was applied to the calculated dose field. A simulated course of treatment consisted of a single systematic shift applied throughout the treatment as well as a random shift for each of the 37 fractions. Both systematic and random shifts were generated from Gaussian distributions with 3 mm and 5 mm standard deviations. Each treatment course was simulated 1000 times to obtain an expected distribution of the delivered dose. The simulated dose received by the esophagus was less than the dose seen in the treatment plan. The average reduction in maximum esophageal dose for the standard plans was 234 cGy and 386 cGy for the 3 mm and 5 mm Gaussian distributions, respectively. The average reduction in LETT60 was 0.6 cm and 1.7 cm for the 3 mm and 5 mm distributions, respectively. For the esophageal sparing plans, the average reduction in maximum esophageal dose was 94 cGy and 202 cGy for the 3 mm and 5 mm Gaussian distributions, respectively. The average change in LETT60 for the esophageal sparing plans was smaller, at 0.1 cm (increase) and 0.6 cm (reduction) for the 3 mm and 5 mm distributions, respectively. Inter-fraction motion consistently reduced the maximum doses to the esophagus for both standard and esophageal sparing plans. © 2012 American Association of Physicists in Medicine.
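    The simulation procedure (one systematic shift per course plus an independent random shift per fraction, repeated over many courses) can be sketched in one dimension. The dose profile, grid spacing, and parameter values below are hypothetical illustrations, not the study's data:

```python
import math
import random

def simulate_max_dose(dose, sigma_sys, sigma_rand,
                      n_fractions=37, n_courses=1000, seed=1):
    """Monte Carlo sketch of inter-fraction motion (1-D analogue of the
    3-D rigid shifts in the abstract). `dose` maps position (mm) to
    per-fraction dose (cGy); each simulated course draws one systematic
    shift plus an independent random shift per fraction, both from
    zero-mean Gaussians with the given SDs (mm). Returns the expected
    maximum accumulated dose over the treatment course."""
    rng = random.Random(seed)
    positions = [0.5 * p for p in range(-40, 41)]  # 0.5 mm grid (hypothetical)
    maxima = []
    for _ in range(n_courses):
        sys_shift = rng.gauss(0.0, sigma_sys)
        total = [0.0] * len(positions)
        for _ in range(n_fractions):
            shift = sys_shift + rng.gauss(0.0, sigma_rand)
            for i, p in enumerate(positions):
                total[i] += dose(p - shift)
        maxima.append(max(total))
    return sum(maxima) / len(maxima)

# Hypothetical dose profile: 200 cGy/fraction at the peak, Gaussian falloff.
dose = lambda p: 200.0 * math.exp(-(p / 10.0) ** 2)
blurred_max = simulate_max_dose(dose, sigma_sys=3.0, sigma_rand=3.0, n_courses=200)
static_max = 37 * dose(0.0)  # 7400 cGy with no motion
```

    As in the study, motion blurring always lowers the expected maximum relative to the static plan.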

  4. Do Practical Standard Coupled Cluster Calculations Agree Better than Kohn–Sham Calculations with Currently Available Functionals When Compared to the Best Available Experimental Data for Dissociation Energies of Bonds to 3d Transition Metals?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Xuefei; Zhang, Wenjing; Tang, Mingsheng

    2015-05-12

    Coupled-cluster (CC) methods have been extensively used as the high-level approach in quantum electronic structure theory to predict various properties of molecules when experimental results are unavailable. It is often assumed that CC methods, if they include at least up to connected-triple-excitation quasiperturbative corrections to a full treatment of single and double excitations (in particular, CCSD(T)), and a very large basis set, are more accurate than Kohn–Sham (KS) density functional theory (DFT). In the present work, we tested and compared the performance of standard CC and KS methods on bond energy calculations of 20 3d transition metal-containing diatomic molecules against the most reliable experimental data available, as collected in a database called 3dMLBE20. It is found that, although CCSD(T) and higher-level CC methods have mean unsigned deviations from experiment that are smaller than those of most exchange-correlation functionals for metal–ligand bond energies of transition metals, the improvement is less than one standard deviation of the mean unsigned deviation. Furthermore, on average, almost half of the 42 exchange-correlation functionals that we tested are closer to experiment than CCSD(T) with the same extended basis set for the same molecule. 
The results show that, when both relativistic and core–valence correlation effects are considered, even the very high-level (expensive) CC method with single, double, triple, and perturbative quadruple cluster operators, namely, CCSDT(2)Q, averaged over 20 bond energies, gives a mean unsigned deviation (MUD(20) = 4.7 kcal/mol when one correlates only valence, 3p, and 3s electrons of transition metals and only valence electrons of ligands, or 4.6 kcal/mol when one correlates all core electrons except for 1s shells of transition metals, S, and Cl); and that is similar to some good xc functionals (e.g., B97-1 (MUD(20) = 4.5 kcal/mol) and PW6B95 (MUD(20) = 4.9 kcal/mol)) when the same basis set is used. We found that, for both coupled cluster calculations and KS calculations, the T1 diagnostics correlate the errors better than either the M diagnostics or the B1 DFT-based diagnostics. The potential use of practical standard CC methods as a benchmark theory is further confounded by the finding that CC and DFT methods usually have different signs of the error. We conclude that the available experimental data do not provide a justification for using conventional single-reference CC theory calculations to validate or test xc functionals for systems involving 3d transition metals.
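    The MUD(20) statistic quoted throughout is simply the average absolute error over the 20 bond energies; a minimal sketch:

```python
def mean_unsigned_deviation(calculated, experimental):
    """Mean unsigned deviation: average absolute error over a data set
    (here, the 20 bond energies of the 3dMLBE20 database)."""
    return sum(abs(c - e) for c, e in zip(calculated, experimental)) / len(calculated)
```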

  5. 40 CFR 62.14740 - What must I include in the deviation report?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... your unit deviated from the emission limitations or operating limit requirements. (b) The averaged and recorded data for those dates. (c) Duration and causes of each deviation from the emission limitations or... that deviated from the emission limitations or operating limits specified in this subpart, include the...

  6. Anthropometric Measurement Standardization in the US-Affiliated Pacific: Report from the Children’s Healthy Living Program

    PubMed Central

    LI, FENFANG; WILKENS, LYNNE R.; NOVOTNY, RACHEL; FIALKOWSKI, MARIE K.; PAULINO, YVETTE C.; NELSON, RANDALL; BERSAMIN, ANDREA; MARTIN, URSULA; DEENIK, JONATHAN; BOUSHEY, CAROL J.

    2016-01-01

    Objectives Anthropometric standardization is essential to obtain reliable and comparable data from different geographical regions. The purpose of this study is to describe anthropometric standardization procedures and findings from the Children’s Healthy Living (CHL) Program, a study on childhood obesity in 11 jurisdictions in the US-Affiliated Pacific Region, including Alaska and Hawai‘i. Methods Zerfas criteria were used to compare the measurement components (height, waist, and weight) between each trainee and a single expert anthropometrist. In addition, intra- and inter-rater technical error of measurement (TEM), coefficient of reliability, and average bias relative to the expert were computed. Results From September 2012 to December 2014, 79 trainees participated in at least 1 of 29 standardization sessions. A total of 49 trainees passed either standard or alternate Zerfas criteria and were qualified to assess all three measurements in the field. Standard Zerfas criteria were difficult to achieve: only 2 of 79 trainees passed at their first training session. Intra-rater TEM estimates for the 49 trainees compared well with the expert anthropometrist. Average biases were within acceptable limits of deviation from the expert. Coefficient of reliability was above 99% for all three anthropometric components. Conclusions Standardization based on comparison with a single expert ensured the comparability of measurements from the 49 trainees who passed the criteria. The anthropometric standardization process and protocols followed by CHL resulted in 49 standardized field anthropometrists and have helped build capacity in the health workforce in the Pacific Region. PMID:26457888
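    The intra-rater TEM and coefficient of reliability reported above have standard definitions for duplicate measurements; a minimal sketch (function names are ours):

```python
import math
import statistics

def tem(pairs):
    """Technical error of measurement for duplicate measurements:
    TEM = sqrt(sum(d_i^2) / (2n)), where d_i is the within-pair difference."""
    n = len(pairs)
    return math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * n))

def coefficient_of_reliability(pairs):
    """R = 1 - TEM^2 / SD^2, with SD the overall standard deviation of all
    measurements; R close to 1 means measurement error is negligible
    relative to between-subject variation."""
    sd = statistics.stdev([x for pair in pairs for x in pair])
    return 1.0 - (tem(pairs) / sd) ** 2
```

    A coefficient of reliability above 99%, as in the CHL results, indicates that measurement error contributes under 1% of the total observed variance.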

  7. A Note on Standard Deviation and Standard Error

    ERIC Educational Resources Information Center

    Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth

    2010-01-01

    Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.

  8. Frequency comparison involving the Romanian primary length standard RO.1 within the framework of the EUROMET Project #498

    NASA Astrophysics Data System (ADS)

    Popescu, Gheorghe

    2001-06-01

    An international frequency comparison was carried out at the Bundesamt fuer Eich- und Vermessungswesen (BEV), Vienna, within the framework of the EUROMET Project #498 from August 29 to September 5, 1999. The frequency differences obtained when the RO.1 laser from the National Institute for Laser, Plasma and Radiation Physics (NILPRP), Romania, was compared with five lasers from Austria (BEV1), the Czech Republic (PLD1), France (BIPM3), Poland (GUM1) and Hungary (OMH1) are reported. Frequency differences were computed by using the matrix determinations for the groups d, e, f, g. Considering the frequency differences measured for a group of three lasers compared to each other, we call the closing frequency the difference between the measured and the expected frequency difference (the latter resulting from the previous two measurements). For the RO.1 laser, when the BIPM3 laser was the reference laser, the closing frequencies range from +8.1 kHz to -3.8 kHz. The relative Allan standard deviation was used to express the frequency stability; it was 3.8 parts in 10^12 for a 100 s sampling time and a 14000 s measurement duration. The averaged offset frequency relative to the BIPM4 stationary laser was 5.6 kHz and the standard deviation was 9.9 kHz.
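    The Allan standard deviation used to express frequency stability can be computed from consecutive fractional-frequency averages; a minimal non-overlapping sketch:

```python
import math

def allan_deviation(y):
    """Non-overlapping Allan deviation from consecutive fractional-frequency
    averages y_i (one per sampling interval):
    sigma_y = sqrt( mean((y[i+1] - y[i])^2) / 2 )."""
    diffs = [(b - a) ** 2 for a, b in zip(y, y[1:])]
    return math.sqrt(sum(diffs) / (2.0 * len(diffs)))
```

    Unlike the classical standard deviation, this statistic converges for the drifting, non-stationary noise typical of oscillators and lasers.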

  9. An Evaluation of a Smartphone–Assisted Behavioral Weight Control Intervention for Adolescents: Pilot Study

    PubMed Central

    Duncombe, Kristina M; Lott, Mark A; Hunsaker, Sanita L; Duraccio, Kara M; Woolford, Susan J

    2016-01-01

    Background The efficacy of adolescent weight control treatments is modest, and effective treatments are costly and are not widely available. Smartphones may be an effective method for delivering critical components of behavioral weight control treatment including behavioral self-monitoring. Objective To examine the efficacy and acceptability of a smartphone assisted adolescent behavioral weight control intervention. Methods A total of 16 overweight or obese adolescents (mean age=14.29 years, standard deviation=1.12) received 12 weeks of combined treatment that consisted of weekly in-person group behavioral weight control treatment sessions plus smartphone self-monitoring and daily text messaging. Subsequently they received 12 weeks of electronic-only intervention, totaling 24 weeks of intervention. Results On average, participants attained modest but significant reductions in body mass index standard score (zBMI: 0.08 standard deviation units, t (13)=2.22, P=.04, d=0.63) over the in-person plus electronic-only intervention period but did not maintain treatment gains over the electronic-only intervention period. Participants self-monitored on approximately half of combined intervention days but less than 20% of electronic-only intervention days. Conclusions Smartphones likely hold promise as a component of adolescent weight control interventions but they may be less effective in helping adolescents maintain treatment gains after intensive interventions. PMID:27554704

  10. An analysis of the readability of patient information and consent forms used in research studies in anaesthesia in Australia and New Zealand.

    PubMed

    Taylor, H E; Bramley, D E P

    2012-11-01

    The provision of written information is a component of the informed consent process for research participants. We conducted a readability analysis to test the hypothesis that the language used in patient information and consent forms in anaesthesia research in Australia and New Zealand does not meet the readability standards or expectations of the Good Clinical Practice Guidelines, the National Health and Medical Research Council in Australia and the Health Research Council of New Zealand. We calculated readability scores for 40 patient information and consent forms using the Simple Measure of Gobbledygook and Flesch-Kincaid formulas. The mean grade level of patient information and consent forms when using the Simple Measure of Gobbledygook and Flesch-Kincaid readability formulas was 12.9 (standard deviation of 0.8, 95% confidence interval 12.6 to 13.1) and 11.9 (standard deviation 1.1, 95% confidence interval 11.6 to 12.3), respectively. This exceeds the average literacy and comprehension of the general population in Australia and New Zealand. Complex language decreases readability and negatively impacts on the informed consent process. Care should be exercised when providing written information to research participants to ensure language and readability is appropriate for the audience.
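    Both readability scores are closed-form formulas over word, sentence, and syllable counts (the counting itself, which is the hard part, is omitted here). The standard published coefficients are used:

```python
import math

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level from document-wide counts."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_grade(polysyllables, sentences):
    """SMOG grade from the count of words with three or more syllables."""
    return 1.0430 * math.sqrt(polysyllables * 30.0 / sentences) + 3.1291
```

    A grade level near 12-13, as found in the study, means the forms read at a final-year secondary-school level, well above the average adult reading level.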

  11. Determination of cyflumetofen residue in water, soil, and fruits by modified quick, easy, cheap, effective, rugged, and safe method coupled to gas chromatography/tandem mass spectrometry.

    PubMed

    Li, Minmin; Liu, Xingang; Dong, Fengshou; Xu, Jun; Qin, Dongmei; Zheng, Yongquan

    2012-10-01

    A new, highly sensitive, and selective method was developed for the determination of cyflumetofen residue in water, soil, and fruits by gas chromatography coupled to tandem quadrupole mass spectrometry. The target compound was extracted using acetonitrile and then cleaned up using dispersive solid-phase extraction with primary and secondary amine and graphitized carbon black, and optionally by a freezing-out cleanup step. The matrix-matched standards gave satisfactory recoveries and relative standard deviation values in different matrices at three fortified levels (0.05, 0.5, and 1.0 mg kg^-1). The overall average recoveries for this method in water, soil, and all fruit matrices at the three fortified levels ranged from 76.3 to 101.5%, with relative standard deviations in the range of 1.2-11.8% (n = 5). The calculated limits of detection and quantification were typically below 0.005 and 0.015 μg kg^-1, respectively, which is much lower than the maximum residue levels established by the Japanese Positive List. This study provides a theoretical basis for China to draw up maximum residue levels and an analytical method for the cyflumetofen acaricide in different fruits. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
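    Recovery and relative standard deviation at a given fortification level are simple statistics over the replicates; a sketch with hypothetical numbers:

```python
import statistics

def recovery_and_rsd(measured, spiked_level):
    """Average recovery (%) and relative standard deviation (%) for
    replicate determinations at one fortification level."""
    recoveries = [100.0 * m / spiked_level for m in measured]
    mean_rec = statistics.fmean(recoveries)
    rsd = 100.0 * statistics.stdev(recoveries) / mean_rec
    return mean_rec, rsd
```

    Method-validation guidelines typically require recoveries in roughly the 70-120% range with RSD below 20%, which the reported 76.3-101.5% and 1.2-11.8% satisfy.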

  12. Long-term-average spectrum characteristics of Kunqu Opera singers' speaking, singing and stage speech.

    PubMed

    Dong, Li; Kong, Jiangping; Sundberg, Johan

    2014-07-01

    Long-term-average spectrum (LTAS) characteristics were analyzed for ten Kunqu Opera singers, two in each of five roles. Each singer performed singing, stage speech, and conversational speech, and differences between the roles and between their performances of these three conditions are examined. After compensating for the Leq difference, LTAS characteristics still differ between the roles but are similar across the three conditions, especially for the Colorful face (CF) and Old man roles, and especially between reading and singing. The curves show no evidence of a singer's formant cluster peak, but the CF role demonstrates a speaker's formant peak near 3 kHz. The LTAS characteristics deviate markedly from non-singers' standard conversational speech as well as from those of Western opera singing.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakravarti, D.; Held, E.E.

    Radiocesium and stable potassium levels were determined in samples of muscle tissue of Birgus latro, the coconut crab, collected at Rongelap Atoll, Marshall Islands, during March and August 1958 and March 1959, and at Utirik Atoll in March 1959. Levels of cesium-137 ranged between 731 d/m/g dry weight at Kabelle Island, Rongelap Atoll, and 28 d/m/g dry weight at Utirik Island, Utirik Atoll. The average potassium value for all samples was 13.05 mg/g dry weight with a standard deviation of 3.66. No significant correlation between cesium-137 and potassium levels was found. There was no significant difference in the average levels of cesium-137 in crabs collected at different times at the same island. (auth)

  14. Distribution and Kinematics of O VI in the Galactic Halo

    NASA Astrophysics Data System (ADS)

    Savage, B. D.; Sembach, K. R.; Wakker, B. P.; Richter, P.; Meade, M.; Jenkins, E. B.; Shull, J. M.; Moos, H. W.; Sonneborn, G.

    2003-05-01

    Far-Ultraviolet Spectroscopic Explorer (FUSE) spectra of 100 extragalactic objects and two distant halo stars are analyzed to obtain measures of O VI λλ1031.93, 1037.62 absorption along paths through the Milky Way thick disk/halo. Strong O VI absorption over the velocity range from -100 to 100 km s^-1 reveals a widespread but highly irregular distribution of O VI, implying the existence of substantial amounts of hot gas with T ~ 3×10^5 K in the Milky Way thick disk/halo. The integrated column density, log[N(O VI) cm^-2], ranges from 13.85 to 14.78 with an average value of 14.38 and a standard deviation of 0.18. Large irregularities in the gas distribution are found to be similar over angular scales extending from <1° to 180°, implying a considerable amount of small- and large-scale structure in the absorbing gas. The overall distribution of O VI is not well described by a symmetrical plane-parallel layer of patchy O VI absorption. The simplest departure from such a model that provides a reasonable fit to the observations is a plane-parallel patchy absorbing layer with an average O VI midplane density of n0(O VI) = 1.7×10^-8 cm^-3, a scale height of ~2.3 kpc, and a ~0.25 dex excess of O VI in the northern Galactic polar region. The distribution of O VI over the sky is poorly correlated with other tracers of gas in the halo, including low- and intermediate-velocity H I, Hα emission from the warm ionized gas at ~10^4 K, and hot X-ray-emitting gas at ~10^6 K. The O VI has an average velocity dispersion, b ~ 60 km s^-1, and standard deviation of 15 km s^-1. Thermal broadening alone cannot explain the large observed profile widths. The average O VI absorption velocities toward high-latitude objects (|b| > 45°) range from -46 to 82 km s^-1, with a high-latitude sample average of 0 km s^-1 and a standard deviation of 21 km s^-1. High positive velocity O VI absorbing wings extending from ~100 to ~250 km s^-1 observed along 21 lines of sight may be tracing the flow of O VI into the halo. 
A combination of models involving the radiative cooling of hot fountain gas, the cooling of supernova bubbles in the halo, and the turbulent mixing of warm and hot halo gases is required to explain the presence of O VI and other highly ionized atoms found in the halo. The preferential venting of hot gas from local bubbles and superbubbles into the northern Galactic polar region may explain the enhancement of O VI in the north. If a fountain flow dominates, a mass flow rate of approximately 1.4 M_solar yr^-1 of cooling hot gas to each side of the Galactic plane with an average density of 10^-3 cm^-3 is required to explain the average value of log[N(O VI) sin|b|] observed in the southern Galactic hemisphere. Such a flow rate is comparable to that estimated for the Galactic intermediate-velocity clouds.

  15. Analytical quality goals derived from the total deviation from patients' homeostatic set points, with a margin for analytical errors.

    PubMed

    Bolann, B J; Asberg, A

    2004-01-01

    The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) The stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability, should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
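    Under the steady-state model described above, the 95% total deviation is z = 1.96 times the combined analytical and intra-individual standard deviation. A sketch; note that the paper's ~12% margin also accounts for an allowed undetected systematic error, which this random-error-only sketch omits:

```python
import math

def total_deviation(sigma_analytical, sigma_within, z=1.96):
    """Maximum absolute deviation from the homeostatic set point for 95% of
    results in steady state (no systematic error): z times the combined
    analytical and intra-individual SD."""
    return z * math.sqrt(sigma_analytical ** 2 + sigma_within ** 2)

# With the derived limit sigma_A = 0.15 * sigma_I, random analytical error
# alone inflates the purely biological deviation only marginally (~1%).
inflation = total_deviation(0.15, 1.0) / total_deviation(0.0, 1.0)
```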

  16. Inter- and intra-observer variation in soft-tissue sarcoma target definition.

    PubMed

    Roberge, D; Skamene, T; Turcotte, R E; Powell, T; Saran, N; Freeman, C

    2011-08-01

    To evaluate inter- and intra-observer variability in gross tumor volume definition for adult limb/trunk soft tissue sarcomas. Imaging studies of 15 patients previously treated with preoperative radiation were used in this study. Five physicians (radiation oncologists, orthopedic surgeons and a musculoskeletal radiologist) were asked to contour each of the 15 tumors on T1-weighted, gadolinium-enhanced magnetic resonance images. These contours were drawn twice by each physician. The volume and center of mass coordinates for each gross tumor volume were extracted and a Boolean analysis was performed to measure the degree of volume overlap. The median standard deviation in gross tumor volumes across observers was 6.1% of the average volume (range: 1.8%-24.9%). There was remarkably little variation in the 3D position of the gross tumor volume center of mass. For the 15 patients, the standard deviation of the 3D distance between centers of mass ranged from 0.06 mm to 1.7 mm (median 0.1 mm). Boolean analysis demonstrated that 53% to 90% of the gross tumor volume was common to all observers (median overlap: 79%). The standard deviation in gross tumor volumes on repeat contouring was 4.8% (range: 0.1%-14.4%), with a standard deviation of the change in the position of the center of mass of 0.4 mm (range: 0 mm-2.6 mm) and a median overlap of 93% (range: 73%-98%). Although significant inter-observer differences were seen in gross tumor volume definition of adult soft-tissue sarcoma, the center of mass of these volumes was remarkably consistent. Variations in volume definition did not correlate with tumor size. Radiation oncologists should not hesitate to review their contours with a colleague (surgeon, radiologist or fellow radiation oncologist) to ensure that they are not outliers in sarcoma gross tumor volume definition. Protocols should take into account variations in volume definition when considering tighter clinical target volumes. 
Copyright © 2011 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.

  17. SU-E-T-364: Estimating the Minimum Number of Patients Required to Estimate the Required Planning Target Volume Margins for Prostate Glands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakhtiari, M; Schmitt, J; Sarfaraz, M

    2015-06-15

    Purpose: To establish the minimum number of patients required to obtain statistically accurate Planning Target Volume (PTV) margins for prostate Intensity Modulated Radiation Therapy (IMRT). Methods: A total of 320 prostate patients, comprising 9311 daily setups, were analyzed. These patients had undergone IMRT treatment. Daily localization was done using skin marks, and the proper shifts were determined by CBCT to match the prostate gland. The Van Herk formalism is used to obtain the margins from the systematic and random setup variations. The total patient population was divided into different grouping sizes, varying from 1 group of 320 patients to 64 groups of 5 patients. Each grouping was used to determine the average PTV margin and its associated standard deviation. Results: Analyzing all 320 patients led to an average Superior-Inferior margin of 1.15 cm. The grouping with 10 patients per group (32 groups) resulted in average PTV margins between 0.6 and 1.7 cm, with a mean value of 1.09 cm and a standard deviation (STD) of 0.30 cm. As the number of patients per group increases, the mean of the average margins across groups converges to the true average PTV margin of 1.15 cm and the STD decreases. For groups of 20, 64, and 160 patients, Superior-Inferior margins of 1.12, 1.14, and 1.16 cm with STDs of 0.22, 0.11, and 0.01 cm were found, respectively. A similar tendency was observed for the Left-Right and Anterior-Posterior margins. Conclusion: The estimation of the required PTV margin strongly depends on the number of patients studied. According to this study, at least ∼60 patients are needed to calculate a statistically acceptable PTV margin for a criterion of STD < 0.1 cm. Numbers greater than ∼60 patients do little to increase the accuracy of the PTV margin estimation.
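    The widely used Van Herk margin recipe (margin = 2.5Σ + 0.7σ) underlying such analyses, and the shrinking spread of per-group margin estimates as group size grows, can be sketched as follows. The data layout and function names are hypothetical:

```python
import random
import statistics

def van_herk_margin(big_sigma, small_sigma):
    """Van Herk margin recipe: 2.5 * Sigma + 0.7 * sigma, with Sigma the
    systematic (per-patient mean) SD and sigma the random (daily) SD of
    setup error across the patient group."""
    return 2.5 * big_sigma + 0.7 * small_sigma

def margin_spread(shifts_by_patient, group_size, n_groups=64, seed=0):
    """SD of per-group margin estimates when only `group_size` patients are
    analyzed; shifts_by_patient maps patient id -> list of daily shifts (cm)."""
    rng = random.Random(seed)
    ids = list(shifts_by_patient)
    margins = []
    for _ in range(n_groups):
        group = rng.sample(ids, group_size)
        means = [statistics.fmean(shifts_by_patient[p]) for p in group]
        big_sigma = statistics.stdev(means)
        small_sigma = statistics.fmean(
            statistics.stdev(shifts_by_patient[p]) for p in group)
        margins.append(van_herk_margin(big_sigma, small_sigma))
    return statistics.stdev(margins)
```

    Resampling synthetic setup data this way reproduces the study's qualitative finding: the spread of margin estimates across groups falls as group size rises.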

  18. 14 CFR Appendix C to Part 91 - Operations in the North Atlantic (NAT) Minimum Navigation Performance Specifications (MNPS) Airspace

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... defined in section 1 of this appendix is as follows: (a) The standard deviation of lateral track errors shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean... standard deviation about the mean encompasses approximately 68 percent of the data and plus or minus 2...
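    The 68% (and, for two standard deviations, ~95%) coverage figures quoted for the lateral-track-error distribution follow directly from the normal distribution; a one-function sketch:

```python
import math

def normal_coverage(k):
    """Fraction of a normal distribution lying within +/- k standard
    deviations of the mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2.0))
```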

  19. Repeatable source, site, and path effects on the standard deviation for empirical ground-motion prediction models

    USGS Publications Warehouse

    Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.

    2011-01-01

    In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
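    Removing a repeatable variance component subtracts in quadrature from the total standard deviation; a minimal sketch (the component values below are illustrative, not the paper's estimates):

```python
import math

def reduced_sigma(sigma_total, *removed):
    """Aleatory SD after removing repeatable (epistemic) variance components:
    sigma' = sqrt(sigma_total^2 - sum(c^2) over removed components)."""
    return math.sqrt(sigma_total ** 2 - sum(c * c for c in removed))

# Illustration: removing a repeatable component of ~0.44 * sigma_total
# yields a single-site SD about 10% below the total, in the range reported.
single_site = reduced_sigma(1.0, 0.436)
```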

  20. International Intercomparison of Specific Absorption Rates in a Flat Absorbing Phantom in the Near-Field of Dipole Antennas

    PubMed Central

    Davis, Christopher C.; Beard, Brian B.; Tillman, Ahlia; Rzasa, John; Merideth, Eric; Balzano, Quirino

    2018-01-01

    This paper reports the results of an international intercomparison of the specific absorption rates (SARs) measured in a flat-bottomed container (flat phantom), filled with human head tissue simulant fluid, placed in the near-field of custom-built dipole antennas operating at 900 and 1800 MHz. These tests of the reliability of experimental SAR measurements have been conducted as part of a verification of the ways in which wireless phones are tested and certified for compliance with safety standards. The measurements are made using small electric-field probes scanned in the simulant fluid in the phantom to record the spatial SAR distribution. The intercomparison involved a standard flat phantom, antennas, power meters, and RF components being circulated among 15 different governmental and industrial laboratories. At the conclusion of each laboratory’s measurements, the following results were communicated to the coordinators: spatial SAR scans at 900 and 1800 MHz, and 1 and 10 g maximum spatial SAR averages for cubic volumes at 900 and 1800 MHz. The overall results, given as mean ± standard deviation, are the following: at 900 MHz, 1 g average 7.85 ± 0.76, 10 g average 5.16 ± 0.45; at 1800 MHz, 1 g average 18.44 ± 1.65, 10 g average 10.14 ± 0.85, all measured in units of watts per kilogram per watt of radiated power. PMID:29520117

  1. Relativistic MR–MP Energy Levels for L-shell Ions of Silicon

    DOE PAGES

    Santana, Juan A.; Lopez-Dauphin, Nahyr A.; Beiersdorfer, Peter

    2018-01-15

    Level energies are reported for Si v, Si vi, Si vii, Si viii, Si ix, Si x, Si xi, and Si xii. The energies have been calculated with the relativistic Multi-Reference Møller–Plesset Perturbation Theory method and include valence and K-vacancy states with nl up to 5f. The accuracy of the calculated level energies is established by comparison with the recommended data listed in the National Institute of Standards and Technology (NIST) online database. The average deviation of valence level energies ranges from 0.20 eV in Si v to 0.04 eV in Si xii. For K-vacancy states, the available values recommended in the NIST database are limited to Si xii and Si xiii. The average energy deviation is below 0.3 eV for K-vacancy states. The extensive and accurate data set presented here greatly augments the amount of available reference level energies. Here, we expect our data to ease the line identification of L-shell ions of Si in celestial sources and laboratory-generated plasmas, and to serve as energy references in the absence of more accurate laboratory measurements.

  2. Relativistic MR–MP Energy Levels for L-shell Ions of Silicon

    NASA Astrophysics Data System (ADS)

    Santana, Juan A.; Lopez-Dauphin, Nahyr A.; Beiersdorfer, Peter

    2018-01-01

    Level energies are reported for Si V, Si VI, Si VII, Si VIII, Si IX, Si X, Si XI, and Si XII. The energies have been calculated with the relativistic Multi-Reference Møller–Plesset Perturbation Theory method and include valence and K-vacancy states with nl up to 5f. The accuracy of the calculated level energies is established by comparison with the recommended data listed in the National Institute of Standards and Technology (NIST) online database. The average deviation of valence level energies ranges from 0.20 eV in Si V to 0.04 eV in Si XII. For K-vacancy states, the available values recommended in the NIST database are limited to Si XII and Si XIII. The average energy deviation is below 0.3 eV for K-vacancy states. The extensive and accurate data set presented here greatly augments the amount of available reference level energies. We expect our data to ease the line identification of L-shell ions of Si in celestial sources and laboratory-generated plasmas, and to serve as energy references in the absence of more accurate laboratory measurements.

  3. VizieR Online Data Catalog: Relativistic MR-MP energy levels for Si (Santana+, 2018)

    NASA Astrophysics Data System (ADS)

    Santana, J. A.; Lopez-Dauphin, N. A.; Beiersdorfer, P.

    2018-03-01

    Level energies are reported for Si V, Si VI, Si VII, Si VIII, Si IX, Si X, Si XI, and Si XII. The energies have been calculated with the relativistic Multi-Reference Møller–Plesset Perturbation Theory method and include valence and K-vacancy states with nl up to 5f. The accuracy of the calculated level energies is established by comparison with the recommended data listed in the National Institute of Standards and Technology (NIST) online database. The average deviation of valence level energies ranges from 0.20 eV in Si V to 0.04 eV in Si XII. For K-vacancy states, the available values recommended in the NIST database are limited to Si XII and Si XIII. The average energy deviation is below 0.3 eV for K-vacancy states. The extensive and accurate data set presented here greatly augments the amount of available reference level energies. We expect our data to ease the line identification of L-shell ions of Si in celestial sources and laboratory-generated plasmas, and to serve as energy references in the absence of more accurate laboratory measurements. (1 data file).

  4. Relativistic MR–MP Energy Levels for L-shell Ions of Silicon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santana, Juan A.; Lopez-Dauphin, Nahyr A.; Beiersdorfer, Peter

    Level energies are reported for Si v, Si vi, Si vii, Si viii, Si ix, Si x, Si xi, and Si xii. The energies have been calculated with the relativistic Multi-Reference Møller–Plesset Perturbation Theory method and include valence and K-vacancy states with nl up to 5f. The accuracy of the calculated level energies is established by comparison with the recommended data listed in the National Institute of Standards and Technology (NIST) online database. The average deviation of valence level energies ranges from 0.20 eV in Si v to 0.04 eV in Si xii. For K-vacancy states, the available values recommended in the NIST database are limited to Si xii and Si xiii. The average energy deviation is below 0.3 eV for K-vacancy states. The extensive and accurate data set presented here greatly augments the amount of available reference level energies. Here, we expect our data to ease the line identification of L-shell ions of Si in celestial sources and laboratory-generated plasmas, and to serve as energy references in the absence of more accurate laboratory measurements.

  5. Predictive model for disinfection by-product in Alexandria drinking water, northern west of Egypt.

    PubMed

    Abdullah, Ali M; Hussona, Salah El-dien

    2013-10-01

    Chlorine has been utilized in the early stages of water treatment processes as a disinfectant. Disinfection of drinking water reduces the risk of pathogenic infection but may pose a chemical threat to human health due to disinfection residues and their by-products (DBP) when organic and inorganic precursors are present in the water. In the last two decades, many modeling attempts have been made to predict the occurrence of DBP in drinking water. Models have been developed based on data generated in laboratory-scale and field-scale investigations. The objective of this paper is to develop a predictive model for DBP formation in the Alexandria governorate, in the northwest of Egypt, based on field-scale investigations as well as laboratory-controlled experimentation. The present study showed that the correlation coefficient between predicted and measured trihalomethanes (THM) was R² = 0.88; the minimum, maximum, and average deviation percentages between predicted and measured THM were 0.8%, 89.3%, and 17.8%, respectively. For dichloroacetic acid (DCAA), the correlation coefficient between predicted and measured values was R² = 0.98, with minimum, maximum, and average deviation percentages of 1.3%, 47.2%, and 16.6%. For trichloroacetic acid (TCAA), the correlation coefficient was R² = 0.98, with minimum, maximum, and average deviation percentages of 4.9%, 43.0%, and 16.0%.
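The deviation-percentage statistics quoted above (minimum, maximum, and average deviation between predicted and measured concentrations) follow from a simple formula; a minimal sketch, assuming deviation% = 100·|predicted − measured|/measured and using hypothetical THM values, not the study's data:

```python
# Deviation-percentage statistics of a predictive model, as reported in the
# abstract above. The THM concentrations below are hypothetical illustrations.

def deviation_stats(predicted, measured):
    """Return (min, max, mean) percent deviation of predictions from measurements."""
    devs = [100.0 * abs(p - m) / m for p, m in zip(predicted, measured)]
    return min(devs), max(devs), sum(devs) / len(devs)

thm_measured = [42.0, 55.0, 61.0, 48.0]   # hypothetical THM concentrations (ug/L)
thm_predicted = [40.5, 58.0, 60.0, 50.0]  # hypothetical model predictions

lo, hi, avg = deviation_stats(thm_predicted, thm_measured)
```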

  6. Electron heating at interplanetary shocks

    NASA Technical Reports Server (NTRS)

    Feldman, W. C.; Asbridge, J. R.; Bame, S. J.; Gosling, J. T.; Zwickl, R. D.

    1982-01-01

    Data for 41 forward interplanetary shocks show that the ratio of downstream to upstream electron temperatures, Te(d/u), is variable in the range between 1.0 (isothermal) and 3.0. On average, Te(d/u) = 1.5 with a standard deviation σe = 0.5. This ratio is less than the average ratio of proton temperatures across the same shocks, Tp(d/u) = 3.3 with σp = 2.5, as well as the average ratio of electron temperatures across the Earth's bow shock. Individual samples of Te(d/u) and Tp(d/u) appear to be weakly correlated with the number density ratio. However, the amounts of electron and proton heating are well correlated with each other as well as with the bulk velocity difference across each shock. The stronger shocks appear to heat the protons relatively more efficiently than they heat the electrons.

  7. A better norm-referenced grading using the standard deviation criterion.

    PubMed

    Chan, Wing-shing

    2014-01-01

    The commonly used norm-referenced grading assigns grades to rank-ordered students in fixed percentiles. It has the disadvantage of ignoring the actual distance of scores among students. A simple norm-referenced grading via standard deviation is suggested for routine educational grading. The number of standard deviations of a student's score from the class mean was used as the common yardstick to measure achievement level. The cumulative probability of a normal distribution was referenced to help decide the number of students included within a grade. Results of the top 12 students from a medical examination were used to illustrate this grading method. Grading by standard deviation seemed to produce better cutoffs, allocating an appropriate grade to students more according to their differential achievements, and had less chance of creating arbitrary cutoffs between two similarly scored students than grading by fixed percentile. Grading by standard deviation has more advantages and is more flexible than grading by fixed percentile for norm-referenced grading.
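The grading scheme described above can be sketched as follows; the z-score cutoffs used here are hypothetical illustrations, not the cutoffs proposed in the paper:

```python
# Sketch of norm-referenced grading via standard deviation: each student's
# grade depends on how many SDs their score lies from the class mean.
# The cutoffs below are hypothetical, not the paper's recommended values.
from statistics import mean, stdev

def grade_by_sd(scores, cutoffs=((1.0, "A"), (0.0, "B"), (-1.0, "C"))):
    """Assign letter grades based on z-scores relative to the class mean."""
    mu, sd = mean(scores), stdev(scores)
    grades = []
    for s in scores:
        z = (s - mu) / sd
        for cut, letter in cutoffs:
            if z >= cut:
                grades.append(letter)
                break
        else:
            grades.append("D")  # below the lowest cutoff
    return grades

scores = [92, 88, 75, 74, 73, 60]  # hypothetical exam scores
```

Unlike fixed percentiles, two students with nearly identical scores end up with nearly identical z-scores, so an arbitrary cutoff between them is less likely.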

  8. Personal Background Preparation Survey for early identification of nursing students at risk for attrition.

    PubMed

    Johnson, Craig W; Johnson, Ronald; Kim, Mira; McKee, John C

    2009-11-01

    During 2004 and 2005 orientations, all 187 and 188 new matriculates, respectively, in two southwestern U.S. nursing schools completed Personal Background and Preparation Surveys (PBPS) in the first predictive validity study of a diagnostic and prescriptive instrument for averting adverse academic status events (AASE) among nursing or health science professional students. One standard deviation increases in PBPS risks (p < 0.05) multiplied odds of first-year or second-year AASE by approximately 150%, controlling for school affiliation and underrepresented minority student (URMS) status. AASE odds one standard deviation above mean were 216% to 250% those one standard deviation below mean. Odds of first-year or second-year AASE for URMS one standard deviation above the 2004 PBPS mean were 587% those for non-URMS one standard deviation below mean. The PBPS consistently and significantly facilitated early identification of nursing students at risk for AASE, enabling proactive targeting of interventions for risk amelioration and AASE or attrition prevention. Copyright 2009, SLACK Incorporated.

  9. Demonstration of the Gore Module for Passive Ground Water Sampling

    DTIC Science & Technology

    2014-06-01

    Excerpt: ... replicate samples had a relative standard deviation (RSD) that was 20% or less. For the remaining analytes (PCE, cDCE, and chloroform), at least 70 ...

  10. Effects of PVA(Polyvinyl Alcohol) on Supercooling Phenomena of Water

    NASA Astrophysics Data System (ADS)

    Kumano, Hiroyuki; Saito, Akio; Okawa, Seiji; Takizawa, Hiroshi

    In this paper, the effects of a polymer additive on the supercooling of water were investigated experimentally. Polyvinyl alcohol (PVA) was used as the polymer, and the samples were prepared by dissolving PVA in ultra-pure water. The concentration, degree of polymerization, and degree of saponification of the PVA were varied as the experimental parameters. Each sample was cooled, and the temperature at the instant when ice appeared was measured. Since the freezing of supercooled water is a statistical phenomenon, many experiments were carried out and average degrees of supercooling were obtained for each experimental condition. As a result, it was found that PVA affects nucleation and that the degree of supercooling increases when PVA is added. In particular, the average degree of supercooling increases, and the standard deviation of the average degree of supercooling decreases, with increasing degree of saponification of the PVA. However, the average degree of supercooling was independent of the degree of polymerization of the PVA in the range of this study.

  11. Heritability of Intraindividual Mean and Variability of Positive and Negative Affect.

    PubMed

    Zheng, Yao; Plomin, Robert; von Stumm, Sophie

    2016-12-01

    Positive affect (e.g., attentiveness) and negative affect (e.g., upset) fluctuate over time. We examined genetic influences on interindividual differences in the day-to-day variability of affect (i.e., ups and downs) and in average affect over the duration of a month. Once a day, 17-year-old twins in the United Kingdom ( N = 447) rated their positive and negative affect online. The mean and standard deviation of each individual's daily ratings across the month were used as the measures of that individual's average affect and variability of affect. Analyses revealed that the average of negative affect was significantly heritable (.53), but the average of positive affect was not; instead, the latter showed significant shared environmental influences (.42). Fluctuations across the month were significantly heritable for both negative affect (.54) and positive affect (.34). The findings support the two-factor theory of affect, which posits that positive affect is more situational and negative affect is more dispositional.

  12. Numerical and experimental research on pentagonal cross-section of the averaging Pitot tube

    NASA Astrophysics Data System (ADS)

    Zhang, Jili; Li, Wei; Liang, Ruobing; Zhao, Tianyi; Liu, Yacheng; Liu, Mingsheng

    2017-07-01

    Averaging Pitot tubes have been widely used in many fields because of their simple structure and stable performance. This paper introduces a new shape of the cross-section of an averaging Pitot tube. Firstly, the structure of the averaging Pitot tube and the distribution of pressure taps are given. Then, a mathematical model of the airflow around it is formulated. After that, a series of numerical simulations are carried out to optimize the geometry of the tube. The distribution of the streamlines and pressures around the tube are given. To test its performance, a test platform was constructed in accordance with the relevant national standards and is described in this paper. Curves are provided, linking the values of the flow coefficient with the values of the Reynolds number. With a maximum deviation of only ±3%, the results of the flow coefficient obtained from the numerical simulations were in agreement with those obtained from experimental methods. The proposed tube has a stable flow coefficient and favorable metrological characteristics.

  13. Heritability of Intraindividual Mean and Variability of Positive and Negative Affect

    PubMed Central

    Zheng, Yao; Plomin, Robert; von Stumm, Sophie

    2016-01-01

    Positive affect (e.g., attentiveness) and negative affect (e.g., upset) fluctuate over time. We examined genetic influences on interindividual differences in the day-to-day variability of affect (i.e., ups and downs) and in average affect over the duration of a month. Once a day, 17-year-old twins in the United Kingdom (N = 447) rated their positive and negative affect online. The mean and standard deviation of each individual’s daily ratings across the month were used as the measures of that individual’s average affect and variability of affect. Analyses revealed that the average of negative affect was significantly heritable (.53), but the average of positive affect was not; instead, the latter showed significant shared environmental influences (.42). Fluctuations across the month were significantly heritable for both negative affect (.54) and positive affect (.34). The findings support the two-factor theory of affect, which posits that positive affect is more situational and negative affect is more dispositional. PMID:27729566

  14. Impact of baseline systolic blood pressure on visit-to-visit blood pressure variability: the Kailuan study.

    PubMed

    Wang, Anxin; Li, Zhifang; Yang, Yuling; Chen, Guojuan; Wang, Chunxue; Wu, Yuntao; Ruan, Chunyu; Liu, Yan; Wang, Yilong; Wu, Shouling

    2016-01-01

    To investigate the relationship between baseline systolic blood pressure (SBP) and visit-to-visit blood pressure variability in a general population. This is a prospective longitudinal cohort study on cardiovascular risk factors and cardiovascular or cerebrovascular events. Study participants attended a face-to-face interview every 2 years. Blood pressure variability was defined using the standard deviation and coefficient of variation of all SBP values at baseline and follow-up visits. The coefficient of variation is the ratio of the standard deviation to the mean SBP. We used multivariate linear regression models to test the relationships between SBP and standard deviation, and between SBP and coefficient of variation. Approximately 43,360 participants (mean age: 48.2±11.5 years) were selected. In multivariate analysis, after adjustment for potential confounders, baseline SBPs <120 mmHg were inversely related to standard deviation (P<0.001) and coefficient of variation (P<0.001). In contrast, baseline SBPs ≥140 mmHg were significantly positively associated with standard deviation (P<0.001) and coefficient of variation (P<0.001). Baseline SBPs of 120-140 mmHg were associated with the lowest standard deviation and coefficient of variation. The associations between baseline SBP and standard deviation, and between SBP and coefficient of variation during follow-ups showed a U curve. Both lower and higher baseline SBPs were associated with increased blood pressure variability. To control blood pressure variability, a good target SBP range for a general population might be 120-139 mmHg.
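The two variability measures used in the study are straightforward to compute: the standard deviation of a participant's SBP readings across visits, and the coefficient of variation, defined in the abstract as the ratio of the standard deviation to the mean SBP. A minimal sketch with hypothetical readings:

```python
# Visit-to-visit blood pressure variability: SD of a participant's SBP values
# and the coefficient of variation (SD / mean). Readings are hypothetical.
from statistics import mean, stdev

def bp_variability(sbp_readings):
    """Return (standard deviation, coefficient of variation) of SBP readings."""
    sd = stdev(sbp_readings)
    cv = sd / mean(sbp_readings)
    return sd, cv

sbp = [132, 128, 141, 136, 130]  # hypothetical SBP (mmHg) across five visits
sd, cv = bp_variability(sbp)
```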

  15. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    PubMed

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods for the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as comprehensive guidance for performing meta-analysis in different situations.
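For the first scenario (a trial reporting only the minimum, median, maximum, and sample size), the estimators proposed by Wan et al. take a simple closed form; a sketch, assuming the mean estimate (a + 2m + b)/4 and the range-based SD estimate whose denominator uses the normal quantile at (n − 0.375)/(n + 0.25):

```python
# Scenario-1 estimators from Wan et al. (2014): recover approximate sample
# mean and SD from the minimum a, median m, maximum b, and sample size n.
from statistics import NormalDist

def estimate_mean_sd(a, m, b, n):
    """Estimate (mean, SD) of a trial that reported only min/median/max and n."""
    mean_hat = (a + 2 * m + b) / 4.0
    # Range divided by twice the expected extreme normal quantile;
    # this is the sample-size-aware denominator that improves on Hozo et al.
    xi = 2 * NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    sd_hat = (b - a) / xi
    return mean_hat, sd_hat

# Hypothetical trial: min 10, median 25, max 44, n = 30.
mean_hat, sd_hat = estimate_mean_sd(10, 25, 44, 30)
```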

  16. Multichannel silicon WDM ring filters fabricated with DUV lithography

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Moo; Park, Sahnggi; Kim, Gyungock

    2008-09-01

    We have fabricated 9-channel silicon wavelength-division-multiplexing (WDM) ring filters using 193 nm deep-ultraviolet (DUV) lithography and investigated the spectral properties of the ring filters by comparing the transmission spectra with and without an upper cladding. The average channel-spacing of the 9-channel WDM ring filter with a polymeric upper cladding is measured about 1.86 nm with the standard deviation of the channel-spacing about 0.34 nm. The channel crosstalk is about -30 dB, and the minimal drop loss is about 2 dB.

  17. Top down arsenic uncertainty measurement in water and sediments from Guarapiranga dam (Brazil)

    NASA Astrophysics Data System (ADS)

    Faustino, M. G.; Lange, C. N.; Monteiro, L. R.; Furusawa, H. A.; Marques, J. R.; Stellato, T. B.; Soares, S. M. V.; da Silva, T. B. S. C.; da Silva, D. B.; Cotrim, M. E. B.; Pires, M. A. F.

    2018-03-01

    Assessing total arsenic measurements against legal thresholds demands more than an average-and-standard-deviation approach. Accordingly, an evaluation of analytical measurement uncertainty was conducted in order to comply with legal requirements and to allow the balance of arsenic between the water and sediment compartments. A top-down approach to measurement uncertainty was applied to evaluate arsenic concentrations in water and sediments from the Guarapiranga dam (São Paulo, Brazil). Laboratory quality-control data and arsenic interlaboratory test data were used in this approach to estimate the uncertainties associated with the methodology.

  18. Engineering Design Handbook. Maintainability Engineering Theory and Practice

    DTIC Science & Technology

    1976-01-01

    Excerpt: ... the probability density function (pdf) of the normal distribution (Ref. 22, Chapter 10, and Ref. 23, Chapter 1) has the equation where σ is the standard deviation ...

  19. Dielectric Spectroscopy of Human Blood

    NASA Astrophysics Data System (ADS)

    Bernal-Alvarado, J.; Sosa, M.; Morales, L.; Hernández, L. C.; Hernández-Cabrera, F.; Palomares, P.; Juárez, P.; Ramírez, R.

    2003-09-01

    Using reactive strips from a Bayer portable glucometer as a container, the electric impedance spectrum of human blood was obtained. The results were fitted using the distributed element of the Cole-Cole model, and the corresponding parameters were obtained. Several samples were studied, and the results for the electric parameters of the equivalent circuit are reported as average values and standard deviations. The samples were obtained from adult donors at the Guanajuato State Transfusion Center in México, selected by random sampling from healthy donors free of hepatitis and other diseases.

  20. Simulation Study Using a New Type of Sample Variance

    NASA Technical Reports Server (NTRS)

    Howe, D. A.; Lainson, K. J.

    1996-01-01

    We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
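For reference, the classical (non-overlapping) Allan deviation that TOTALVAR is compared against can be sketched as below; TOTALVAR's wrapped/periodic extension of the data is not reproduced here, so this is only the baseline statistic, not the new one:

```python
# Classical (non-overlapping) Allan deviation of fractional-frequency data --
# the baseline statistic that TOTALVAR was designed to improve on at long
# averaging times. TOTALVAR's periodic extension of the series is not shown.
import math

def allan_deviation(freq, m):
    """Allan deviation at averaging factor m (tau = m * tau0)."""
    # Average the frequency series in non-overlapping blocks of length m.
    n_blocks = len(freq) // m
    y = [sum(freq[i * m:(i + 1) * m]) / m for i in range(n_blocks)]
    # sigma_y^2(tau) = (1/2) * mean of squared successive block differences.
    diffs = [(y[k + 1] - y[k]) ** 2 for k in range(len(y) - 1)]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))
```

At long averaging times few block differences remain, which is why this estimator becomes noisy and why wrapped statistics such as TOTALVAR were proposed.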

  1. Estimation of the neural drive to the muscle from surface electromyograms

    NASA Astrophysics Data System (ADS)

    Hofmann, David

    Muscle force is highly correlated with the standard deviation of the surface electromyogram (sEMG) produced by the active muscle. Correctly estimating this quantity for non-stationary sEMG and understanding its relation to neural drive and muscle force is of paramount importance. The single constituents of the sEMG are called motor unit action potentials, whose biphasic amplitudes can interfere (termed amplitude cancellation), potentially affecting the standard deviation (Keenan et al. 2005). However, when certain conditions are met, the Campbell-Hardy theorem suggests that amplitude cancellation does not affect the standard deviation. By simulation of the sEMG, we verify the applicability of this theorem to myoelectric signals and investigate deviations from its conditions to obtain a more realistic setting. We find no difference in estimated standard deviation with and without interference, standing in stark contrast to previous results (Keenan et al. 2008, Farina et al. 2010). Furthermore, since the theorem provides us with the functional relationship between standard deviation and neural drive, we conclude that complex methods based on high density electrode arrays and blind source separation might not bear substantial advantages for neural drive estimation (Farina and Holobar 2016). Funded by NIH Grant Number 1 R01 EB022872 and NSF Grant Number 1208126.

  2. Comparison of a novel fixation device with standard suturing methods for spinal cord stimulators.

    PubMed

    Bowman, Richard G; Caraway, David; Bentley, Ishmael

    2013-01-01

    Spinal cord stimulation is a well-established treatment for chronic neuropathic pain of the trunk or limbs. Currently, the standard method of fixation is to affix the leads of the neuromodulation device to soft tissue, fascia or ligament, through the use of manually tied general suture. A novel semiautomated device is proposed that may be advantageous compared to the current standard. Comparison testing in an excised caprine spine and a simulated bench-top model was performed. Three tests were performed: 1) perpendicular pull from fascia of the caprine spine; 2) axial pull from fascia of the caprine spine; and 3) axial pull from Mylar film. Six samples of each configuration were tested for each scenario. Standard 2-0 Ethibond was compared with a novel semiautomated device (Anulex fiXate). Upon completion of testing, statistical analysis was performed for each scenario. For perpendicular pull in the caprine spine, the failure load for standard suture was 8.95 lbs with a standard deviation of 1.39, whereas for fiXate the load was 15.93 lbs with a standard deviation of 2.09. For axial pull in the caprine spine, the failure load for standard suture was 6.79 lbs with a standard deviation of 1.55, whereas for fiXate the load was 12.31 lbs with a standard deviation of 4.26. For axial pull in Mylar film, the failure load for standard suture was 10.87 lbs with a standard deviation of 1.56, whereas for fiXate the load was 19.54 lbs with a standard deviation of 2.24. These data suggest that the novel semiautomated device offers a method of fixation that may be utilized in lieu of standard suturing methods as a means of securing neuromodulation devices, and may in fact provide more secure fixation than standard suturing methods. © 2012 International Neuromodulation Society.

  3. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderon, E; Siergiej, D

    2014-06-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes are decreased. No set standard to resolve this difference in measurement exists. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source to surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of field size on detector response. An equation expressing the published Monte Carlo correction factors as a function of field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the percent deviation between both detectors from 14.8% to 3.4% agreement.
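The correction workflow described above (daisy-chain normalization to the 10.4 cm reference field through an intermediate field, followed by a published Monte Carlo correction factor) can be sketched as follows; all readings, the field choices, and the factor k are hypothetical illustrations, not the study's data:

```python
# Sketch of a daisy-chained, Monte Carlo-corrected small-field output factor.
# The small-field detector (e.g., a diode) measures the cone relative to an
# intermediate field; the reference detector ties the intermediate field to
# the 10.4 cm reference. All numbers below are hypothetical.

def corrected_output_factor(m_cone, m_int_small_det, m_int_ref_det, m_ref, k):
    """Daisy-chained output factor times a detector-specific MC correction k."""
    raw_of = (m_cone / m_int_small_det) * (m_int_ref_det / m_ref)
    return raw_of * k

# Hypothetical detector readings (nC) and correction factor for a small cone:
of = corrected_output_factor(0.52, 0.95, 0.97, 1.00, k=0.96)
```

Daisy-chaining keeps each detector operating in a field-size range where its response is trusted, while k absorbs the residual small-field perturbation.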

  4. Prediction of moisture variation during composting process: A comparison of mathematical models.

    PubMed

    Wang, Yongjiang; Ai, Ping; Cao, Hongliang; Liu, Zhigang

    2015-10-01

    This study was carried out to develop and compare three models for simulating the moisture content during composting. Model 1 described changes in water content using mass balance, while Model 2 introduced a liquid-gas transferred water term. Model 3 predicted changes in moisture content without complex degradation kinetics. Average deviations for Models 1-3 were 8.909, 7.422 and 5.374 kg m(-3), while standard deviations were 10.299, 8.374 and 6.095, respectively. The results showed that Model 1 is complex and involves more state variables, but can be used to reveal the effect of humidity on moisture content. Model 2 tested the hypothesis of liquid-gas transfer and was shown to be capable of predicting moisture content during composting. Model 3 could predict water content well without considering degradation kinetics. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Segmentation of Natural Gas Customers in Industrial Sector Using Self-Organizing Map (SOM) Method

    NASA Astrophysics Data System (ADS)

    Masbar Rus, A. M.; Pramudita, R.; Surjandari, I.

    2018-03-01

    The usage of natural gas, a non-renewable energy source, needs to be more efficient. Therefore, customer segmentation becomes necessary to set up a marketing strategy that is right on target or to determine an appropriate fee. This research was conducted at PT PGN using a data mining method, the Self-Organizing Map (SOM). The clustering process is based on the characteristics of the customers as a reference to create the segmentation of natural gas customers. The input variables of this research are the area, type of customer, industrial sector, average usage, standard deviation of usage, and total deviation. As a result, 37 clusters and 9 segments were formed from 838 customer records. These 9 segments were then employed to illustrate the general characteristics of the natural gas customers of PT PGN.
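A minimal sketch of the SOM training loop behind this kind of segmentation, assuming each customer is encoded as a numeric feature vector (e.g., average usage, standard deviation of usage, total deviation); the grid size and learning schedule here are illustrative, not PT PGN's configuration:

```python
# Minimal Self-Organizing Map sketch for customer segmentation. Each grid node
# holds a weight vector; training pulls the best-matching unit (BMU) and its
# grid neighbours toward each sampled customer vector. Parameters are
# illustrative, not the configuration used in the study.
import math
import random

def train_som(data, rows=3, cols=3, epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    rng = random.Random(seed)
    dim = len(data[0])
    # One randomly initialised weight vector per grid node.
    w = {(r, c): [rng.random() for _ in range(dim)]
         for r in range(rows) for c in range(cols)}
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                   # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)   # shrinking neighbourhood
        x = rng.choice(data)
        # BMU = node whose weights are closest to the sampled vector.
        bmu = min(w, key=lambda n: sum((wi - xi) ** 2 for wi, xi in zip(w[n], x)))
        for node, wv in w.items():
            d2 = (node[0] - bmu[0]) ** 2 + (node[1] - bmu[1]) ** 2
            h = math.exp(-d2 / (2 * sigma ** 2))      # neighbourhood function
            w[node] = [wi + lr * h * (xi - wi) for wi, xi in zip(wv, x)]
    return w

def assign_cluster(w, x):
    """Map a customer vector to its nearest SOM node (its cluster)."""
    return min(w, key=lambda n: sum((wi - xi) ** 2 for wi, xi in zip(w[n], x)))
```

After training, each grid node acts as a cluster prototype; nodes with similar prototypes can then be merged into a smaller number of segments, as the study does (37 clusters grouped into 9 segments).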

  6. Computer Programs for the Semantic Differential: Further Modifications.

    ERIC Educational Resources Information Center

    Lawson, Edwin D.; And Others

    The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…

  7. Determining a one-tailed upper limit for future sample relative reproducibility standard deviations.

    PubMed

    McClure, Foster D; Lee, Jung K

    2006-01-01

    A formula was developed to determine a one-tailed 100p% upper limit for future sample percent relative reproducibility standard deviations (RSD_R,% = 100·s_R/ȳ), where s_R is the sample reproducibility standard deviation, the square root of the sample repeatability variance (s_r²) plus the sample laboratory-to-laboratory variance (s_L²), i.e., s_R = sqrt(s_r² + s_L²), and ȳ is the sample mean. The future RSD_R,% is expected to arise from a population of potential RSD_R,% values whose true mean is ζ_R,% = 100·σ_R/μ, where σ_R and μ are the population reproducibility standard deviation and mean, respectively.
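The quantities defined in the abstract (reproducibility standard deviation from the repeatability and laboratory-to-laboratory components, then the percent relative form) combine as follows; a minimal sketch with hypothetical values:

```python
# Percent relative reproducibility standard deviation:
# s_R = sqrt(s_r^2 + s_L^2), RSD_R% = 100 * s_R / ybar.
# The component values below are hypothetical.
import math

def rsd_r_percent(s_r, s_L, ybar):
    """RSD_R,% from repeatability SD, lab-to-lab SD, and the sample mean."""
    s_R = math.sqrt(s_r ** 2 + s_L ** 2)
    return 100.0 * s_R / ybar

rsd = rsd_r_percent(s_r=0.30, s_L=0.40, ybar=10.0)  # 5.0 %
```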

  8. Articular Cartilage: Evaluation with Fluid-suppressed 7.0-T Sodium MR Imaging in Subjects with and Subjects without Osteoarthritis

    PubMed Central

    Babb, James; Xia, Ding; Chang, Gregory; Krasnokutsky, Svetlana; Abramson, Steven B.; Jerschow, Alexej; Regatte, Ravinder R.

    2013-01-01

    Purpose: To assess the potential use of sodium magnetic resonance (MR) imaging of cartilage, with and without fluid suppression by using an adiabatic pulse, for classifying subjects with versus subjects without osteoarthritis at 7.0 T. Materials and Methods: The study was approved by the institutional review board and was compliant with HIPAA. The knee cartilage of 19 asymptomatic (control subjects) and 28 symptomatic (osteoarthritis patients) subjects underwent 7.0-T sodium MR imaging with use of two different sequences: one without fluid suppression (radial three-dimensional sequence) and one with fluid suppression (inversion recovery [IR] wideband uniform rate and smooth truncation [WURST]). Fluid suppression was obtained by using IR with an adiabatic inversion pulse (WURST pulse). Mean sodium concentrations and their standard deviations were measured in the patellar, femorotibial medial, and lateral cartilage regions over four consecutive sections for each subject. The minimum, maximum, median, and average means and standard deviations were calculated over all measurements for each subject. The utility of these measures in the detection of osteoarthritis was evaluated by using logistic regression and the area under the receiver operating characteristic curve (AUC). Bonferroni correction was applied to the P values obtained with logistic regression. Results: Measurements from IR WURST were found to be significant predictors of both all osteoarthritis (Kellgren-Lawrence score of 1–4) and early osteoarthritis (Kellgren-Lawrence score of 1 or 2). The minimum standard deviation provided the highest AUC (0.83) with the highest accuracy (>78%), sensitivity (>82%), and specificity (>74%) for both the all osteoarthritis and early osteoarthritis groups. Conclusion: Quantitative sodium MR imaging at 7.0 T with fluid suppression by using adiabatic IR is a potential biomarker for osteoarthritis. © RSNA, 2013 PMID:23468572
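The AUC used as the evaluation metric above can be computed with the standard rank-based (Mann-Whitney) formulation; a minimal sketch with made-up scores, not the study's data:

```python
def auc(pos_scores, neg_scores):
    """Rank-based AUC (Mann-Whitney U divided by n_pos * n_neg): the
    probability that a randomly chosen positive case scores higher than a
    randomly chosen negative case, with ties counted as one half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical "minimum SD" values for patients (pos) and controls (neg)
print(auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2]))  # ≈ 0.889
```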

  9. Association between air pollution and acute childhood wheezy episodes: prospective observational study.

    PubMed Central

    Buchdahl, R.; Parker, A.; Stebbings, T.; Babiker, A.

    1996-01-01

    OBJECTIVE--To examine the association between the air pollutants ozone, sulphur dioxide, and nitrogen dioxide and the incidence of acute childhood wheezy episodes. DESIGN--Prospective observational study over one year. SETTING--District general hospital. SUBJECTS--1025 children attending the accident and emergency department with acute wheezy episodes; 4285 children with other conditions as the control group. MAIN OUTCOME MEASURES--Daily incidence of acute wheezy episodes. RESULTS--After seasonal adjustment, day to day variations in daily average concentrations of ozone and sulphur dioxide were found to have significant associations with the incidence of acute wheezy episodes. The strongest association was with ozone, for which a non-linear U shaped relation was seen. In terms of the incidence rate ratio (1 at a mean 24 hour ozone concentration of 40 microg/m3 (SD=19.1)), children were more likely to attend when the concentration was two standard deviations below the mean (incidence rate ratio=3.01; 95% confidence interval 2.17 to 4.18) or two standard deviations above the mean (1.34; 1.09 to 1.66). Sulphur dioxide had a weaker log-linear relation with incidence (1.12; 1.05 to 1.19 for each standard deviation (14.1) increase in sulphur dioxide concentration). Further adjustment for temperature and wind speed did not significantly alter these associations. CONCLUSIONS--Independent of season, temperature, and wind speed, fluctuations in concentrations of atmospheric ozone and sulphur dioxide are strongly associated with patterns of attendance at accident and emergency departments for acute childhood wheezy episodes. A critical ozone concentration seems to exist in the atmosphere above or below which children are more likely to develop symptoms. PMID:8597731

  10. Variations in cause and management of atrial fibrillation in a prospective registry of 15,400 emergency department patients in 46 countries: the RE-LY Atrial Fibrillation Registry.

    PubMed

    Oldgren, Jonas; Healey, Jeff S; Ezekowitz, Michael; Commerford, Patrick; Avezum, Alvaro; Pais, Prem; Zhu, Jun; Jansky, Petr; Sigamani, Alben; Morillo, Carlos A; Liu, Lisheng; Damasceno, Albertino; Grinvalds, Alex; Nakamya, Juliet; Reilly, Paul A; Keltai, Katalin; Van Gelder, Isabelle C; Yusufali, Afzal Hussein; Watanabe, Eiichi; Wallentin, Lars; Connolly, Stuart J; Yusuf, Salim

    2014-04-15

    Atrial fibrillation (AF) is the most common sustained arrhythmia; however, little is known about patients in a primary care setting from high-, middle-, and low-income countries. This prospective registry enrolled patients presenting to an emergency department with AF at 164 sites in 46 countries representing all inhabited continents. Patient characteristics were compared among 9 major geographic regions. Between September 2008 and April 2011, 15,400 patients were enrolled. The average age was 65.9, standard deviation 14.8 years, ranging from 57.2, standard deviation 18.8 years in Africa, to 70.1, standard deviation 13.4 years in North America, P<0.001. Hypertension was globally the most common risk factor for AF, ranging in prevalence from 41.6% in India to 80.7% in Eastern Europe, P<0.001. Rheumatic heart disease was present in only 2.2% of North American patients, in comparison with 21.5% in Africa and 31.5% in India, P<0.001. The use of oral anticoagulation among patients with a CHADS2 score of ≥2 was greatest in North America (65.7%) but was only 11.2% in China, P<0.001. The mean time in the therapeutic range was 62.4% in Western Europe, 50.9% in North America, but only between 32% and 40% in India, China, Southeast Asia, and Africa, P<0.001. There is a large global variation in age, risk factors, concomitant diseases, and treatment of AF among regions. Improving outcomes globally requires an understanding of this variation and the conduct of research focused on AF associated with different underlying conditions and treatment of AF and predisposing conditions in different socioeconomic settings.

  11. Mercury Human Exposure in Populations Living Around Lake Tana (Ethiopia).

    PubMed

    Habiba, G; Abebe, G; Bravo, Andrea G; Ermias, D; Staffan, Ǻ; Bishop, K

    2017-02-01

    A survey carried out in Lake Tana in 2015 found that Hg levels in some fish species exceeded internationally accepted safe levels for fish consumption. The current study assesses human exposure to Hg through fish consumption around Lake Tana, and in particular whether dietary intake of fish currently poses a health risk for Bahir Dar residents and anglers. Hair samples were collected from three different groups: anglers, college students and teachers, and daily laborers. A questionnaire covering gender, age, weight, activity, frequency of fish consumption, and origin of the fish eaten was completed by each participant. Mercury concentrations in hair were significantly higher (P value <0.05) for anglers (mean ± standard deviation 0.120 ± 0.199 μg/g) than for college students (0.018 ± 0.039 μg/g) or daily workers (0.016 ± 0.0095 μg/g). Anglers consumed fish more often than daily workers and the college group. Moreover, there was a strong correlation (P value <0.05) between the logarithm of total mercury in scalp hair and age. Mercury concentrations in the hair of men were on average twice those of women. Also, daily users of skin-lightening soap had 2.5 times more mercury in scalp hair than non-users. Despite the different sources of mercury exposure mentioned above, the mercury concentrations in the scalp hair of participants in this study were below levels deemed to pose a threat to health.

  12. Relationships between junction temperature, electroluminescence spectrum and ageing of light-emitting diodes

    NASA Astrophysics Data System (ADS)

    Vaskuri, Anna; Kärhä, Petri; Baumgartner, Hans; Kantamaa, Olli; Pulli, Tomi; Poikonen, Tuomas; Ikonen, Erkki

    2018-04-01

    We have developed spectral models describing the electroluminescence spectra of AlGaInP and InGaN light-emitting diodes (LEDs) consisting of the Maxwell-Boltzmann distribution and the effective joint density of states. One spectrum at a known temperature for one LED specimen is needed for calibrating the model parameters of each LED type. Then, the model can be used for determining the junction temperature optically from the spectral measurement, because the junction temperature is one of the free parameters. We validated the models using, in total, 53 spectra of three red AlGaInP LED specimens and 72 spectra of three blue InGaN LED specimens measured at various current levels and temperatures between 303 K and 398 K. For all the spectra of red LEDs, the standard deviation between the modelled and measured junction temperatures was only 2.4 K. InGaN LEDs have a more complex effective joint density of states. For the blue LEDs, the corresponding standard deviation was 11.2 K, but it decreased to 3.5 K when each LED specimen was calibrated separately. The method of determining junction temperature was further tested on white InGaN LEDs with luminophore coating and LED lamps. The average standard deviation was 8 K for white InGaN LED types. We have six years of ageing data available for a set of LED lamps and we estimated the junction temperatures of these lamps with respect to their ageing times. It was found that the LEDs operating at higher junction temperatures were frequently more damaged.

  13. Variability and rapid increase in body mass index during childhood are associated with adult obesity.

    PubMed

    Li, Shengxu; Chen, Wei; Sun, Dianjianyi; Fernandez, Camilo; Li, Jian; Kelly, Tanika; He, Jiang; Krousel-Wood, Marie; Whelton, Paul K

    2015-12-01

    Body mass index (BMI) in childhood predicts obesity in adults, but it is unknown whether rapid increase and variability in BMI during childhood are independent predictors of adult obesity. The study cohort consisted of 1622 Bogalusa Heart Study participants (aged 20 to 51 years at follow-up) who had been screened at least four times during childhood (aged 4-19 years). BMI rate of change during childhood for each individual was assessed by mixed models; BMI residual standard deviation (RSD) during childhood was used as a measure of variability. The average follow-up period was 20.9 years. A one standard deviation increase in rate of change in BMI during childhood was associated with a 1.39 [95% confidence interval (CI): 1.17-1.61] kg/m(2) increase in adult BMI and a 2.98 (95% CI: 2.42-3.56) cm increase in adult waist circumference, independently of childhood mean BMI. Similarly, a one standard deviation increase in RSD in BMI during childhood was associated with a 0.46 (95% CI: 0.23-0.69) kg/m(2) increase in adult BMI and a 1.42 (95% CI: 0.82-2.02) cm increase in adult waist circumference. The odds ratio for adult obesity progressively increased from the lowest to the highest quartile of BMI rate of change or RSD during childhood (P for trend < 0.05 for both). Rapid increase and greater variability in BMI during childhood appear to be independent risk factors for adult obesity. Our findings have implications for understanding body weight regulation and obesity development from childhood to adulthood. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
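The two childhood predictors, BMI rate of change and residual SD, can be sketched for a single child with an ordinary least-squares fit. The study used mixed models across the whole cohort; the ages and BMI values below are made up:

```python
import math

def bmi_slope_and_rsd(ages, bmis):
    """Per-child OLS fit of BMI on age. Returns (slope, residual SD):
    the slope is the BMI rate of change, the residual SD the variability."""
    n = len(ages)
    mean_a = sum(ages) / n
    mean_b = sum(bmis) / n
    sxx = sum((a - mean_a) ** 2 for a in ages)
    sxy = sum((a - mean_a) * (b - mean_b) for a, b in zip(ages, bmis))
    slope = sxy / sxx
    intercept = mean_b - slope * mean_a
    residuals = [b - (intercept + slope * a) for a, b in zip(ages, bmis)]
    # Residual SD with n - 2 degrees of freedom (two fitted parameters)
    rsd = math.sqrt(sum(r * r for r in residuals) / (n - 2))
    return slope, rsd

# Hypothetical child screened at ages 4, 8, 12, 16
slope, rsd = bmi_slope_and_rsd([4, 8, 12, 16], [15.5, 16.4, 17.8, 19.1])
print(round(slope, 3))  # 0.305 kg/m^2 per year
```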

  14. Dual-Polarization Observations of Slowly Varying Solar Emissions from a Mobile X-Band Radar

    PubMed Central

    Gabella, Marco; Leuenberger, Andreas

    2017-01-01

    The radio noise that comes from the Sun has been reported in the literature as a reference signal to check the quality of dual-polarization weather radar receivers for the S-band and C-band. In most cases, the focus was on relative calibration: horizontal and vertical polarizations were evaluated versus the reference signal mainly in terms of the standard deviation of the difference. This means that the investigated radar receivers were able to reproduce the slowly varying component of the microwave signal emitted by the Sun. A novel method, aimed at the absolute calibration of dual-polarization receivers, has recently been presented and applied for the C-band. This method requires the antenna beam axis to be pointed towards the center of the Sun for less than a minute. Standard deviations of the difference as low as 0.1 dB have been found for the Swiss radars. As far as the absolute calibration is concerned, the average differences were of the order of −0.6 dB (after noise subtraction). The method has been implemented on a mobile, X-band radar, and this paper presents the successful results that were obtained during the 2016 field campaign in Payerne (Switzerland). Despite a relatively poor Sun-to-Noise ratio, the “small” (~0.4 dB) amplitude of the slowly varying emission was captured and reproduced; the standard deviation of the difference between the radar and the reference was ~0.2 dB. The absolute calibration of the vertical and horizontal receivers was satisfactory. After the noise subtraction and atmospheric correction, the mean difference was close to 0 dB. PMID:28531164

  16. Geographical Variations in the Environmental Determinants of Physical Inactivity among U.S. Adults.

    PubMed

    An, Ruopeng; Li, Xinye; Jiang, Ning

    2017-10-31

    Physical inactivity is a major modifiable risk factor for morbidity, disability and premature mortality worldwide. This study assessed the geographical variations in the impact of environmental quality on physical inactivity among U.S. adults. Data on county-level prevalence of leisure-time physical inactivity came from the Behavioral Risk Factor Surveillance System. County environment was measured by the Environmental Quality Index (EQI), a comprehensive index of environmental conditions that affect human health. The overall EQI consists of five subdomains-air, water, land, social, and built environment. Geographically weighted regressions (GWRs) were performed to estimate and map county-specific impact of overall EQI and its five subdomains on physical inactivity prevalence. The prevalence of leisure-time physical inactivity among U.S. counties was 25% in 2005. On average, a one standard deviation decrease in the overall EQI was associated with an increase in county-level prevalence of leisure-time physical inactivity of nearly 1%. However, substantial geographical variations in the estimated environmental determinants of physical inactivity were present. The estimated changes in county-level prevalence of leisure-time physical inactivity resulting from a one standard deviation decrease in the overall EQI ranged from an increase of over 3% to a decrease of nearly 2% across U.S. counties. Analogously, the estimated changes in county-level prevalence of leisure-time physical inactivity resulting from a one standard deviation decrease in the EQI air, water, land, social, and built environment subdomains ranged from increases of 2.6%, 1.5%, 2.9%, 3.3%, and 1.7% to decreases of 2.9%, 1.4%, 2.4%, 2.4%, and 0.8% across U.S. counties, respectively. Given the substantial heterogeneities in the environmental determinants of physical inactivity, locally customized physical activity interventions are warranted to address the most concerning area-specific environmental issues.

  17. A computer aided treatment event recognition system in radiation therapy.

    PubMed

    Xia, Junyi; Mart, Christopher; Bayouth, John

    2014-01-01

    To develop an automated system to safeguard radiation therapy treatments by analyzing electronic treatment records and reporting treatment events. CATERS (Computer Aided Treatment Event Recognition System) was developed to detect treatment events by retrieving and analyzing electronic treatment records. CATERS is designed to make the treatment monitoring process more efficient by automating the search of the electronic record for possible deviations from the physician's intention, such as logical inconsistencies as well as aberrant treatment parameters (e.g., beam energy, dose, table position, prescription change, treatment overrides, etc.). Over a 5 month period (July 2012-November 2012), physicists were assisted by the CATERS software in conducting normal weekly chart checks with the aims of (a) determining the relative frequency of particular events in the authors' clinic and (b) incorporating these checks into CATERS. During this study period, 491 patients were treated at the University of Iowa Hospitals and Clinics for a total of 7692 fractions. All treatment records from the 5 month analysis period were evaluated using all the checks incorporated into CATERS after the training period. In total, 553 events were flagged as exceptions, although none of them had a significant dosimetric impact on patient treatments. These events included every known event type discovered during the trial period. A frequency analysis showed that the top three types of detected events were couch position override (3.2%), extra cone beam imaging (1.85%), and significant couch position deviation (1.31%). A significant couch deviation was defined as a treatment in which the couch vertical position differed from the mean by more than two times the standard deviation of all couch vertical positions, or the couch lateral/longitudinal position differed by more than three times the standard deviation of all couch lateral and longitudinal positions. On average, the application takes about 1 s per patient when executed on either a desktop computer or a mobile device. CATERS offers an effective tool to detect and report treatment events. Automation and rapid processing enable daily interrogation of the electronic record, alerting the medical physicist to deviations potentially days before the weekly check is performed. The output of CATERS could also be used as an important input to failure mode and effects analysis.
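The couch-deviation rule described above can be sketched as follows; treating the thresholds as deviations from the per-course mean is an assumption, and all couch positions below are hypothetical:

```python
import statistics

def flag_couch_deviations(vertical, lateral, longitudinal):
    """Flag fraction indices whose couch vertical position deviates from the
    mean by more than 2 SD, or whose lateral/longitudinal position deviates
    by more than 3 SD; a sketch of the rule described for CATERS."""
    def limits(values, k):
        return statistics.mean(values), k * statistics.stdev(values)
    mv, tv = limits(vertical, 2)
    ml, tl = limits(lateral, 3)
    mg, tg = limits(longitudinal, 3)
    return [i for i, (v, l, g) in enumerate(zip(vertical, lateral, longitudinal))
            if abs(v - mv) > tv or abs(l - ml) > tl or abs(g - mg) > tg]

# Hypothetical couch positions (cm) over ten fractions; the last vertical is aberrant
vert = [10.0] * 9 + [14.0]
lat = [0.2, 0.1, 0.2, 0.1, 0.2, 0.1, 0.2, 0.1, 0.2, 0.1]
lon = [50.0, 50.1, 50.0, 50.1, 50.0, 50.1, 50.0, 50.1, 50.0, 50.1]
print(flag_couch_deviations(vert, lat, lon))  # [9]
```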

  18. Age differences in big five behavior averages and variabilities across the adult life span: moving beyond retrospective, global summary accounts of personality.

    PubMed

    Noftle, Erik E; Fleeson, William

    2010-03-01

    In 3 intensive cross-sectional studies, age differences in behavior averages and variabilities were examined. Three questions were posed: Does variability differ among age groups? Does the sizable variability in young adulthood persist throughout the life span? Do past conclusions about trait development, based on trait questionnaires, hold up when actual behavior is examined? Three groups participated: young adults (18-23 years), middle-aged adults (35-55 years), and older adults (65-81 years). In 2 experience-sampling studies, participants reported their current behavior multiple times per day for 1- or 2-week spans. In a 3rd study, participants interacted in standardized laboratory activities on 8 occasions. First, results revealed a sizable amount of intraindividual variability in behavior for all adult groups, with average within-person standard deviations ranging from about half a point to well over 1 point on 6-point scales. Second, older adults were most variable in Openness, whereas young adults were most variable in Agreeableness and Emotional Stability. Third, most specific patterns of maturation-related age differences in actual behavior were more greatly pronounced and differently patterned than those revealed by the trait questionnaire method. When participants interacted in standardized situations, personality differences between young adults and middle-aged adults were larger, and older adults exhibited a more positive personality profile than they exhibited in their everyday lives.

  19. Deviation Value for Conventional X-ray in Hospitals in South Sulawesi Province from 2014 to 2016

    NASA Astrophysics Data System (ADS)

    Bachtiar, Ilham; Abdullah, Bualkar; Tahir, Dahlan

    2018-03-01

    This paper describes the conventional X-ray machine parameters tested in the region of South Sulawesi from 2014 to 2016. The objective of this research is to determine the deviation of each parameter of conventional X-ray machines. The test parameters were analyzed using quantitative methods with a participatory observational approach. Data collection was performed by measuring the output of conventional X-ray machines using a non-invasive X-ray multimeter. The test parameters include tube voltage (kV) accuracy, radiation output linearity, reproducibility, and radiation beam quality (half-value layer, HVL). The analysis shows that the four conventional X-ray test parameters have varying deviation ranges: the tube voltage (kV) accuracy has an average value of 4.12%, the radiation output linearity averages 4.47%, the reproducibility averages 0.62%, and the radiation beam quality (HVL) averages 3.00 mm.

  20. Experimental and theoretical studies of the crystal structures of bis-isoxazole-bis-methylene dinitrate (BIDN) and bis-isoxazole tetramethylene tetranitrate (BITN) by x-ray crystallography and density functional theory

    NASA Astrophysics Data System (ADS)

    Taylor, Decarlos E.; Sausa, Rosario C.

    2018-06-01

    The determination of crystal structures plays an important role in model testing and validation, and in understanding intra- and intermolecular interactions that influence crystal packing. Here, we report the molecular structures of two recently synthesized energetic molecules, 3,3-bis-isoxazole-5,5′-bis-methylene dinitrate (C8H6N4O8, BIDN) and bis-isoxazole tetramethylene tetranitrate (C10H8N6O14, BITN), determined by single crystal x-ray diffraction and solid state density functional theory (DFT). BIDN is composed of two planar alkyl nitrate groups (r.m.s. deviation = 0.0004 (1) Å) bonded to two planar azole rings (r.m.s. deviation = 0.001 (1) Å), whereas BITN is composed of four planar alkyl nitrate groups (average r.m.s. deviation = 0.002 (1) Å) bonded to two planar azole rings (average r.m.s. deviation = 0.002 (1) Å). The theoretical calculations predict very well the planarity of both the alkyl nitrate groups and the rings for both compounds. Furthermore, they predict well the bond lengths and angles of both molecules, with mean deviations of 0.018 Å (BIDN) and 0.017 Å (BITN) for bond lengths and 0.481° (BIDN) and 0.747° (BITN) for bond angles. Overall, the DFT-determined torsion angles agree well with those determined experimentally for both BIDN (average deviation = 1.139°) and BITN (average deviation = 0.604°). The theoretical cell constant values are in excellent agreement with those determined experimentally for both molecules, with the BIDN a cell value and β angle showing the largest deviations, 2.1% and -1.3%, respectively. Contacts between N and H atoms dominate the intermolecular interactions of BIDN, whereas contacts involving O and H atoms dominate the BITN intermolecular interactions. Electrostatic potential calculations at the B3LYP/6-31G* level reveal that BIDN exhibits a lower sensitivity to impact than BITN.

  1. A study on the measurement of wrist motion range using the iPhone 4 gyroscope application.

    PubMed

    Kim, Tae Seob; Park, David Dae Hwan; Lee, Young Bae; Han, Dong Gil; Shim, Jeong Su; Lee, Young Jig; Kim, Peter Chan Woo

    2014-08-01

    Measuring the range of motion (ROM) of the wrist is an important physical examination conducted in the Department of Hand Surgery for the purpose of evaluation, diagnosis, prognosis, and treatment of patients. The most common method for performing this task is by using a universal goniometer. This study was performed using 52 healthy participants to compare wrist ROM measurement using a universal goniometer and the iPhone 4 Gyroscope application. Participants did not have previous wrist illnesses and their measured values for wrist motion were compared in each direction. Normal values for wrist ROM are 73 degrees of flexion, 71 degrees of extension, 19 degrees of radial deviation, 33 degrees of ulnar deviation, 140 degrees of supination, and 60 degrees of pronation.The average measurement values obtained using the goniometer were 74.2 (5.1) degrees for flexion, 71.1 (4.9) degrees for extension, 19.7 (3.0) degrees for radial deviation, 34.0 (3.7) degrees for ulnar deviation, 140.8 (5.6) degrees for supination, and 61.1 (4.7) degrees for pronation. The average measurement values obtained using the iPhone 4 Gyroscope application were 73.7 (5.5) degrees for flexion, 70.8 (5.1) degrees for extension, 19.5 (3.0) degrees for radial deviation, 33.7 (3.9) degrees for ulnar deviation, 140.4 (5.7) degrees for supination, and 60.8 (4.9) degrees for pronation. The differences between the measurement values by the Gyroscope application and average value were 0.7 degrees for flexion, -0.2 degrees for extension, 0.5 degrees for radial deviation, 0.7 degrees for ulnar deviation, 0.4 degrees for supination, and 0.8 degrees for pronation. The differences in average value were not statistically significant. The authors introduced a new method of measuring the range of wrist motion using the iPhone 4 Gyroscope application that is simpler to use and can be performed by the patient outside a clinical setting.
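The comparison of the two methods comes down to mean paired differences; a minimal sketch using the per-direction means reported above as illustrative paired values:

```python
def mean_paired_difference(method_a, method_b):
    """Mean of per-direction paired differences between two measurement
    methods (method_a reading minus method_b reading)."""
    assert len(method_a) == len(method_b)
    return sum(a - b for a, b in zip(method_a, method_b)) / len(method_a)

# Per-direction means reported above: goniometer vs. iPhone gyroscope app
# (flexion, extension, radial dev., ulnar dev., supination, pronation)
goniometer = [74.2, 71.1, 19.7, 34.0, 140.8, 61.1]
gyroscope = [73.7, 70.8, 19.5, 33.7, 140.4, 60.8]
print(round(mean_paired_difference(goniometer, gyroscope), 2))  # 0.33 degrees
```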

  2. Empirical Model of Precipitating Ion Oval

    NASA Astrophysics Data System (ADS)

    Goldstein, Jerry

    2017-10-01

    In this brief technical report published maps of ion integral flux are used to constrain an empirical model of the precipitating ion oval. The ion oval is modeled as a Gaussian function of ionospheric latitude that depends on local time and the Kp geomagnetic index. The three parameters defining this function are the centroid latitude, width, and amplitude. The local time dependences of these three parameters are approximated by Fourier series expansions whose coefficients are constrained by the published ion maps. The Kp dependence of each coefficient is modeled by a linear fit. Optimization of the number of terms in the expansion is achieved via minimization of the global standard deviation between the model and the published ion map at each Kp. The empirical model is valid near the peak flux of the auroral oval; inside its centroid region the model reproduces the published ion maps with standard deviations of less than 5% of the peak integral flux. On the subglobal scale, average local errors (measured as a fraction of the point-to-point integral flux) are below 30% in the centroid region. Outside its centroid region the model deviates significantly from the H89 integral flux maps. The model's performance is assessed by comparing it with both local and global data from a 17 April 2002 substorm event. The model can reproduce important features of the macroscale auroral region but none of its subglobal structure, and not immediately following a substorm.
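The model structure described above, a Gaussian in latitude whose three parameters are Fourier series in local time with Kp-linear coefficients, can be sketched as follows. The coefficient layout and all values below are hypothetical, not the fitted model:

```python
import math

def ion_flux(lat_deg, mlt_hours, kp, coeffs):
    """Gaussian-in-latitude oval:
        flux = A * exp(-(lat - c)^2 / (2 * w^2))
    where centroid c, width w, and amplitude A are each a truncated Fourier
    series in magnetic local time whose coefficients vary linearly with Kp."""
    def evaluate(param):
        (a0, a0_kp), harmonics = param
        val = a0 + a0_kp * kp  # constant term, linear in Kp
        ang = 2.0 * math.pi * mlt_hours / 24.0
        for n, ((ac, ac_kp), (bc, bc_kp)) in enumerate(harmonics, start=1):
            val += (ac + ac_kp * kp) * math.cos(n * ang)
            val += (bc + bc_kp * kp) * math.sin(n * ang)
        return val
    c = evaluate(coeffs["centroid"])
    w = evaluate(coeffs["width"])
    A = evaluate(coeffs["amplitude"])
    return A * math.exp(-((lat_deg - c) ** 2) / (2.0 * w ** 2))

# Hypothetical coefficients: centroid 67 deg minus 1 deg/Kp, weak diurnal harmonic
coeffs = {
    "centroid": ((67.0, -1.0), [((1.5, 0.0), (0.0, 0.0))]),
    "width": ((3.0, 0.0), []),
    "amplitude": ((1.0e7, 0.0), []),
}
flux = ion_flux(63.5, 12.0, 2.0, coeffs)
```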

  3. Automated lung volumetry from routine thoracic CT scans: how reliable is the result?

    PubMed

    Haas, Matthias; Hamm, Bernd; Niehues, Stefan M

    2014-05-01

    Today, lung volumes can be easily calculated from chest computed tomography (CT) scans. Modern postprocessing workstations allow automated volume measurement of the data sets acquired. However, there are challenges in the use of lung volume as an indicator of pulmonary disease when it is obtained from routine CT. Intra-individual variation and methodologic aspects have to be considered. Our goal was to assess the reliability of volumetric measurements in routine CT lung scans. Forty adult cancer patients whose lungs were unaffected by the disease underwent routine chest CT scans at 3-month intervals, resulting in a total of 302 chest CT scans. Lung volume was calculated by automatic volumetry software. On average, 7.2 CT scans per patient were successfully evaluable (range 2-15). Intra-individual changes were assessed. In the set of patients investigated, lung volume was approximately normally distributed, with a mean of 5283 cm(3) (standard deviation = 947 cm(3), skewness = -0.34, and kurtosis = 0.16). Between different scans in one and the same patient, the median intra-individual standard deviation in lung volume was 853 cm(3) (16% of the mean lung volume). Automatic lung segmentation of routine chest CT scans allows a technically stable estimation of lung volume. However, substantial intra-individual variations have to be considered. A median intra-individual deviation of 16% in lung volume between different routine scans was found. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
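The median intra-individual standard deviation reported above (853 cm³, 16% of the mean lung volume) can be computed as follows; the scan volumes below are hypothetical:

```python
import statistics

def intraindividual_variation(volumes_by_patient):
    """Median of per-patient lung-volume standard deviations, and that
    median expressed as a percentage of the overall mean volume."""
    per_patient_sd = [statistics.stdev(v) for v in volumes_by_patient if len(v) >= 2]
    median_sd = statistics.median(per_patient_sd)
    all_volumes = [v for vols in volumes_by_patient for v in vols]
    overall_mean = statistics.mean(all_volumes)
    return median_sd, 100.0 * median_sd / overall_mean

scans = [
    [5000, 5200, 4800],        # patient 1: three routine CTs (cm^3)
    [6000, 6100],              # patient 2
    [4500, 4700, 4600, 4400],  # patient 3
]
median_sd, pct = intraindividual_variation(scans)
```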

  4. Rapid determination of tafenoquine in small volume human plasma samples by high-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Doyle, E; Fowles, S E; Summerfield, S; White, T J

    2002-03-25

    A method was developed for the determination of tafenoquine (I) in human plasma using high-performance liquid chromatography-tandem mass spectrometry. Prior to analysis, the protein in plasma samples was precipitated with methanol containing [2H3(15N)]tafenoquine (II) to act as an internal standard. The supernatant was injected onto a Genesis-C18 column without any further clean-up. The mass spectrometer was operated in the positive ion mode, employing a heat assisted nebulisation, electrospray interface. Ions were detected in multiple reaction monitoring mode. The assay required 50 microl of plasma and was precise and accurate within the range 2 to 500 ng/ml. The average within-run and between-run relative standard deviations were < 7% at 2 ng/ml and greater concentrations. The average accuracy of validation standards was generally within +/- 4% of the nominal concentration. There was no evidence of instability of I in human plasma following three complete freeze-thaw cycles and samples can safely be stored for at least 8 months at approximately -70 degrees C. The method was very robust and has been successfully applied to the analysis of clinical samples from patients and healthy volunteers dosed with I.

  5. Packing Fraction of a Two-dimensional Eden Model with Random-Sized Particles

    NASA Astrophysics Data System (ADS)

    Kobayashi, Naoki; Yamazaki, Hiroshi

    2018-01-01

    We have performed a numerical simulation of a two-dimensional Eden model with random-size particles. In the present model, the particle radii are generated from a Gaussian distribution with mean μ and standard deviation σ. First, we have examined the bulk packing fraction for the Eden cluster and investigated the effects of the standard deviation and the total number of particles NT. We show that the bulk packing fraction depends on the number of particles and the standard deviation. In particular, for the dependence on the standard deviation, we have determined the asymptotic value of the bulk packing fraction in the limit of the dimensionless standard deviation. This value is larger than the packing fraction obtained in a previous study of the Eden model with uniform-size particles. Secondly, we have investigated the packing fraction of the entire Eden cluster including the effect of the interface fluctuation. We find that the entire packing fraction depends on the number of particles while it is independent of the standard deviation, in contrast to the bulk packing fraction. In a similar way to the bulk packing fraction, we have obtained the asymptotic value of the entire packing fraction in the limit NT → ∞. The obtained value of the entire packing fraction is smaller than that of the bulk value. This fact suggests that the interface fluctuation of the Eden cluster influences the packing fraction.
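    A minimal sketch of the particle-radius generation step described above, assuming non-positive Gaussian draws are simply redrawn (the paper's exact truncation convention is not stated in the abstract):

```python
import random

def sample_radii(n, mu, sigma, seed=0):
    """Draw n particle radii from N(mu, sigma), redrawing non-positive
    values -- one simple convention for a random-sized Eden model."""
    rng = random.Random(seed)
    radii = []
    while len(radii) < n:
        r = rng.gauss(mu, sigma)
        if r > 0:
            radii.append(r)
    return radii

# e.g. 1000 radii with mean 1.0 and dimensionless standard deviation 0.2
radii = sample_radii(1000, 1.0, 0.2)
```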

  6. Complexities of follicle deviation during selection of a dominant follicle in Bos taurus heifers.

    PubMed

    Ginther, O J; Baldrighi, J M; Siddiqui, M A R; Araujo, E R

    2016-11-01

    Follicle deviation during a follicular wave is a continuation in growth rate of the dominant follicle (F1) and decreased growth rate of the largest subordinate follicle (F2). The reliability of using an F1 of 8.5 mm to represent the beginning of expected deviation for experimental purposes during waves 1 and 2 (n = 26 per wave) was studied daily in heifers. Each wave was subgrouped as follows: standard subgroup (F1 larger than F2 for 2 days preceding deviation and F2 > 7.0 mm on the day of deviation), undersized subgroup (F2 did not attain 7.0 mm by the day of deviation), and switched subgroup (F2 larger than F1 at least once on the 2 days before or on the day of deviation). For each wave, mean differences in diameter between F1 and F2 changed abruptly at expected deviation in the standard subgroup but began 1 day before expected deviation in the undersized and switched subgroups. Concentrations of FSH in the wave-stimulating FSH surge and an increase in LH centered on expected deviation did not differ among subgroups. Results for each wave indicated that (1) expected deviation (F1, 8.5 mm) was a reliable representation of actual deviation in the standard subgroup but not in the undersized and switched subgroups; (2) concentrations of the gonadotropins normalized to expected deviation were similar among the three subgroups, indicating that the day of deviation was related to diameter of F1 and not F2; and (3) defining an expected day of deviation for experimental use should consider both diameter of F1 and the characteristics of deviation. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Determination of indium in geological materials by electrothermal-atomization atomic absorption spectrometry with a tungsten-impregnated graphite furnace

    USGS Publications Warehouse

    Zhou, L.; Chao, T.T.; Meier, A.L.

    1984-01-01

    The sample is fused with lithium metaborate and the melt is dissolved in 15% (v/v) hydrobromic acid. Iron(III) is reduced with ascorbic acid to avoid its coextraction with indium as the bromide into methyl isobutyl ketone. Impregnation of the graphite furnace with sodium tungstate, and the presence of lithium metaborate and ascorbic acid in the reaction medium improve the sensitivity and precision. The limits of determination are 0.025-16 mg kg-1 indium in the sample. For 22 geological reference samples containing more than 0.1 mg kg-1 indium, relative standard deviations ranged from 3.0 to 8.5% (average 5.7%). Recoveries of indium added to various samples ranged from 96.7 to 105.6% (average 100.2%). ?? 1984.

  8. Method and apparatus for in-situ detection and isolation of aircraft engine faults

    NASA Technical Reports Server (NTRS)

    Bonanni, Pierino Gianni (Inventor); Brunell, Brent Jerome (Inventor)

    2007-01-01

    A method for performing a fault estimation based on residuals of detected signals includes determining an operating regime based on a plurality of parameters, extracting predetermined noise standard deviations of the residuals corresponding to the operating regime and scaling the residuals, calculating a magnitude of a measurement vector of the scaled residuals and comparing the magnitude to a decision threshold value, extracting an average, or mean direction and a fault level mapping for each of a plurality of fault types, based on the operating regime, calculating a projection of the measurement vector onto the average direction of each of the plurality of fault types, determining a fault type based on which projection is maximum, and mapping the projection to a continuous-valued fault level using a lookup table.
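    The claimed sequence (scale residuals by regime-specific noise standard deviations, threshold the measurement-vector magnitude, project onto each fault type's mean direction, pick the maximum) can be sketched as below. All names and values are hypothetical, and the final lookup-table mapping to a continuous fault level is omitted:

```python
import math

def isolate_fault(residuals, noise_sd, fault_dirs, threshold):
    """Sketch of residual-based fault detection and isolation (illustrative)."""
    # Scale each residual by its regime-specific noise standard deviation.
    scaled = [r / s for r, s in zip(residuals, noise_sd)]
    # Compare the measurement-vector magnitude against the decision threshold.
    mag = math.sqrt(sum(x * x for x in scaled))
    if mag < threshold:
        return None  # no fault detected
    # Project onto each fault type's average (mean) direction; the fault
    # type with the maximum projection is declared.
    return max(fault_dirs, key=lambda name: sum(
        x * d for x, d in zip(scaled, fault_dirs[name])))
```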

  9. Historical baselines of coral cover on tropical reefs as estimated by expert opinion

    PubMed Central

    Cheung, William W.L.; Bruno, John F.

    2018-01-01

    Coral reefs are important habitats that represent global marine biodiversity hotspots and provide important benefits to people in many tropical regions. However, coral reefs are becoming increasingly threatened by climate change, overfishing, habitat destruction, and pollution. Historical baselines of coral cover are important to understand how much coral cover has been lost, e.g., to avoid the ‘shifting baseline syndrome’. There are few quantitative observations of coral reef cover prior to the industrial revolution, and therefore baselines of coral reef cover are difficult to estimate. Here, we use expert and ocean-user opinion surveys to estimate baselines of global coral reef cover. The overall mean estimated baseline coral cover was 59% (±19% standard deviation), compared to an average of 58% (±18% standard deviation) estimated by professional scientists. We did not find evidence of the shifting baseline syndrome, whereby respondents who first observed coral reefs more recently report lower estimates of baseline coral cover. These estimates of historical coral reef baseline cover are important for scientists, policy makers, and managers to understand the extent to which coral reefs have become depleted and to set appropriate recovery targets. PMID:29379692

  10. Spectroscopy of H3+ based on a new high-accuracy global potential energy surface.

    PubMed

    Polyansky, Oleg L; Alijah, Alexander; Zobov, Nikolai F; Mizus, Irina I; Ovsyannikov, Roman I; Tennyson, Jonathan; Lodi, Lorenzo; Szidarovszky, Tamás; Császár, Attila G

    2012-11-13

    The molecular ion H(3)(+) is the simplest polyatomic and poly-electronic molecular system, and its spectrum constitutes an important benchmark for which precise answers can be obtained ab initio from the equations of quantum mechanics. Significant progress in the computation of the ro-vibrational spectrum of H(3)(+) is discussed. A new, global potential energy surface (PES) based on ab initio points computed with an average accuracy of 0.01 cm(-1) relative to the non-relativistic limit has recently been constructed. An analytical representation of these points is provided, exhibiting a standard deviation of 0.097 cm(-1). Problems with earlier fits are discussed. The new PES is used for the computation of transition frequencies. Recently measured lines at visible wavelengths combined with previously determined infrared ro-vibrational data show that an accuracy of the order of 0.1 cm(-1) is achieved by these computations. In order to achieve this degree of accuracy, relativistic, adiabatic and non-adiabatic effects must be properly accounted for. The accuracy of these calculations facilitates the reassignment of some measured lines, further reducing the standard deviation between experiment and theory.

  11. Fixed-pattern noise correction method based on improved moment matching for a TDI CMOS image sensor.

    PubMed

    Xu, Jiangtao; Nie, Huafeng; Nie, Kaiming; Jin, Weimin

    2017-09-01

    In this paper, an improved moment matching method based on a spatial correlation filter (SCF) and bilateral filter (BF) is proposed to correct the fixed-pattern noise (FPN) of a time-delay-integration CMOS image sensor (TDI-CIS). First, the values of row FPN (RFPN) and column FPN (CFPN) are estimated and added to the original image through SCF and BF, respectively. Then the filtered image is processed by an improved moment matching method with a moving window. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in an image captured under uniform illumination, the standard deviation of the row mean vector (SDRMV) decreases from 5.6761 LSB to 0.1948 LSB, while the standard deviation of the column mean vector (SDCMV) decreases from 15.2005 LSB to 13.1949 LSB. In addition, for different images captured by different TDI-CISs, the average decreases of SDRMV and SDCMV are 5.4922 LSB and 2.0357 LSB, respectively. Comparative experimental results indicate that the proposed method can effectively correct the FPNs of different TDI-CISs while maintaining image details without any auxiliary equipment.
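    The core moment-matching step, adjusting each column's mean and standard deviation to the global image statistics, can be sketched as follows. This is a simplified version without the paper's SCF/BF pre-filtering or moving window:

```python
from statistics import mean, stdev

def moment_match_columns(img):
    """Classic column moment matching: map each column's (mean, std)
    onto the global image (mean, std) to suppress column FPN."""
    cols = list(zip(*img))
    all_px = [p for row in img for p in row]
    g_mean, g_std = mean(all_px), stdev(all_px)
    out_cols = []
    for c in cols:
        c_mean, c_std = mean(c), stdev(c)
        gain = g_std / c_std if c_std else 1.0
        out_cols.append([g_mean + gain * (p - c_mean) for p in c])
    # Transpose back to row-major order.
    return [list(r) for r in zip(*out_cols)]
```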

  12. Development of a Dual-Pump CARS System for Measurements in a Supersonic Combusting Free Jet

    NASA Technical Reports Server (NTRS)

    Magnotti, Gaetano; Cutler, Andrew D.; Danehy, Paul

    2012-01-01

    This work describes the development of a dual-pump CARS system for simultaneous measurements of temperature and absolute mole fraction of N2, O2 and H2 in a laboratory scale supersonic combusting free jet. Changes to the experimental set-up and the data analysis to improve the quality of the measurements in this turbulent, high-temperature reacting flow are described. The accuracy and precision of the instrument have been determined using data collected in a Hencken burner flame. For temperature above 800 K, errors in absolute mole fraction are within 1.5, 0.5, and 1% of the total composition for N2, O2 and H2, respectively. Estimated standard deviations based on 500 single shots are between 10 and 65 K for the temperature, between 0.5 and 1.7% of the total composition for O2, and between 1.5 and 3.4% for N2. The standard deviation of H2 is 10% of the average measured mole fraction. Results obtained in the jet with and without combustion are illustrated, and the capabilities and limitations of the dual-pump CARS instrument discussed.

  13. Historical baselines of coral cover on tropical reefs as estimated by expert opinion.

    PubMed

    Eddy, Tyler D; Cheung, William W L; Bruno, John F

    2018-01-01

    Coral reefs are important habitats that represent global marine biodiversity hotspots and provide important benefits to people in many tropical regions. However, coral reefs are becoming increasingly threatened by climate change, overfishing, habitat destruction, and pollution. Historical baselines of coral cover are important to understand how much coral cover has been lost, e.g., to avoid the 'shifting baseline syndrome'. There are few quantitative observations of coral reef cover prior to the industrial revolution, and therefore baselines of coral reef cover are difficult to estimate. Here, we use expert and ocean-user opinion surveys to estimate baselines of global coral reef cover. The overall mean estimated baseline coral cover was 59% (±19% standard deviation), compared to an average of 58% (±18% standard deviation) estimated by professional scientists. We did not find evidence of the shifting baseline syndrome, whereby respondents who first observed coral reefs more recently report lower estimates of baseline coral cover. These estimates of historical coral reef baseline cover are important for scientists, policy makers, and managers to understand the extent to which coral reefs have become depleted and to set appropriate recovery targets.

  14. Tracked ultrasound calibration studies with a phantom made of LEGO bricks

    NASA Astrophysics Data System (ADS)

    Soehl, Marie; Walsh, Ryan; Rankin, Adam; Lasso, Andras; Fichtinger, Gabor

    2014-03-01

    In this study, spatial calibration of tracked ultrasound was compared by using a calibration phantom made of LEGO® bricks and two 3-D printed N-wire phantoms. METHODS: The accuracy and variance of calibrations were compared under a variety of operating conditions. Twenty trials were performed using an electromagnetic tracking device with a linear probe, and three trials were performed using varied probes, varied tracking devices, and the three aforementioned phantoms. The accuracy and variance of the spatial calibrations, assessed through the standard deviation and error of the 3-D image reprojection, were used to compare the calibrations produced with the phantoms. RESULTS: This study found no significant difference between the measured variables of the calibrations. The average standard deviation of multiple 3-D image reprojections with the highest performing printed phantom and those from the phantom made of LEGO® bricks differed by 0.05 mm, and the error of the reprojections differed by 0.13 mm. CONCLUSION: Given that the phantom made of LEGO® bricks is significantly less expensive, more readily available, and more easily modified than precision-machined N-wire phantoms, it promises to be a viable calibration tool, especially for quick laboratory research and proof-of-concept implementations of tracked ultrasound navigation.

  15. Antarctic Surface Temperatures Using Satellite Infrared Data from 1979 Through 1995

    NASA Technical Reports Server (NTRS)

    Comiso, Josefino C.; Stock, Larry

    1997-01-01

    The large scale spatial and temporal variations of surface ice temperature over the Antarctic region are studied using infrared data derived from the Nimbus-7 Temperature Humidity Infrared Radiometer (THIR) from 1979 through 1985 and from the NOAA Advanced Very High Resolution Radiometer (AVHRR) from 1984 through 1995. Enhanced techniques suitable for the polar regions for cloud masking and atmospheric correction were used before converting radiances to surface temperatures. The observed spatial distribution of surface temperature is highly correlated with ice sheet topography and agrees well with ice station temperatures, with standard deviations of 2K to 4K. The average surface ice temperature over the entire continent fluctuates by about 30K from summer to winter, while that over the Antarctic Plateau varies by about 45K. Interannual variations in surface temperature are highest at the Antarctic Plateau and the ice shelves (e.g., Ross and Ronne), with a periodic cycle of about 5 years and standard deviations of about 11K and 9K, respectively. Despite large temporal variability, especially in some regions, a regression analysis that includes removal of the seasonal cycle shows no apparent trend in temperature during the period 1979 through 1995.

  16. Model for threading dislocations in metamorphic tandem solar cells on GaAs (001) substrates

    NASA Astrophysics Data System (ADS)

    Song, Yifei; Kujofsa, Tedi; Ayers, John E.

    2018-02-01

    We present an approximate model for the threading dislocations in III-V heterostructures and have applied this model to study the defect behavior in metamorphic triple-junction solar cells. This model represents a new approach in which the coefficient for second-order threading dislocation annihilation and coalescence reactions is considered to be determined by the length of misfit dislocations, LMD, in the structure, and we therefore refer to it as the LMD model. On the basis of this model we have compared the average threading dislocation densities in the active layers of triple junction solar cells using linearly-graded buffers of varying thicknesses as well as S-graded (complementary error function) buffers with varying thicknesses and standard deviation parameters. We have shown that the threading dislocation densities in the active regions of metamorphic tandem solar cells depend not only on the thicknesses of the buffer layers but on their compositional grading profiles. The use of S-graded buffer layers instead of linear buffers resulted in lower threading dislocation densities. Moreover, the threading dislocation densities depended strongly on the standard deviation parameters used in the S-graded buffers, with smaller values providing lower threading dislocation densities.

  17. Statistical wind analysis for near-space applications

    NASA Astrophysics Data System (ADS)

    Roney, Jason A.

    2007-09-01

    Statistical wind models were developed based on the existing observational wind data for near-space altitudes between 60 000 and 100 000 ft (18-30 km) above ground level (AGL) at two locations, Akron, OH, USA, and White Sands, NM, USA. These two sites are envisioned as playing a crucial role in the first flights of high-altitude airships. The analysis shown in this paper has not been previously applied to this region of the stratosphere for such an application. Standard statistics were compiled for these data, such as the mean, median, maximum wind speed, and standard deviation, and the data were modeled with Weibull distributions. These statistics indicated that, on a yearly average, there is a lull or “knee” in the wind between 65 000 and 72 000 ft AGL (20-22 km). From the standard statistics, trends at both locations indicated substantial seasonal variation in the mean wind speed at these heights. The yearly and monthly statistical modeling indicated that Weibull distributions were a reasonable model for the data. Forecasts and hindcasts were done by using a Weibull model based on 2004 data and comparing the model with the 2003 and 2005 data. The 2004 distribution was also a reasonable model for these years. Lastly, the Weibull distribution and cumulative function were used to predict the 50%, 95%, and 99% winds, which are directly related to the expected power requirements of a near-space station-keeping airship. These values indicated that using only the standard deviation of the mean may underestimate the operational conditions.
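    The final step, reading the 50%, 95%, and 99% winds off a fitted Weibull model, follows directly from the Weibull inverse CDF. The shape and scale below are illustrative, not the paper's fitted values:

```python
import math

def weibull_quantile(p, shape, scale):
    """Wind speed not exceeded with probability p under a Weibull(shape, scale)
    model: inverting F(v) = 1 - exp(-(v/scale)**shape)."""
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

# e.g. the 50%, 95%, and 99% design winds for shape k=2, scale c=10 m/s
q50, q95, q99 = (weibull_quantile(p, 2.0, 10.0) for p in (0.50, 0.95, 0.99))
```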

  18. 40 CFR 90.708 - Cumulative Sum (CumSum) procedure.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... is 5.0×σ, and is a function of the standard deviation, σ. σ = the sample standard deviation and is ... individual engine. FEL = Family Emission Limit (the standard if no FEL). F = 0.25×σ. (2) After each test pursuant ...
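    The regulatory excerpt above is fragmentary, but the general one-sided cumulative-sum statistic such procedures build on has the standard recursive form C_i = max(0, C_{i-1} + x_i − (limit + slack)). A generic sketch, with parameters illustrative rather than the CFR's exact definitions:

```python
def cusum(samples, limit, slack):
    """One-sided upper CUSUM: accumulate exceedances of (limit + slack),
    clipping at zero, so the statistic grows only on sustained exceedance."""
    c, out = 0.0, []
    for x in samples:
        c = max(0.0, c + x - (limit + slack))
        out.append(c)
    return out
```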

  19. Anthropometric measurement standardization in the US-affiliated pacific: Report from the Children's Healthy Living Program.

    PubMed

    Li, Fenfang; Wilkens, Lynne R; Novotny, Rachel; Fialkowski, Marie K; Paulino, Yvette C; Nelson, Randall; Bersamin, Andrea; Martin, Ursula; Deenik, Jonathan; Boushey, Carol J

    2016-05-01

    Anthropometric standardization is essential to obtain reliable and comparable data from different geographical regions. The purpose of this study is to describe anthropometric standardization procedures and findings from the Children's Healthy Living (CHL) Program, a study on childhood obesity in 11 jurisdictions in the US-Affiliated Pacific Region, including Alaska and Hawai'i. Zerfas criteria were used to compare the measurement components (height, waist, and weight) between each trainee and a single expert anthropometrist. In addition, intra- and inter-rater technical error of measurement (TEM), coefficient of reliability, and average bias relative to the expert were computed. From September 2012 to December 2014, 79 trainees participated in at least 1 of 29 standardization sessions. A total of 49 trainees passed either standard or alternate Zerfas criteria and were qualified to assess all three measurements in the field. Standard Zerfas criteria were difficult to achieve: only 2 of 79 trainees passed at their first training session. Intra-rater TEM estimates for the 49 trainees compared well with the expert anthropometrist. Average biases were within acceptable limits of deviation from the expert. Coefficient of reliability was above 99% for all three anthropometric components. Standardization based on comparison with a single expert ensured the comparability of measurements from the 49 trainees who passed the criteria. The anthropometric standardization process and protocols followed by CHL resulted in 49 standardized field anthropometrists and have helped build capacity in the health workforce in the Pacific Region. Am. J. Hum. Biol. 28:364-371, 2016. © 2015 Wiley Periodicals, Inc.
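    The intra-rater technical error of measurement and coefficient of reliability reported above follow standard anthropometric formulas, sketched here on invented duplicate measurements:

```python
import math
from statistics import variance

def tem(pairs):
    """Intra-rater technical error of measurement from duplicate measurements:
    TEM = sqrt(sum(d^2) / (2n)), the standard anthropometric formula."""
    n = len(pairs)
    return math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * n))

def reliability(pairs):
    """Coefficient of reliability R = 1 - TEM^2 / s^2, where s^2 is the
    inter-subject variance of the per-subject measurement means."""
    means = [(a + b) / 2 for a, b in pairs]
    return 1 - tem(pairs) ** 2 / variance(means)
```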

  20. Acoustic Correlates of Compensatory Adjustments to the Glottic and Supraglottic Structures in Patients with Unilateral Vocal Fold Paralysis

    PubMed Central

    2015-01-01

    The goal of this study was to analyse perceptually and acoustically the voices of patients with Unilateral Vocal Fold Paralysis (UVFP) and compare them to the voices of normal subjects. These voices were analysed perceptually with the GRBAS scale and acoustically using the following parameters: mean fundamental frequency (F0), standard-deviation of F0, jitter (ppq5), shimmer (apq11), mean harmonics-to-noise ratio (HNR), mean first (F1) and second (F2) formants frequency, and standard-deviation of F1 and F2 frequencies. Statistically significant differences were found in all of the perceptual parameters. Also the jitter, shimmer, HNR, standard-deviation of F0, and standard-deviation of the frequency of F2 were statistically different between groups, for both genders. In the male data differences were also found in F1 and F2 frequencies values and in the standard-deviation of the frequency of F1. This study allowed the documentation of the alterations resulting from UVFP and addressed the exploration of parameters with limited information for this pathology. PMID:26557690

  1. Dynamics of the standard deviations of three wind velocity components from the data of acoustic sounding

    NASA Astrophysics Data System (ADS)

    Krasnenko, N. P.; Kapegesheva, O. F.; Shamanaeva, L. G.

    2017-11-01

    Spatiotemporal dynamics of the standard deviations of three wind velocity components measured with a mini-sodar in the atmospheric boundary layer is analyzed. During the day on September 16 and at night on September 12 values of the standard deviation changed for the x- and y-components from 0.5 to 4 m/s, and for the z-component from 0.2 to 1.2 m/s. An analysis of the vertical profiles of the standard deviations of three wind velocity components for a 6-day measurement period has shown that the increase of σx and σy with altitude is well described by a power law dependence with exponent changing from 0.22 to 1.3 depending on the time of day, and σz depends linearly on the altitude. The approximation constants have been found and their errors have been estimated. The established physical regularities and the approximation constants allow the spatiotemporal dynamics of the standard deviation of three wind velocity components in the atmospheric boundary layer to be described and can be recommended for application in ABL models.
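    One simple way to recover a power-law dependence σ(z) = a·z^b, like the exponents of 0.22 to 1.3 reported above, is a least-squares fit in log-log space. The data below are synthetic, chosen so the known exponent is recovered exactly:

```python
import math

def power_law_fit(z, sigma):
    """Least-squares fit of sigma = a * z**b via linear regression of
    log(sigma) on log(z)."""
    lx = [math.log(v) for v in z]
    ly = [math.log(v) for v in sigma]
    n = len(z)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b
```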

  2. A supersensitive silver nanoprobe based aptasensor for low cost detection of malathion residues in water and food samples

    NASA Astrophysics Data System (ADS)

    Bala, Rajni; Mittal, Sherry; Sharma, Rohit K.; Wangoo, Nishima

    2018-05-01

    In the present study, we report a highly sensitive, rapid and low cost colorimetric monitoring of malathion (an organophosphate insecticide) employing a basic hexapeptide, malathion specific aptamer (oligonucleotide) and silver nanoparticles (AgNPs) as a nanoprobe. AgNPs are made to interact with the aptamer and peptide to give different optical responses depending upon the presence or absence of malathion. The nanoparticles remain yellow in color in the absence of malathion owing to the binding of aptamer with peptide which otherwise tends to aggregate the particles because of charge based interactions. In the presence of malathion, the agglomeration of the particles occurs which turns the solution orange. Furthermore, the developed aptasensor was successfully applied to detect malathion in various water samples and apple. The detection offered high recoveries in the range of 89-120% with the relative standard deviation within 2.98-4.78%. The proposed methodology exhibited excellent selectivity and a very low limit of detection i.e. 0.5 pM was achieved. The developed facile, rapid and low cost silver nanoprobe based on aptamer and peptide proved to be potentially applicable for highly selective and sensitive colorimetric sensing of trace levels of malathion in complex environmental samples.

  3. Evaluating the articulation index for auditory-visual input.

    PubMed

    Grant, K W; Braida, L D

    1991-06-01

    An investigation of the auditory-visual (AV) articulation index (AI) correction procedure outlined in the ANSI standard [ANSI S3.5-1969 (R1986)] was made by evaluating auditory (A), visual (V), and auditory-visual sentence identification for both wideband speech degraded by additive noise and a variety of bandpass-filtered speech conditions presented in quiet and in noise. When the data for each of the different listening conditions were averaged across talkers and subjects, the procedure outlined in the standard was fairly well supported, although deviations from the predicted AV score were noted for individual subjects as well as individual talkers. For filtered speech signals with AIA less than 0.25, there was a tendency for the standard to underpredict AV scores. Conversely, for signals with AIA greater than 0.25, the standard consistently overpredicted AV scores. Additionally, synergistic effects, where the AIA obtained from the combination of different bandpass-filtered conditions was greater than the sum of the individual AIA's, were observed for all nonadjacent filter-band combinations (e.g., the addition of a low-pass band with a 630-Hz cutoff and a high-pass band with a 3150-Hz cutoff). These latter deviations from the standard violate the basic assumption of additivity stated by Articulation Theory, but are consistent with earlier reports by Pollack [I. Pollack, J. Acoust. Soc. Am. 20, 259-266 (1948)], Licklider [J. C. R. Licklider, Psychology: A Study of a Science, Vol. 1, edited by S. Koch (McGraw-Hill, New York, 1959), pp. 41-144], and Kryter [K. D. Kryter, J. Acoust. Soc. Am. 32, 547-556 (1960)].

  4. Last Millennium ENSO-Mean State Interactions in the Tropical Pacific

    NASA Astrophysics Data System (ADS)

    Wyman, D. A.; Conroy, J. L.; Karamperidou, C.

    2017-12-01

    The nature and degree of interaction between the mean state of the tropical Pacific and ENSO remains an open question. Here we use high temporal resolution, tropical Pacific sea surface temperature (SST) records from the last millennium to investigate the relationship between ENSO and the tropical Pacific zonal sea surface temperature gradient (hereafter dSST). A dSST time series was created by standardizing, interpolating, and compositing 7 SST records from the western and 3 SST records from the eastern tropical Pacific. Propagating the age uncertainty of each of these records was accomplished through a Monte Carlo Empirical Orthogonal Function analysis. We find last millennium dSST is strong from 700 to 1300 CE, begins to weaken at approximately 1300 CE, and decreases more rapidly at 1700 CE. dSST was compared to 14 different ENSO reconstructions, independent of the records used to create dSST, to assess the nature of the ENSO-mean state relationship. dSST correlations with 50-year standard deviations of ENSO reconstructions are consistently negative, suggesting that more frequent, strong El Niño events on this timescale reduce dSST. To further assess the strength and direction of the ENSO-dSST relationship, moving 100-year standard deviations of ENSO reconstructions were compared to moving 100-year averages of dSST using Cohen's Kappa statistic, which measures categorical agreement. The Li et al. (2011) and Li et al. (2013) Nino 3.4 ENSO reconstructions had the highest agreement with dSST (k=0.80 and 0.70, respectively), with greater ENSO standard deviation coincident with periods of weak dSST. Other ENSO reconstructions showed weaker agreement with dSST, which may be partly due to low sample size. The consistent directional agreement of dSST with ENSO, coupled with the inability of strong ENSO events to develop under a weak SST gradient, suggests periods of more frequent strong El Niño events reduced tropical Pacific dSST on centennial timescales over the last millennium.
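    Cohen's kappa, used above to measure categorical agreement between binned ENSO variability and dSST, has a simple generic implementation. The labels below are invented; the paper applies the statistic to its own binned time series:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two categorical label sequences of equal length:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    cats = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance
    return (po - pe) / (1 - pe)
```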

  5. High-resolution paleoclimatology of the Santa Barbara Basin during the Medieval Climate Anomaly and early Little Ice Age based on diatom and silicoflagellate assemblages in Kasten core SPR0901-02KC

    USGS Publications Warehouse

    Barron, John A.; Bukry, David B.; Hendy, Ingrid L.

    2015-01-01

    Diatom and silicoflagellate assemblages documented in a high-resolution time series spanning 800 to 1600 AD in varved sediment recovered in Kasten core SPR0901-02KC (34°16.845’ N, 120°02.332’ W, water depth 588 m) from the Santa Barbara Basin (SBB) reveal that SBB surface water conditions during the Medieval Climate Anomaly (MCA) and the early part of the Little Ice Age (LIA) were not extreme by modern standards, mostly falling within one standard deviation of mean conditions during the pre-anthropogenic interval of 1748 to 1900. No clear differences between the character of MCA and the early LIA conditions are apparent. During intervals of extreme droughts identified by terrigenous proxy scanning XRF analyses, diatom and silicoflagellate proxies for coastal upwelling typically exceed one standard deviation above mean values for 1748-1900, supporting the hypothesis that droughts in southern California are associated with cooler (or La Niña-like) sea surface temperatures (SSTs). Increased percentages of diatoms transported downslope generally coincide with intervals of increased siliciclastic flux to the SBB identified by scanning XRF analyses. Diatom assemblages suggest only two intervals of the MCA (at ~897 to 922 and ~1151 to 1167) when proxy SSTs exceeded one standard deviation above mean values for 1748 to 1900. Conversely, silicoflagellates imply extreme warm water events only at ~830 to 860 (early MCA) and ~1360 to 1370 (early LIA) that are not supported by the diatom data. Silicoflagellates appear to be more suitable than diatoms for characterizing average climate during the 5- to 11-year-long sample intervals studied in the SPR0901-02KC core, probably because diatom relative abundances may be dominated by seasonal blooms of a particular year.

  6. Lane Level Localization; Using Images and HD Maps to Mitigate the Lateral Error

    NASA Astrophysics Data System (ADS)

    Hosseinyalamdary, S.; Peter, M.

    2017-05-01

    In urban canyons where GNSS signals are blocked by buildings, the accuracy of the measured position deteriorates significantly. GIS databases have frequently been utilized to improve the accuracy of the measured position using map matching approaches. In map matching, the measured position is projected onto the road links (centerlines), reducing the lateral error of the measured position. With advances in data acquisition, high definition maps that contain extra information, such as road lanes, are now generated. These road lanes can be utilized to mitigate the positional error and improve the accuracy of the position. In this paper, the image content of a camera mounted on the platform is utilized to detect the road boundaries in the image. We apply color masks to detect the road marks, apply the Hough transform to fit lines to the left and right road boundaries, find the corresponding road segment in the GIS database, estimate the homography transformation between the global and image coordinates of the road boundaries, and estimate the camera pose with respect to the global coordinate system. The proposed approach is evaluated on a benchmark. The position is measured by a smartphone's GPS receiver, images are taken from the smartphone's camera, and the ground truth is provided by using the Real-Time Kinematic (RTK) technique. Results show the proposed approach significantly improves the accuracy of the measured GPS position: the error in the measured GPS position, with an average of 11.323 m and a standard deviation of 11.418 m, is reduced to an error in the estimated position with an average of 6.725 m and a standard deviation of 5.899 m.
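
The map-matching step described in this abstract, projecting a measured position onto a road link (centerline) to remove the lateral error, can be sketched in a few lines. The coordinates and the straight road link below are invented for illustration and are not taken from the paper's benchmark.

```python
# Map-matching sketch: project a measured position onto a road centerline
# segment, removing the lateral component of the GPS error. Coordinates
# are illustrative local metric coordinates, not the benchmark's data.
def project_onto_segment(p, a, b):
    ax, ay = a
    bx, by = b
    px, py = p
    vx, vy = bx - ax, by - ay
    # parameter of the orthogonal projection onto the infinite line
    t = ((px - ax) * vx + (py - ay) * vy) / (vx * vx + vy * vy)
    t = max(0.0, min(1.0, t))  # clamp so the result stays on the segment
    return (ax + t * vx, ay + t * vy)

measured = (4.0, 7.0)                   # noisy GPS fix
centerline = ((0.0, 0.0), (10.0, 0.0))  # straight road link
print(project_onto_segment(measured, *centerline))  # lateral error removed
```

Projecting onto lane boundaries from a high definition map, rather than a single centerline, follows the same geometry with one segment per lane edge.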

  7. Advancements in the Development of an Operational Lightning Jump Algorithm for GOES-R GLM

    NASA Technical Reports Server (NTRS)

    Shultz, Chris; Petersen, Walter; Carey, Lawrence

    2011-01-01

    Rapid increases in total lightning have been shown to precede the manifestation of severe weather at the surface. These rapid increases have been termed lightning jumps, and are the current focus of algorithm development for the GOES-R Geostationary Lightning Mapper (GLM). Recent lightning jump algorithm work has focused on evaluation of algorithms in three additional regions of the country, as well as markedly increasing the number of thunderstorms in order to evaluate each algorithm's performance on a larger population of storms. Lightning characteristics of just over 600 thunderstorms have been studied over the past four years. The 2σ lightning jump algorithm continues to show the most promise for an operational lightning jump algorithm, with a probability of detection of 82%, a false alarm rate of 35%, a critical success index of 57%, and a Heidke Skill Score of 0.73 on the entire population of thunderstorms. Average lead time for the 2σ algorithm on all severe weather is 21.15 minutes, with a standard deviation of +/- 14.68 minutes. Looking at tornadoes alone, the average lead time is 18.71 minutes, with a standard deviation of +/- 14.88 minutes. Moreover, removing the 2σ lightning jumps that occur after a jump has been detected, and before severe weather is detected at the ground, the 2σ lightning jump algorithm's false alarm rate drops from 35% to 21%. Cold season, low topped, and tropical environments cause problems for the 2σ lightning jump algorithm, due to their relative dearth of lightning as compared to a supercellular or summertime airmass thunderstorm environment.
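
The verification statistics quoted above (probability of detection, false alarm rate, critical success index, and Heidke Skill Score) all derive from a standard 2x2 contingency table. A minimal sketch follows; the counts are invented to give numbers of roughly the quoted magnitude, not the study's actual storm counts.

```python
# Skill scores for a dichotomous (jump / no jump) forecast, computed from
# a 2x2 contingency table of hits, misses, false alarms, and correct nulls.
def skill_scores(hits, misses, false_alarms, correct_nulls):
    pod = hits / (hits + misses)                # probability of detection
    far = false_alarms / (hits + false_alarms)  # false alarm ratio
    csi = hits / (hits + misses + false_alarms) # critical success index
    n = hits + misses + false_alarms + correct_nulls
    # Heidke Skill Score: fractional improvement over random chance
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_nulls + misses) * (correct_nulls + false_alarms)) / n
    hss = (hits + correct_nulls - expected) / (n - expected)
    return pod, far, csi, hss

# illustrative counts only
pod, far, csi, hss = skill_scores(hits=82, misses=18, false_alarms=44, correct_nulls=156)
print(f"POD={pod:.2f} FAR={far:.2f} CSI={csi:.2f} HSS={hss:.2f}")
```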

  8. Teleconference versus Face-to-Face Scientific Peer Review of Grant Application: Effects on Review Outcomes

    PubMed Central

    Gallo, Stephen A.; Carpenter, Afton S.; Glisson, Scott R.

    2013-01-01

    Teleconferencing as a setting for scientific peer review is an attractive option for funding agencies, given the substantial environmental and cost savings. Despite this, there is a paucity of published data validating teleconference-based peer review compared to the face-to-face process. Our aim was to conduct a retrospective analysis of scientific peer review data to investigate whether review setting has an effect on review process and outcome measures. We analyzed reviewer scoring data from a research program that had recently modified the review setting from face-to-face to a teleconference format with minimal changes to the overall review procedures. This analysis included approximately 1600 applications over a 4-year period: two years of face-to-face panel meetings compared to two years of teleconference meetings. The average overall scientific merit scores, score distribution, standard deviations and reviewer inter-rater reliability statistics were measured, as well as reviewer demographics and length of time discussing applications. The data indicate that few differences are evident between face-to-face and teleconference settings with regard to average overall scientific merit score, scoring distribution, standard deviation, reviewer demographics or inter-rater reliability. However, some difference was found in the discussion time. These findings suggest that most review outcome measures are unaffected by review setting, which would support the trend of using teleconference reviews rather than face-to-face meetings. However, further studies are needed to assess any correlations among discussion time, application funding and the productivity of funded research projects. PMID:23951223

  9. Magnetic reconnection during steady magnetospheric convection and other magnetospheric modes

    NASA Astrophysics Data System (ADS)

    Hubert, Benoit; Gérard, Jean-Claude; Milan, Steve E.; Cowley, Stanley W. H.

    2017-03-01

    We use remote sensing of the proton aurora with the IMAGE-FUV SI12 (Imager for Magnetopause to Aurora Global Exploration-Far Ultraviolet-Spectrographic Imaging at 121.8 nm) instrument and radar measurements of the ionospheric convection from the SuperDARN (Super Dual Auroral Radar Network) facility to estimate the open magnetic flux in the Earth's magnetosphere and the reconnection rates at the dayside magnetopause and in the magnetotail during intervals of steady magnetospheric convection (SMC). We find that SMC intervals occur with relatively high open magnetic flux (average ˜ 0.745 GWb, standard deviation ˜ 0.16 GWb), which is often found to be nearly steady, when the magnetic flux opening and closure rates approximately balance around 55 kV on average, with a standard deviation of 21 kV. We find that the residence timescale of open magnetic flux, defined as the ratio between the open magnetospheric flux and the flux closure rate, is roughly 4 h during SMCs. Interestingly, this number is approximately what can be deduced from the discussion of the length of the tail published by Dungey (1965), assuming a solar wind speed of ˜ 450 km s⁻¹. We also infer an enhanced convection velocity in the tail, driving open magnetic flux to the nightside reconnection site. We compare our results with previously published studies in order to identify different magnetospheric modes. These are ordered by increasing open magnetic flux and reconnection rate as quiet conditions, SMCs, substorms (with an important overlap between these last two) and sawtooth intervals.
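
The quoted residence timescale of roughly 4 h can be checked directly from the numbers in the abstract, since a reconnection voltage is a magnetic flux transfer rate (1 V = 1 Wb/s):

```python
# Order-of-magnitude check of the open-flux residence timescale:
# tau = open magnetic flux / flux closure rate.
open_flux = 0.745e9    # Wb (0.745 GWb, the quoted average open flux)
closure_rate = 55e3    # V = Wb/s (the quoted 55 kV average closure rate)
tau_hours = open_flux / closure_rate / 3600.0
print(f"residence time ~ {tau_hours:.1f} h")  # -> residence time ~ 3.8 h
```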

  10. Mars-GRAM Applications for Mars Science Laboratory Mission Site Selection Processes

    NASA Technical Reports Server (NTRS)

    Justh, Hilary; Justus, C. G.

    2007-01-01

    An overview is presented of the Mars-Global Reference Atmospheric Model (Mars-GRAM 2005) and its new features. One important new feature is the "auxiliary profile" option, whereby a simple input file is used to replace mean atmospheric values from Mars-GRAM's conventional (General Circulation Model) climatology. An auxiliary profile can be generated from any source of data or alternate model output. Results are presented using auxiliary profiles produced from mesoscale model output (Southwest Research Institute's Mars Regional Atmospheric Modeling System (MRAMS) model and Oregon State University's Mars mesoscale model (MMM5) model) for three candidate Mars Science Laboratory (MSL) landing sites (Terby Crater, Melas Chasma, and Gale Crater). A global Thermal Emission Spectrometer (TES) database has also been generated for purposes of making Mars-GRAM auxiliary profiles. This database contains averages and standard deviations of temperature, density, and thermal wind components, averaged over 5-by-5 degree latitude bins and 15 degree L(sub S) bins, for each of three Mars years of TES nadir data. Comparisons show reasonably good consistency between Mars-GRAM with low dust optical depth and both TES observed and mesoscale model simulated density at the three study sites. Mean winds differ by a more significant degree. Comparisons of mesoscale and TES standard deviations with conventional Mars-GRAM values show that Mars-GRAM density perturbations are somewhat conservative (larger than observed variability), while mesoscale-modeled wind variations are larger than Mars-GRAM model estimates. Input parameters rpscale (for density perturbations) and rwscale (for wind perturbations) can be used to "recalibrate" Mars-GRAM perturbation magnitudes to better replicate observed or mesoscale model variability.
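
The kind of binned climatology described above, per-bin averages and standard deviations, can be sketched with a few lines of standard-library Python. The latitudes and temperatures below are illustrative values, not TES data, and only the latitude dimension is binned for brevity.

```python
# Sketch: bin observations into 5-degree latitude bins and report the mean
# and (population) standard deviation per bin, as a gridded climatology does.
from collections import defaultdict
from statistics import mean, pstdev

obs = [(-42.1, 180.5), (-43.9, 182.0), (-38.0, 175.2), (-36.4, 176.8)]  # (lat, T in K)

bins = defaultdict(list)
for lat, temp in obs:
    edge = int(lat // 5) * 5  # lower edge of the 5-degree bin containing lat
    bins[edge].append(temp)

for edge in sorted(bins):
    temps = bins[edge]
    print(f"bin [{edge}, {edge + 5}): mean={mean(temps):.2f} K, sd={pstdev(temps):.2f} K")
```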

  11. Results of a Multi-Institutional Benchmark Test for Cranial CT/MR Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulin, Kenneth; Urie, Marcia M., E-mail: murie@qarc.or; Cherlow, Joel M.

    2010-08-01

    Purpose: Variability in computed tomography/magnetic resonance imaging (CT/MR) cranial image registration was assessed using a benchmark case developed by the Quality Assurance Review Center to credential institutions for participation in Children's Oncology Group Protocol ACNS0221 for treatment of pediatric low-grade glioma. Methods and Materials: Two DICOM image sets, an MR and a CT of the same patient, were provided to each institution. A small target in the posterior occipital lobe was readily visible on two slices of the MR scan and not visible on the CT scan. Each institution registered the two scans using whatever software system and method it ordinarily uses for such a case. The target volume was then contoured on the two MR slices, and the coordinates of the center of the corresponding target in the CT coordinate system were reported. The average of all submissions was used to determine the true center of the target. Results: Results are reported from 51 submissions representing 45 institutions and 11 software systems. The average error in the position of the center of the target was 1.8 mm (1 standard deviation = 2.2 mm). The least variation in position was in the lateral direction. Manual registration gave significantly better results than did automatic registration (p = 0.02). Conclusion: When MR and CT scans of the head are registered with currently available software, there is inherent uncertainty of approximately 2 mm (1 standard deviation), which should be considered when defining planning target volumes and PRVs for organs at risk on registered image sets.
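
The benchmark's outcome measure, each submission's distance from the consensus target center, can be reproduced in miniature. The coordinates below are invented for illustration, not the study's submissions.

```python
import math

# Sketch: the "true" target center is taken as the mean of all submitted
# CT coordinates; each submission's error is its Euclidean distance to
# that mean. Coordinates are illustrative (mm), not the study's data.
centers = [(10.2, -4.1, 33.0), (9.1, -3.8, 35.2), (10.9, -4.5, 34.1), (9.8, -3.6, 33.7)]
n = len(centers)
true_center = tuple(sum(c[i] for c in centers) / n for i in range(3))
errors = [math.dist(c, true_center) for c in centers]
mean_error = sum(errors) / n
print(f"true center = {true_center}, mean error = {mean_error:.2f} mm")
```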

  12. Ambulatory blood pressure profiles in familial dysautonomia.

    PubMed

    Goldberg, Lior; Bar-Aluma, Bat-El; Krauthammer, Alex; Efrati, Ori; Sharabi, Yehonatan

    2018-02-12

    Familial dysautonomia (FD) is a rare genetic disease that involves extreme blood pressure fluctuations secondary to afferent baroreflex failure. The diurnal blood pressure profile, including the average, variability, and day-night difference, may have implications for long-term end organ damage. The purpose of this study was to describe the circadian pattern of blood pressure in the FD population and relationships with renal and pulmonary function, use of medications, and overall disability. We analyzed 24-h ambulatory blood pressure monitoring recordings in 22 patients with FD. Information about medications, disease severity, renal function (estimated glomerular filtration, eGFR), pulmonary function (forced expiratory volume in 1 s, FEV1) and an index of blood pressure variability (standard deviation of systolic pressure) were analyzed. The mean (± SEM) 24-h blood pressure was 115 ± 5.6/72 ± 2.0 mmHg. The diurnal blood pressure variability was high (daytime systolic pressure standard deviation 22.4 ± 1.5 mmHg, nighttime 17.2 ± 1.6), with a high frequency of a non-dipping pattern (16 patients, 73%). eGFR, use of medications, FEV1, and disability scores were unrelated to the degree of blood pressure variability or to dipping status. This FD cohort had normal average 24-h blood pressure, fluctuating blood pressure, and a high frequency of non-dippers. Although there was evidence of renal dysfunction based on eGFR and proteinuria, the ABPM profile was unrelated to the measures of end organ dysfunction or to reported disability.
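
The abstract does not define its dipping criterion; a common convention (assumed here, not stated in the source) classifies a patient as a dipper when nocturnal systolic pressure falls by at least 10% of the daytime average. A minimal sketch:

```python
# Dipping classification sketch. The 10% threshold is a widely used
# convention assumed for illustration; the study may have used another.
def dipping_percent(day_sbp_avg, night_sbp_avg):
    return 100.0 * (day_sbp_avg - night_sbp_avg) / day_sbp_avg

def is_dipper(day_sbp_avg, night_sbp_avg, threshold=10.0):
    return dipping_percent(day_sbp_avg, night_sbp_avg) >= threshold

print(is_dipper(120, 112))  # ~6.7% nocturnal fall -> non-dipper (False)
print(is_dipper(120, 104))  # ~13.3% nocturnal fall -> dipper (True)
```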

  13. A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.

    PubMed

    Rhiel, G Steven

    2007-02-01

    This research study provides proof that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. In this proof it is shown that the d(n) and a(n) values are applicable for the specific skewed distributions when the mean and standard deviation can take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
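
The general range-based recipe behind such estimators can be illustrated with the classical normal-theory constants (the d2 values from control-chart practice, not Rhiel's skewed-distribution tables): divide the sample range by the standardized mean range for the sample size to estimate the standard deviation, then divide by the sample mean.

```python
# Range-based CV estimate, normal-theory version for illustration only.
# D_N holds classical d2 constants (expected range / sigma) for a normal
# sample of size n; Rhiel tabulates analogous values for skewed cases.
D_N = {2: 1.128, 5: 2.326, 10: 3.078}

def cv_from_range(sample):
    n = len(sample)
    sigma_hat = (max(sample) - min(sample)) / D_N[n]  # range rule for sigma
    mean = sum(sample) / n
    return sigma_hat / mean

sample = [9.8, 10.4, 10.1, 9.6, 10.6]
print(f"CV estimate = {cv_from_range(sample):.3f}")
```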

  14. Random errors in interferometry with the least-squares method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Qi

    2011-01-20

    This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.

  15. The pathway to RCTs: how many roads are there? Examining the homogeneity of RCT justification.

    PubMed

    Chow, Jeffrey Tin Yu; Lam, Kevin; Naeem, Abdul; Akanda, Zarique Z; Si, Francie Fengqin; Hodge, William

    2017-02-02

    Randomized controlled trials (RCTs) form the foundational background of modern medical practice. They are considered the highest quality of evidence, and their results help inform decisions concerning drug development and use, preventive therapies, and screening programs. However, the inputs that justify an RCT to be conducted have not been studied. We reviewed the MEDLINE and EMBASE databases across six specialties (Ophthalmology, Otorhinolaryngology (ENT), General Surgery, Psychiatry, Obstetrics-Gynecology (OB-GYN), and Internal Medicine) and randomly chose 25 RCTs from each specialty except for Otorhinolaryngology (20 studies) and Internal Medicine (28 studies). For each RCT, we recorded information relating to the justification for conducting RCTs such as average study size cited, number of studies cited, and types of studies cited. The justification varied widely both within and between specialties. For Ophthalmology and OB-GYN, the average study sizes cited were around 1100 patients, whereas they were around 500 patients for Psychiatry and General Surgery. Between specialties, the average number of studies cited ranged from around 4.5 for ENT to around 10 for Ophthalmology, but the standard deviations were large, indicating that there was even more discrepancy within each specialty. When standardizing by the sample size of the RCT, some of the discrepancies between and within specialties can be explained, but not all. On average, Ophthalmology papers cited review articles the most (2.96 studies per RCT) compared to less than 1.5 studies per RCT for all other specialties. The justifications for RCTs vary widely both within and between specialties, and the justification for conducting RCTs is not standardized.

  16. Comparative study of navigated versus freehand osteochondral graft transplantation of the knee.

    PubMed

    Koulalis, Dimitrios; Di Benedetto, Paolo; Citak, Mustafa; O'Loughlin, Padhraig; Pearle, Andrew D; Kendoff, Daniel O

    2009-04-01

    Osteochondral lesions are a common sports-related injury for which osteochondral grafting, including mosaicplasty, is an established treatment. Computer navigation has been gaining popularity in orthopaedic surgery to improve accuracy and precision. Navigation improves angle and depth matching during harvest and placement of osteochondral grafts compared with the conventional freehand open technique. Controlled laboratory study. Three cadaveric knees were used. Reference markers were attached to the femur, tibia, and donor/recipient site guides. Fifteen osteochondral grafts were harvested and inserted into recipient sites with computer navigation, and 15 similar grafts were inserted freehand. The angles of graft removal and placement as well as surface congruity (graft depth) were calculated for each surgical group. The mean harvesting angle at the donor site using navigation was 4 degrees (standard deviation, 2.3 degrees; range, 1-9 degrees) versus 12 degrees (standard deviation, 5.5 degrees; range, 5-24 degrees) using the freehand technique (P < .0001). The recipient plug removal angle using the navigated technique was 3.3 degrees (standard deviation, 2.1 degrees; range, 0-9 degrees) versus 10.7 degrees (standard deviation, 4.9 degrees; range, 2-17 degrees) freehand (P < .0001). The mean navigated recipient plug placement angle was 3.6 degrees (standard deviation, 2.0 degrees; range, 1-9 degrees) versus 10.6 degrees (standard deviation, 4.4 degrees; range, 3-17 degrees) with the freehand technique (P = .0001). The mean height of plug protrusion under navigation was 0.3 mm (standard deviation, 0.2 mm; range, 0-0.6 mm) versus 0.5 mm (standard deviation, 0.3 mm; range, 0.2-1.1 mm) using the freehand technique (P = .0034). Significantly greater accuracy and precision were observed in harvesting and placement of the osteochondral grafts in the navigated procedures. Clinical studies are needed to establish a benefit in vivo. Improvement in osteochondral harvest and placement is desirable to optimize clinical outcomes. Navigation shows great potential to improve both harvest and placement precision and accuracy, thus optimizing ultimate surface congruity.

  17. Excellent reliability of the Hamilton Depression Rating Scale (HDRS-21) in Indonesia after training.

    PubMed

    Istriana, Erita; Kurnia, Ade; Weijers, Annelies; Hidayat, Teddy; Pinxten, Lucas; de Jong, Cor; Schellekens, Arnt

    2013-09-01

    The Hamilton Depression Rating Scale (HDRS) is the most widely used depression rating scale worldwide. Reliability of HDRS has been reported mainly from Western countries. The current study tested the reliability of HDRS ratings among psychiatric residents in Indonesia, before and after HDRS training. The hypotheses were that: (i) prior to the training reliability of HDRS ratings is poor; and (ii) HDRS training can improve reliability of HDRS ratings to excellent levels. Furthermore, we explored cultural validity at item level. Videotaped HDRS interviews were rated by 30 psychiatric residents before and after 1 day of HDRS training. Based on a gold standard rating, percentage correct ratings and deviation from the standard were calculated. Correct ratings increased from 83% to 99% at item level and from 70% to 100% for the total rating. The average deviation from the gold standard rating improved from 0.07 to 0.02 at item level and from 2.97 to 0.46 for the total rating. HDRS assessment by psychiatric trainees in Indonesia without prior training is unreliable. A short, evidence-based HDRS training improves reliability to near perfect levels. The outlined training program could serve as a template for HDRS trainings. HDRS items that may be less valid for assessment of depression severity in Indonesia are discussed. Copyright © 2013 Wiley Publishing Asia Pty Ltd.

  18. A 100g Mass Comparator with an Improved Readability and Measuring Environment

    NASA Astrophysics Data System (ADS)

    Ueki, Masaaki; Sun, Jian-Xin; Ueda, Kazunaga

    In order to achieve higher accuracy of the mass standard in the mass range equal to or less than 100g, it is necessary for a mass comparator in this range to have a relative sensitivity of the order of 1×10⁻⁹. For this purpose, a 111g capacity fully-automatic mass comparator has been renovated so that its readability is improved from 1 to 0.1µg. The mass comparator is also installed in an air-tight chamber originally developed by the NMIJ, so that it can be kept in a stable environment, especially in air of constant density. With these renovations, the standard deviation of the mass comparisons is reduced and the uncertainty of the air buoyancy corrections is lessened. This paper reports the features of the improved 100g mass comparator, the empirical method used to evaluate its performance and the obtained results. As a result, the standard deviations of the mass difference measurements have been greatly improved to 0.22µg on average with the chamber closed, compared with 0.97µg with it open. The linearity of the comparator has also been verified by mass difference measurements of weights at the six masses of 10, 20, 30, 50, 70 and 100g, confirming that the non-linearity errors of the comparator are within 0.28µg, showing good measuring performance.

  19. Distributional behavior of diffusion coefficients obtained by single trajectories in annealed transit time model

    NASA Astrophysics Data System (ADS)

    Akimoto, Takuma; Yamamoto, Eiji

    2016-12-01

    Local diffusion coefficients in disordered systems such as spin glass systems and living cells are highly heterogeneous and may change over time. Such a time-dependent and spatially heterogeneous environment results in irreproducibility of single-particle-tracking measurements. Irreproducibility of time-averaged observables has been theoretically studied in the context of weak ergodicity breaking in stochastic processes. Here, we provide rigorous descriptions of equilibrium and non-equilibrium diffusion processes for the annealed transit time model, which is a heterogeneous diffusion model in living cells. We give analytical solutions for the mean square displacement (MSD) and the relative standard deviation of the time-averaged MSD for equilibrium and non-equilibrium situations. We find that the time-averaged MSD grows linearly with time and that the time-averaged diffusion coefficients are intrinsically random (irreproducible) even in the long-time measurements in non-equilibrium situations. Furthermore, the distribution of the time-averaged diffusion coefficients converges to a universal distribution in the sense that it does not depend on initial conditions. Our findings pave the way for a theoretical understanding of distributional behavior of the time-averaged diffusion coefficients in disordered systems.
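
The central observable above, the time-averaged MSD of a single trajectory, is straightforward to compute. A standard-library sketch for an ordinary Brownian trajectory, for which the time-averaged MSD grows linearly with the lag, as the abstract notes:

```python
import random

# Time-averaged mean square displacement of one trajectory: for a lag
# Delta, average the squared displacement over the whole measurement time.
def tamsd(x, lag):
    disp2 = [(x[i + lag] - x[i]) ** 2 for i in range(len(x) - lag)]
    return sum(disp2) / len(disp2)

random.seed(1)
# simple 1-D Brownian trajectory with unit-variance Gaussian steps
x = [0.0]
for _ in range(10000):
    x.append(x[-1] + random.gauss(0.0, 1.0))

msd = [tamsd(x, lag) for lag in (1, 2, 4, 8)]
print(msd)  # grows roughly linearly with lag for ordinary diffusion
```

Repeating this over an ensemble of trajectories and comparing the fitted diffusion coefficients is how the scatter (irreproducibility) discussed in the abstract is quantified.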

  20. An activity index for geomagnetic paleosecular variation, excursions, and reversals

    NASA Astrophysics Data System (ADS)

    Panovska, S.; Constable, C. G.

    2017-04-01

    Magnetic indices provide quantitative measures of space weather phenomena that are widely used by researchers in geomagnetism. We introduce an index focused on the internally generated field that can be used to evaluate long term variations or climatology of modern and paleomagnetic secular variation, including geomagnetic excursions, polarity reversals, and changes in reversal rate. The paleosecular variation index, Pi, represents instantaneous or average deviation from a geocentric axial dipole field using normalized ratios of virtual geomagnetic pole colatitude and virtual dipole moment. The activity level of the index, σPi, provides a measure of field stability through the temporal standard deviation of Pi. Pi can be calculated on a global grid from geomagnetic field models to reveal large scale geographic variations in field structure. It can be determined for individual time series, or averaged at local, regional, and global scales to detect long term changes in geomagnetic activity, identify excursions, and transitional field behavior. For recent field models, Pi ranges from less than 0.05 to 0.30. Conventional definitions for geomagnetic excursions are characterized by Pi exceeding 0.5. Strong field intensities are associated with low Pi unless they are accompanied by large deviations from axial dipole field directions. σPi provides a measure of geomagnetic stability that is modulated by the level of PSV or frequency of excursional activity and reversal rate. We demonstrate uses of Pi for paleomagnetic observations and field models and show how it could be used to assess whether numerical simulations of the geodynamo exhibit Earth-like properties.

  1. SU-E-T-617: Plan Quality Estimation of Intensity-Modulated Radiotherapy Cases for Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koo, J; Yoon, M; Chung, W

    Purpose: To estimate the planning quality of intensity-modulated radiotherapy in lung cancer cases and to provide preliminary data for the development of a planning quality assurance algorithm. Methods: 42 IMRT plans previously used in cases of solitary lung cancers were collected. Organs in or near the thoracic cavity, such as the lung (ipsilateral, contralateral), heart, liver, esophagus, cord and bronchus, were considered as organs at risk (OARs) in this study. The coverage index (CVI), conformity index (CI), homogeneity index (HI), volume, and irregularity (standard deviation of center-surface distance) were used to compare PTV dose characteristics. The equivalent uniform dose (EUD), V10Gy, and V20Gy of the OARs were used to compare OAR dose characteristics. Results: Average CVI, CI, and HI values were 0.9, 0.8, and 0.1, respectively. CVI and CI had narrow Gaussian distribution curves without a singular value, but one case had a relatively high HI (0.25) because of the location and irregular shape of its PTV (irregularity of 18.5 when the average was 12.5). EUDs tended to decrease as the OAR-PTV distance increased and the OAR-PTV overlap volume decreased. Conclusion: This work indicates the potential for significant plan quality deviation among similar lung cancer cases. Considering that the data in this study were from a single department, differences in the treatment results for a given patient would be much more pronounced if multiple departments (and therefore more planners) were involved. Therefore, further examination of QA protocols is needed to reduce deviations in radiation treatment planning.
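
The PTV indices named above are defined differently across the literature, and the abstract does not spell out its formulas. A hedged sketch using one common pair of definitions (coverage as the fraction of the PTV at or above the prescription dose, homogeneity as the max-min dose spread over the prescription dose); the voxel doses are invented for illustration:

```python
# Hedged sketch of two common PTV dose indices; these particular
# definitions are assumptions, not the ones documented by the study.
def coverage_index(ptv_dose, presc):
    # fraction of PTV voxels receiving at least the prescription dose
    return sum(1 for d in ptv_dose if d >= presc) / len(ptv_dose)

def homogeneity_index(ptv_dose, presc):
    # (Dmax - Dmin) / prescription dose; 0 would mean a perfectly uniform dose
    return (max(ptv_dose) - min(ptv_dose)) / presc

doses = [58.1, 60.2, 61.0, 59.5, 62.3, 60.8]  # Gy, illustrative voxel doses
cvi = coverage_index(doses, 60.0)
hi = homogeneity_index(doses, 60.0)
print(f"CVI = {cvi:.2f}, HI = {hi:.2f}")
```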

  2. vSDC: a method to improve early recognition in virtual screening when limited experimental resources are available.

    PubMed

    Chaput, Ludovic; Martinez-Sanz, Juan; Quiniou, Eric; Rigolet, Pascal; Saettel, Nicolas; Mouawad, Liliane

    2016-01-01

    In drug design, one may be confronted with the problem of finding hits for targets for which no small inhibiting molecules are known and only low-throughput experiments are available (like ITC or NMR studies), two common difficulties encountered in a typical academic setting. Using a virtual screening strategy like docking can alleviate some of the problems and save a considerable amount of time by selecting only top-ranking molecules, but only if the method is very efficient, i.e. when a good proportion of actives are found in the 1-10 % best ranked molecules. The use of several programs (in our study, Gold, Surflex, FlexX and Glide were considered) shows a divergence of the results, which presents a difficulty in guiding the experiments. To overcome this divergence and increase the yield of the virtual screening, we created the standard deviation consensus (SDC) and variable SDC (vSDC) methods, consisting of the intersection of molecule sets from several virtual screening programs, based on the standard deviations of their ranking distributions. SDC allowed us to find hits for two new protein targets by testing only 9 and 11 small molecules from a chemical library of circa 15,000 compounds. Furthermore, vSDC, when applied to the 102 proteins of the DUD-E benchmarking database, succeeded in finding more hits than any of the four isolated programs for 13-60 % of the targets. In addition, when only 10 molecules of each of the 102 chemical libraries were considered, vSDC performed better in the number of hits found, with an improvement of 6-24 % over the 10 best-ranked molecules given by the individual docking programs. Graphical abstract: In drug design, for a given target and a given chemical library, the results obtained with different virtual screening programs are divergent. So how can the experimental tests be rationally guided, especially when only a small number of experiments can be made?
The variable Standard Deviation Consensus (vSDC) method was developed to answer this issue. Left panel the vSDC principle consists of intersecting molecule sets, chosen on the basis of the standard deviations of their ranking distributions, obtained from various virtual screening programs. In this study Glide, Gold, FlexX and Surflex were used and tested on the 102 targets of the DUD-E database. Right panel Comparison of the average percentage of hits found with vSDC and each of the four programs, when only 10 molecules from each of the 102 chemical libraries of the DUD-E database were considered. On average, vSDC was capable of finding 38 % of the findable hits, against 34 % for Glide, 32 % for Gold, 16 % for FlexX and 14 % for Surflex, showing that with vSDC, it was possible to overcome the unpredictability of the virtual screening results and to improve them.
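
A much-simplified consensus in the spirit of SDC can be sketched by standardizing each program's scores and intersecting the sets of molecules beyond a common z threshold. This illustrates only the intersection idea; it is not the published vSDC implementation, which operates on ranking distributions with a variable threshold, and the scores below are invented.

```python
# Simplified intersection-consensus sketch (an assumption, not the
# published SDC/vSDC algorithm): z-score each program's scores, then keep
# molecules that exceed a common z threshold in every program.
from statistics import mean, stdev

def zscores(scores):
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

def consensus(program_scores, z_threshold=1.0):
    # program_scores: {program: [score per molecule]}, higher = better
    selected = None
    for scores in program_scores.values():
        picks = {i for i, z in enumerate(zscores(scores)) if z >= z_threshold}
        selected = picks if selected is None else selected & picks
    return sorted(selected)

scores = {
    "progA": [2.5, 0.1, -0.3, 1.9, 0.2, -1.1],
    "progB": [1.8, -0.5, 0.4, 2.2, -0.2, -1.3],
}
print(consensus(scores))  # molecule indices every program agrees on
```

Loosening the threshold enlarges each program's set, which is the lever the "variable" variant adjusts until enough molecules survive the intersection.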

  3. The correlation between relatives on the supposition of genomic imprinting.

    PubMed Central

    Spencer, Hamish G

    2002-01-01

    Standard genetic analyses assume that reciprocal heterozygotes are, on average, phenotypically identical. If a locus is subject to genomic imprinting, however, this assumption does not hold. We incorporate imprinting into the standard quantitative-genetic model for two alleles at a single locus, deriving expressions for the additive and dominance components of genetic variance, as well as measures of resemblance among relatives. We show that, in contrast to the case with Mendelian expression, the additive and dominance deviations are correlated. In principle, this correlation allows imprinting to be detected solely on the basis of different measures of familial resemblances, but in practice, the standard error of the estimate is likely to be too large for a test to have much statistical power. The effects of genomic imprinting will need to be incorporated into quantitative-genetic models of many traits, for example, those concerned with mammalian birthweight. PMID:12019254

  4. The correlation between relatives on the supposition of genomic imprinting.

    PubMed

    Spencer, Hamish G

    2002-05-01

    Standard genetic analyses assume that reciprocal heterozygotes are, on average, phenotypically identical. If a locus is subject to genomic imprinting, however, this assumption does not hold. We incorporate imprinting into the standard quantitative-genetic model for two alleles at a single locus, deriving expressions for the additive and dominance components of genetic variance, as well as measures of resemblance among relatives. We show that, in contrast to the case with Mendelian expression, the additive and dominance deviations are correlated. In principle, this correlation allows imprinting to be detected solely on the basis of different measures of familial resemblances, but in practice, the standard error of the estimate is likely to be too large for a test to have much statistical power. The effects of genomic imprinting will need to be incorporated into quantitative-genetic models of many traits, for example, those concerned with mammalian birthweight.
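The single-locus decomposition described in the abstract can be illustrated numerically. The sketch below is not the paper's analytical derivation: it computes the additive variance as the least-squares regression of genotypic value on allele dosage, with reciprocal heterozygotes allowed to differ, and the symbols (p, a, d_mat, d_pat) follow generic quantitative-genetic conventions rather than the paper's notation:

```python
def variance_components(p, a, d_mat, d_pat):
    """Toy single-locus decomposition under random mating. Genotypic
    values: AA = +a, aa = -a; the two reciprocal heterozygotes Aa / aA
    take the (possibly different) values d_mat and d_pat. Under
    Mendelian expression d_mat == d_pat. Returns (additive, non_additive)
    variance, where 'additive' is the variance explained by the
    least-squares regression of genotypic value on allele dosage."""
    q = 1.0 - p
    # (genotype frequency, dosage of A allele, genotypic value)
    pop = [(p * p, 2, a), (p * q, 1, d_mat), (q * p, 1, d_pat), (q * q, 0, -a)]
    mean_g = sum(f * g for f, _, g in pop)
    mean_x = sum(f * x for f, x, _ in pop)
    var_x = sum(f * (x - mean_x) ** 2 for f, x, _ in pop)
    cov_xg = sum(f * (x - mean_x) * (g - mean_g) for f, x, g in pop)
    alpha = cov_xg / var_x           # average effect of gene substitution
    var_a = alpha ** 2 * var_x       # additive (regression) variance
    var_g = sum(f * (g - mean_g) ** 2 for f, _, g in pop)
    return var_a, var_g - var_a
```

In the Mendelian additive case (p = 0.5, a = 1, d = 0) this recovers the textbook result Var_A = 2pq·a² = 0.5 with no non-additive variance; making the reciprocal heterozygotes differ leaves non-additive variance even when their average is zero.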

  5. Regular extra curricular sports practice does not prevent moderate or severe variations in self-esteem or trait anxiety in early adolescents.

    PubMed

    Binsinger, Caroline; Laure, Patrick; Ambard, Marie-France

    2006-01-01

Physical activity is often presented as an effective tool to improve self-esteem and/or to reduce anxiety. The aim of this study was to measure the influence of regular extra-curricular sports practice on self-esteem and anxiety. We conducted a prospective cohort study, which included all pupils entering the first year of secondary school (sixth grade) in the Vosges Department (eastern France) during the school year 2001-2002, followed for three years. Data were collected every six months by self-reported questionnaires. 1791 pupils were present at each of the six data collection sessions and completed all the questionnaires, representing 10,746 documents: 835 boys (46.6%) and 956 girls (53.4%); in November 2001, the average age was 11.1 ± 0.5 years (mean ± standard deviation). 722 pupils (40.3%) reported that they had practiced an extra-school physical activity in a sporting association from November 2001 to May 2004 (ECS group), whereas 195 pupils (10.9%) had not practiced any extra-school physical activity at all (NECS group). The average global scores of self-esteem (Rosenberg's Scale) and trait anxiety (Spielberger's Scale) of the ECS pupils were, respectively, higher and lower than those of the NECS group. However, the incidence density (number of new cases during a given period / total person-time of observation) of moderate or severe decrease of self-esteem (less than "mean - one standard deviation" or less than "mean - two standard deviations") was not significantly different between the two groups, a finding that was also evident in the case of trait anxiety. Finally, among ECS pupils, the incidence density of severe decrease of self-esteem was lower in girls. Practitioners and physical education teachers, as well as parents, should be encouraged to seek out ways to involve pupils in extra-school physical activities.
Key Points: Regular extra-curricular sports practice is associated with better levels of self-esteem and trait anxiety among young adolescents. This activity seems to protect girls from severe variations of self-esteem. Boys do not seem to be protected by regular extra-curricular sports practice from moderate or severe variations of either self-esteem or trait anxiety.
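The incidence-density measure defined parenthetically in the abstract is a one-line calculation; the numbers in the example below are hypothetical, not taken from the study:

```python
def incidence_density(new_cases, person_time):
    """Incidence density: number of new cases during a given period
    divided by the total person-time of observation."""
    return new_cases / person_time

# Hypothetical example: 12 new cases among 1791 pupils each followed
# for 3 years gives a rate per person-year of observation.
rate = incidence_density(12, 1791 * 3)
```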

  6. Investigation of sequential properties of snoring episodes for obstructive sleep apnoea identification.

    PubMed

    Cavusoglu, M; Ciloglu, T; Serinagaoglu, Y; Kamasak, M; Erogul, O; Akcam, T

    2008-08-01

In this paper, 'snore regularity' is studied in terms of the variations of snoring sound episode durations, separations and average powers in simple snorers and in obstructive sleep apnoea (OSA) patients. The goal was to explore the possibility of distinguishing between simple snorers and OSA patients using only sleep sound recordings of individuals and to ultimately eliminate the need for spending a whole night in the clinic for polysomnographic recording. Sequences that contain snoring episode durations (SED), snoring episode separations (SES) and average snoring episode powers (SEP) were constructed from snoring sound recordings of 30 individuals (18 simple snorers and 12 OSA patients) who were also under polysomnographic recording in Gülhane Military Medical Academy Sleep Studies Laboratory (GMMA-SSL), Ankara, Turkey. Snore regularity is quantified in terms of mean, standard deviation and coefficient of variation values for the SED, SES and SEP sequences. In all three of these sequences, OSA patients' data displayed a higher variation than those of simple snorers. To exclude the effects of slow variations in the baseline of these sequences, new sequences that contain the coefficient of variation of the sample values in a 'short' signal frame, i.e., short-time coefficient of variation (STCV) sequences, were defined. The mean, standard deviation and coefficient of variation values calculated from the STCV sequences displayed a stronger potential to distinguish between simple snorers and OSA patients than those obtained from the SED, SES and SEP sequences themselves. Spider charts were used to jointly visualize the three parameters, i.e., the mean, standard deviation and coefficient of variation values of the SED, SES and SEP sequences, and the corresponding STCV sequences, as two-dimensional plots.
Our observations showed that the statistical parameters obtained from the SED and SES sequences, and the corresponding STCV sequences, possessed a strong potential to distinguish between simple snorers and OSA patients, both marginally, i.e., when the parameters are examined individually, and jointly. The parameters obtained from the SEP sequences and the corresponding STCV sequences, on the other hand, did not have a strong discrimination capability. However, the joint behaviour of these parameters showed some potential to distinguish between simple snorers and OSA patients.
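A short-time coefficient of variation of the kind described can be sketched as a sliding-window CV. This is an illustrative reimplementation, not the authors' code, and the frame length is an arbitrary choice:

```python
from statistics import mean, pstdev

def stcv(seq, frame):
    """Short-time coefficient of variation: CV (= SD / mean) computed
    over a sliding frame. Windowing suppresses slow baseline drift in
    an episode sequence, so only local variability is measured."""
    out = []
    for i in range(len(seq) - frame + 1):
        w = seq[i:i + frame]
        out.append(pstdev(w) / mean(w))
    return out
```

A perfectly regular sequence yields an STCV of zero everywhere, while irregular episode durations or separations produce larger values, which is the property exploited for discrimination.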

  7. Selection of the effect size for sample size determination for a continuous response in a superiority clinical trial using a hybrid classical and Bayesian procedure.

    PubMed

    Ciarleglio, Maria M; Arendt, Christopher D; Peduzzi, Peter N

    2016-06-01

When designing studies that have a continuous outcome as the primary endpoint, the hypothesized effect size (ES), that is, the hypothesized difference in means (Δ) relative to the assumed variability of the endpoint (σ), plays an important role in sample size and power calculations. Point estimates for Δ and σ are often calculated using historical data. However, the uncertainty in these estimates is rarely addressed. This article presents a hybrid classical and Bayesian procedure that formally integrates prior information on the distributions of Δ and σ into the study's power calculation. Conditional expected power, which averages the traditional power curve using the prior distributions of Δ and σ as the averaging weight, is used, and the value of ES is found that equates the prespecified frequentist power and the conditional expected power of the trial. This hypothesized effect size is then used in traditional sample size calculations when determining sample size for the study. The value of ES found using this method may be expressed as a function of the prior means of Δ and σ and their prior standard deviations. We show that the "naïve" estimate of the effect size, that is, the ratio of prior means, should be down-weighted to account for the variability in the parameters. An example is presented for designing a placebo-controlled clinical trial testing the antidepressant effect of alprazolam as monotherapy for major depression.
Through this method, we are able to formally integrate prior information on the uncertainty and variability of both the treatment effect and the common standard deviation into the design of the study while maintaining a frequentist framework for the final analysis. Solving for the effect size that the study has a high probability of correctly detecting, based on the available prior information on the difference Δ and the standard deviation σ, provides a valuable, substantiated estimate that can form the basis for discussion about the study's feasibility during the design phase. © The Author(s) 2016.
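Conditional expected power, i.e., the traditional power curve averaged over prior distributions for the mean difference and the standard deviation, can be approximated by Monte Carlo. The sketch below assumes a two-sample z-test power formula and purely illustrative prior parameters; none of the numbers or distributional choices come from the paper:

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(delta, sigma, n, z_alpha=1.96):
    """Approximate power of a two-sided, two-sample z-test with n
    subjects per arm (upper-tail term only)."""
    return phi(abs(delta) / sigma * math.sqrt(n / 2.0) - z_alpha)

def conditional_expected_power(n, m_d, s_d, m_s, s_s, draws=20000, seed=1):
    """Average the power curve over a normal prior on the mean
    difference and a normal prior on the SD (truncated at a small
    positive value). All prior parameters are illustrative."""
    rng = random.Random(seed)
    tot = 0.0
    for _ in range(draws):
        d = rng.gauss(m_d, s_d)
        s = max(rng.gauss(m_s, s_s), 1e-6)
        tot += power(d, s, n)
    return tot / draws

# Naive power plugs in the prior means; the averaged power accounts
# for the uncertainty in both parameters.
naive = power(5.0, 10.0, 64)
cep = conditional_expected_power(64, m_d=5.0, s_d=2.0, m_s=10.0, s_s=2.0)
```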

  8. Determination of Small Animal Long Bone Properties Using Densitometry

    NASA Technical Reports Server (NTRS)

    Breit, Gregory A.; Goldberg, BethAnn K.; Whalen, Robert T.; Hargens, Alan R. (Technical Monitor)

    1996-01-01

Assessment of bone structural property changes due to loading regimens or pharmacological treatment typically requires destructive mechanical testing and sectioning. Our group has accurately and non-destructively estimated three-dimensional cross-sectional areal properties (principal moments of inertia, Imax and Imin, and principal angle, Theta) of human cadaver long bones from pixel-by-pixel analysis of three non-coplanar densitometry scans. Because the scanner beam width is on the order of typical small animal diaphyseal diameters, applying this technique to high-resolution scans of rat long bones necessitates additional processing to minimize errors induced by beam smearing, such as dependence on sample orientation and overestimation of Imax and Imin. We hypothesized that these errors are correctable by digital image processing of the raw scan data. In all cases, four scans, using only the low-energy data (Hologic QDR-1000W, small animal mode), are averaged to increase image signal-to-noise ratio. Raw scans are additionally processed by interpolation, deconvolution by a filter derived from scanner beam characteristics, and masking using a variable threshold based on image dynamic range. To assess accuracy, we scanned an aluminum step phantom at 12 orientations over a range of 180 deg about the longitudinal axis, in 15 deg increments. The phantom dimensions (2.5, 3.1, 3.8 mm x 4.4 mm; Imin/Imax: 0.33-0.74) were comparable to the dimensions of a rat femur, which was also scanned. Cross-sectional properties were determined at 0.25 mm increments along the length of the phantom and femur. The table shows average error (+/- SD) from theory of Imax, Imin, and Theta over the 12 orientations, calculated from raw and fully processed phantom images, as well as standard deviations about the mean for the femur scans. Processing of phantom scans increased agreement with theory, indicating improved accuracy.
Smaller standard deviations with processing indicate increased precision and repeatability. Standard deviations for the femur are consistent with those of the phantom. We conclude that in conjunction with digital image enhancement, densitometry scans are suitable for non-destructive determination of areal properties of small animal bones of comparable size to our phantom, allowing prediction of Imax and Imin within 2.5% and Theta within a fraction of a degree. This method represents a considerable extension of current methods of analyzing bone tissue distribution in small animal bones.
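The pixel-by-pixel computation of cross-sectional areal properties (Imax, Imin and the principal angle Theta) amounts to taking centroidal second moments of a binary cross-section image. A minimal sketch, assuming unit pixel size and a mask where truthy entries mark bone:

```python
import math

def areal_properties(mask):
    """Centroidal second moments of area of a binary cross-section
    image (pixel size taken as 1). Returns (Imax, Imin, theta_deg),
    where theta_deg is the principal angle. A simplified sketch of
    the pixel-by-pixel computation, not the authors' processing chain."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    n = len(pts)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    ixx = sum((y - cy) ** 2 for _, y in pts)   # second moment about x
    iyy = sum((x - cx) ** 2 for x, _ in pts)   # second moment about y
    ixy = sum((x - cx) * (y - cy) for x, y in pts)  # product of inertia
    avg = 0.5 * (ixx + iyy)
    r = math.hypot(0.5 * (ixx - iyy), ixy)     # Mohr's-circle radius
    theta = 0.5 * math.degrees(math.atan2(2.0 * ixy, ixx - iyy))
    return avg + r, avg - r, theta

imax, imin, theta = areal_properties([[1, 1, 1, 1, 1]])  # a 1 x 5 pixel strip
```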

  9. Differences between dentitions with palatally and labially located maxillary canines observed in incisor width, dental morphology and space conditions.

    PubMed

    Artmann, L; Larsen, H J; Sørensen, H B; Christensen, I J; Kjaer, I

    2010-06-01

To analyze the interrelationship between incisor width, deviations in the dentition and available space in the dental arch in palatally and labially located maxillary ectopic canine cases. Size: On dental casts from 69 patients (mean age 13 years 6 months) the mesiodistal widths of each premolar, canine and incisor were measured and compared with normal standards. Dental deviations: Based on panoramic radiographs from the same patients, the dentitions were grouped as follows: Group I: normal morphology; Group IIa: deviations in the dentition within the maxillary incisors only; Group IIb: deviations in the dentition in general. Descriptive statistics for the tooth sizes and dental deviations were presented as the mean, the 95% confidence limits for the mean, and the p-value for the T-statistic. Space: Space was expressed by subtracting the total tooth sizes of incisors, canines and premolars from the length of the arch segments. Size of lateral maxillary incisor: The widths of the lateral incisors were significantly different in groups I, IIa and IIb (p=0.016), and in cases with labially located ectopic canines they were on average 0.65 mm (95% CI: 0.25-1.05, p=0.0019) broader than lateral incisors in cases with palatally located ectopic canines. Space: The least available space was observed in cases with labially located canines. The linear model showed a difference between palatally and labially located ectopic canines (p=0.03). Space related to deviations in the dentition: When space in the dental arch was related to dental deviations (groups I, IIa and IIb), the cases in group IIb with palatally located canines had significantly more space compared with I and IIa. Two subgroups of palatally located ectopic maxillary canine cases, based on registration of space, incisor width and deviations in the morphology of the dentition, were identified.

  10. 40 CFR 60.2780 - What must I include in the deviation report?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... from the emission limitations or operating limit requirements. (b) The averaged and recorded data for those dates. (c) Duration and causes of each deviation from the emission limitations or operating limits... deviated from the emission limitations or operating limits specified in this subpart, include the six items...

  11. Novel potentiometric sensors for the determination of the dinotefuran insecticide residue levels in cucumber and soil samples.

    PubMed

    Abdel-Ghany, Maha F; Hussein, Lobna A; El Azab, Noha F

    2017-03-01

Five new potentiometric membrane sensors for the determination of the dinotefuran levels in cucumber and soil samples have been developed. Four of these sensors were based on a newly designed molecularly imprinted polymer (MIP) material consisting of acrylamide or methacrylic acid as a functional monomer in a plasticized PVC (polyvinyl chloride) membrane, before and after elution of the template. A fifth sensor, a carboxylated PVC-based sensor plasticized with dioctyl phthalate, was also prepared and tested. Sensor 1 (acrylamide, washed) and sensor 3 (methacrylic acid, washed) exhibited significantly enhanced responses towards dinotefuran over the concentration range of 10^-7 to 10^-2 mol L^-1. The limit of detection (LOD) for both sensors was 0.35 µg L^-1. The response was near-Nernstian, with average slopes of 66.3 and 50.8 mV/decade for sensors 1 and 3, respectively. Sensors 2 (acrylamide, non-washed), 4 (methacrylic acid, non-washed) and 5 (carboxylated PVC) exhibited non-Nernstian responses over the concentration range of 10^-7 to 10^-3 mol L^-1, with LODs of 10.07, 6.90 and 4.30 µg L^-1, respectively, as well as average slopes of 39.1, 27.2 and 33 mV/decade, respectively. The application of the proposed sensors to the determination of the dinotefuran levels in spiked soil and cucumber samples was demonstrated. The average recoveries from the cucumber samples were from 7.93% to 106.43%, with a standard deviation of less than 13.73%, and recoveries from soil samples were from 97.46% to 108.71%, with a standard deviation of less than 10.66%. The sensors were applied successfully to the determination of the dinotefuran residue, its rate of disappearance and its half-life in cucumbers in soil, from which a safety pre-harvest interval for dinotefuran was suggested. Copyright © 2016 Elsevier B.V. All rights reserved.
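A sensor's slope in mV/decade is the least-squares slope of the measured potential against log10(concentration); a near-Nernstian monovalent-ion response would be about 59.2 mV/decade at 25 °C. The calibration data below are invented for illustration (a perfectly linear 60 mV/decade response), not taken from the paper:

```python
import math

def calibration_slope(concs, emfs):
    """Least-squares slope (mV per decade) of electrode potential vs.
    log10 of concentration, the figure of merit quoted for
    potentiometric sensors."""
    xs = [math.log10(c) for c in concs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(emfs) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, emfs))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Invented calibration points: 60 mV change per tenfold concentration step.
slope = calibration_slope([1e-6, 1e-5, 1e-4, 1e-3], [100.0, 160.0, 220.0, 280.0])
```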

  12. In-vivo measurement of dynamic joint motion using high speed biplane radiography and CT: application to canine ACL deficiency.

    PubMed

    Tashman, Scott; Anderst, William

    2003-04-01

    Dynamic assessment of three-dimensional (3D) skeletal kinematics is essential for understanding normal joint function as well as the effects of injury or disease. This paper presents a novel technique for measuring in-vivo skeletal kinematics that combines data collected from high-speed biplane radiography and static computed tomography (CT). The goals of the present study were to demonstrate that highly precise measurements can be obtained during dynamic movement studies employing high frame-rate biplane video-radiography, to develop a method for expressing joint kinematics in an anatomically relevant coordinate system and to demonstrate the application of this technique by calculating canine tibio-femoral kinematics during dynamic motion. The method consists of four components: the generation and acquisition of high frame rate biplane radiographs, identification and 3D tracking of implanted bone markers, CT-based coordinate system determination, and kinematic analysis routines for determining joint motion in anatomically based coordinates. Results from dynamic tracking of markers inserted in a phantom object showed the system bias was insignificant (-0.02 mm). The average precision in tracking implanted markers in-vivo was 0.064 mm for the distance between markers and 0.31 degree for the angles between markers. Across-trial standard deviations for tibio-femoral translations were similar for all three motion directions, averaging 0.14 mm (range 0.08 to 0.20 mm). Variability in tibio-femoral rotations was more dependent on rotation axis, with across-trial standard deviations averaging 1.71 degrees for flexion/extension, 0.90 degree for internal/external rotation, and 0.40 degree for varus/valgus rotation. Advantages of this technique over traditional motion analysis methods include the elimination of skin motion artifacts, improved tracking precision and the ability to present results in a consistent anatomical reference frame.
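Tracking implanted markers and re-expressing joint motion in an anatomical coordinate system reduces to estimating rigid-body transforms between marker sets. Below is a minimal 2-D version of that estimation (the study itself works in 3-D; this is a generic closed-form least-squares sketch, not the authors' implementation):

```python
import math

def rigid_transform_2d(P, Q):
    """Least-squares 2-D rigid transform mapping marker set P onto Q
    (lists of (x, y) pairs in corresponding order). Returns the
    rotation angle in radians and the translation (tx, ty)."""
    n = len(P)
    cpx = sum(x for x, _ in P) / n
    cpy = sum(y for _, y in P) / n
    cqx = sum(x for x, _ in Q) / n
    cqy = sum(y for _, y in Q) / n
    s_cross = s_dot = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay = px - cpx, py - cpy   # P centered on its centroid
        bx, by = qx - cqx, qy - cqy   # Q centered on its centroid
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    tx = cqx - (math.cos(theta) * cpx - math.sin(theta) * cpy)
    ty = cqy - (math.sin(theta) * cpx + math.cos(theta) * cpy)
    return theta, (tx, ty)

# Marker set rotated 90 degrees and translated by (2, 3).
theta, t = rigid_transform_2d([(0, 0), (1, 0), (0, 1)], [(2, 3), (2, 4), (1, 3)])
```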

  13. Does Assessment Type Matter? A Measurement Invariance Analysis of Online and Paper and Pencil Assessment of the Community Assessment of Psychic Experiences (CAPE)

    PubMed Central

    Vleeschouwer, Marloes; Schubart, Chris D.; Henquet, Cecile; Myin-Germeys, Inez; van Gastel, Willemijn A.; Hillegers, Manon H. J.; van Os, Jim J.; Boks, Marco P. M.; Derks, Eske M.

    2014-01-01

    Background The psychometric properties of an online test are not necessarily identical to its paper and pencil original. The aim of this study is to test whether the factor structure of the Community Assessment of Psychic Experiences (CAPE) is measurement invariant with respect to online vs. paper and pencil assessment. Method The factor structure of CAPE items assessed by paper and pencil (N = 796) was compared with the factor structure of CAPE items assessed by the Internet (N = 21,590) using formal tests for Measurement Invariance (MI). The effect size was calculated by estimating the Signed Item Difference in the Sample (SIDS) index and the Signed Test Difference in the Sample (STDS) for a hypothetical subject who scores 2 standard deviations above average on the latent dimensions. Results The more restricted Metric Invariance model showed a significantly worse fit compared to the less restricted Configural Invariance model (χ2(23) = 152.75, p<0.001). However, the SIDS indices appear to be small, with an average of −0.11. A STDS of −4.80 indicates that Internet sample members who score 2 standard deviations above average would be expected to score 4.80 points lower on the CAPE total scale (ranging from 42 to 114 points) than would members of the Paper sample with the same latent trait score. Conclusions Our findings did not support measurement invariance with respect to assessment method. Because of the small effect sizes, the measurement differences between the online assessed CAPE and its paper and pencil original can be neglected without major consequences for research purposes. However, a person with a high vulnerability for psychotic symptoms would score 4.80 points lower on the total scale if the CAPE is assessed online compared to paper and pencil assessment. Therefore, for clinical purposes, one should be cautious with online assessment of the CAPE. PMID:24465389

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacFadden, Derek; Zhang Beibei; Brock, Kristy K.

Purpose: Increasing the magnetic resonance imaging (MRI) field strength can improve image resolution and quality, but concerns remain regarding the influence on geometric fidelity. The objectives of the present study were to spatially investigate the effect of 3-Tesla (3T) MRI on clinical target localization for stereotactic radiosurgery. Methods and Materials: A total of 39 patients were enrolled in a research ethics board-approved prospective clinical trial. Imaging (1.5T and 3T MRI and computed tomography) was performed after stereotactic frame placement. Stereotactic target localization at 1.5T vs. 3T was retrospectively analyzed in a representative cohort of patients with tumor (n = 4) and functional (n = 5) radiosurgical targets. The spatial congruency of the tumor gross target volumes was determined by the mean discrepancy between the average gross target volume surfaces at 1.5T and 3T. Reproducibility was assessed by the displacement from an averaged surface and volume congruency. Spatial congruency and the reproducibility of functional radiosurgical targets was determined by comparing the mean and standard deviation of the isocenter coordinates. Results: Overall, the mean absolute discrepancy across all patients was 0.67 mm (95% confidence interval, 0.51-0.83), significantly <1 mm (p < .010). No differences were found in the overall interuser target volume congruence (mean, 84% for 1.5T vs. 84% for 3T, p > .4), and the gross target volume surface mean displacements were similar within and between users. The overall average isocenter coordinate discrepancy for the functional targets at 1.5T and 3T was 0.33 mm (95% confidence interval, 0.20-0.48), with no patient-specific differences between the mean values (p > .2) or standard deviations (p > .1). Conclusion: Our results have provided clinically relevant evidence supporting the spatial validity of 3T MRI for use in stereotactic radiosurgery under the imaging conditions used.

  15. Calculating net primary productivity of forest ecosystem with G4M model: case study on South Korea

    NASA Astrophysics Data System (ADS)

    Sung, S.; Forsell, N.; Kindermann, G.; Lee, D. K.

    2015-12-01

Net primary productivity (NPP) is considered an important indicator for forest ecosystems, since the role of forests is highlighted as a stepping stone for mitigating climate change. Rapidly urbanizing countries with high carbon dioxide emissions have a particular interest in calculating forest NPP under climate change, and maximizing carbon sequestration in the forest sector has become a global goal for minimizing the impacts of climate change. Therefore, the objective of this research is to estimate carbon stock change under different climate change scenarios in South Korea using the G4M (Global Forestry Model) model. We analyzed four climate change scenarios for different Representative Concentration Pathways (RCPs). In this study we used higher-resolution data (1 km × 1 km) to produce a precise estimate of NPP from the four regionalized climate change scenarios in the G4M model. Finally, we set up other environmental variables for G4M, such as water holding capacity, soil type and elevation. As a result, temperature showed a significant trend from 2011 to 2100. Average annual temperature increased by more than 5 ℃ in the RCP8.5 scenario, while it increased by 1 ℃ in the RCP2.6 scenario; the standard deviation of the annual average temperature showed a similar trend. Average annual precipitation was similar across the four scenarios, but its standard deviation was higher in the RCP8.5 scenario, indicating a wider range of precipitation. These results show that climate indicators such as temperature and precipitation carry uncertainties in climate change scenarios. By 2100, NPP ranged from 5-13 tC/ha/year in the RCP2.6 scenario to 9-21 tC/ha/year in the RCP8.5 scenario, and the spatial distribution of NPP differed among the scenarios. In conclusion, we calculated differences in temperature, precipitation and NPP change under different climate change scenarios.
This study can be applied to maximizing carbon sequestration of vegetation.

  16. Normative Data for a User-friendly Paradigm for Pattern Electroretinogram Recording

    PubMed Central

    Porciatti, Vittorio; Ventura, Lori M.

    2009-01-01

    Purpose To provide normative data for a user-friendly paradigm for the pattern electroretinogram (PERG) optimized for glaucoma screening (PERGLA). Design Prospective nonrandomized case series. Participants Ninety-three normal subjects ranging in age between 22 and 85 years. Methods A circular black–white grating of 25° visual angle, reversing 16.28 times per second, was presented on a television monitor placed inside a Ganzfeld bowl. The PERG was recorded simultaneously from both eyes with undilated pupils by means of skin cup electrodes taped over the lower eyelids. Reference electrodes were taped on the ipsilateral temples. Electrophysiologic signals were conventionally amplified, filtered, and digitized. Six hundred artifact-free repetitions were averaged. The response component at the reversal frequency was isolated automatically by digital Fourier transforms and was expressed as a deviation from the age-corrected average. The procedure took approximately 4 minutes. Main Outcome Measures Pattern electroretinogram amplitude (μV) and phase (π rad); response variability (coefficient of variation [CV] = standard deviation [SD] / mean × 100) of amplitude and phase of 2 partial averages that build up the PERG waveform; amplitude (μV) of background noise waveform, obtained by multiplying alternate sweeps by +1 and −1; and interocular asymmetry (CV of amplitude and phase of the PERG of the 2 eyes). Results On average, the PERG has a signal-to-noise ratio of more than 13:1. The CVs of intrasession and intersession variabilities in amplitude and phase are lower than 10% and 2%, respectively, and do not depend on the operator. The CV of interocular asymmetries in amplitude and phase are 9.8±8.8% and 1.5±1.4%, respectively. The PERG amplitude and phase decrease with age. Residuals of linear regression lines have normal distribution, with an SD of 0.1 log units for amplitude and 0.019 log units for phase. 
Age-corrected confidence limits (P<0.05) are defined as ±2 SD of residuals. Conclusions The PERGLA paradigm yields responses as reliable as the best previously reported using standard protocols. The ease of execution and interpretation of results of PERGLA indicate a potential value for objective screening and follow-up of glaucoma. PMID:14711729

  17. Review the number of accidents in Tehran over a two-year period and prediction of the number of events based on a time-series model

    PubMed Central

    Teymuri, Ghulam Heidar; Sadeghian, Marzieh; Kangavari, Mehdi; Asghari, Mehdi; Madrese, Elham; Abbasinia, Marzieh; Ahmadnezhad, Iman; Gholizadeh, Yavar

    2013-01-01

Background: One of the significant dangers that threaten people's lives is the increased risk of accidents. Annually, more than 1.3 million people die around the world as a result of accidents, and it has been estimated that approximately 300 deaths occur daily due to traffic accidents in the world, with more than 50% of that number being people who were not even passengers in the cars. The aim of this study was to examine traffic accidents in Tehran and forecast the number of future accidents using a time-series model. Methods: The study was a cross-sectional study conducted in 2011. The sample population was all traffic accidents that caused death and physical injuries in Tehran in 2010 and 2011, as registered in the Tehran Emergency ward. The present study used Minitab 15 software to provide a description of accidents in Tehran for the specified time period as well as those that occurred during April 2012. Results: The results indicated that the average number of daily traffic accidents in Tehran in 2010 was 187 with a standard deviation of 83.6. In 2011, there was an average of 180 daily traffic accidents with a standard deviation of 39.5. One-way analysis of variance indicated that the average number of accidents in the city differed across months of the year (P < 0.05). Most of the accidents occurred in March, July, August, and September; thus, more accidents occurred in the summer than in the other seasons. The number of accidents for April 2012 was predicted with an autoregressive moving average (ARMA) model. The number of accidents displayed a seasonal trend. The prediction for April 2012 indicated that a total of 4,459 accidents would occur, with a mean of 149 accidents per day. Conclusion: The number of accidents in Tehran displayed a seasonal trend, and the number of accidents differed across seasons of the year. PMID:26120405
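The forecasting step can be illustrated with a minimal autoregressive fit. The sketch below fits an AR(1) model by least squares as a simple stand-in for the seasonal ARMA modelling mentioned in the abstract; the series is synthetic, not the Tehran data:

```python
def fit_ar1(series):
    """Fit x[t] = c + phi * x[t-1] by ordinary least squares.
    A minimal autoregressive model; real use would employ a seasonal
    ARMA fit as in the study."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    c = my - phi * mx
    return c, phi

def forecast(series, steps):
    """Iterate the fitted AR(1) recurrence forward from the last value."""
    c, phi = fit_ar1(series)
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# Synthetic series that exactly follows x[t+1] = 10 + 0.5 * x[t].
series = [100.0, 60.0, 40.0, 30.0, 25.0, 22.5]
```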

  18. Differences in duty hours and their relationship with academic parameters between preliminary and categorical general surgery residents.

    PubMed

    Eid, Joseph J; Zendejas, Benjamin; Heller, Stephanie F; Farley, David R

    2015-01-01

There is a perceived notion that nondesignated preliminary general surgery (P-GS) interns are treated differently (i.e., overworked) than their categorical GS (C-GS) counterparts, or that, in an effort to prove themselves worthy of a categorical position, nondesignated preliminary residents may choose to work more. Empirical evidence examining duty-hour differences between P-GS and C-GS residents is lacking. We retrospectively reviewed 4 academic years (July 2009 to June 2013) of our self-entered duty-hour database. Duty hours were averaged over 4-week periods and then averaged annually for each intern. Duty-hour averages and the percentage of conference attendance of the P-GS and C-GS interns were compared. Sensitivity analyses were conducted to evaluate the effect of the 2011 duty-hour regulations, attendance at educational activities, seasonal variations in workload, and the Match Day effect. A total of 70 P-GS and 43 C-GS interns were compared. Duty-hour averages (±standard deviation, range) were 64.4 h/wk (±4.6; 45-70) for the P-GS interns and 64.1 h/wk (±3.9; 57-72) for the C-GS interns, p = 0.8. Mean (±standard deviation, range) conference attendance was 61% (±17; 33-89) for the P-GS interns and 66% (±18; 44-85) for the C-GS interns (p = 0.13). Duty-hour averages for both groups positively correlated with conference attendance (r = 0.27, p < 0.001). The P-GS and the C-GS interns worked on average 4.8 hours more per week after the implementation of the 2011 Accreditation Council for Graduate Medical Education duty-hour regulations than before implementation (66.7 ± 4.1 vs 62 ± 3.1, p < 0.0001), with no difference between the groups. No seasonal variation in duty hours was encountered for either group. For the P-GS interns, no difference in duty hours was observed before or after Match Day. At our institution, the P-GS and the C-GS interns have equivalent duty-hour periods and similar conference attendance.
As expected, a positive correlation was observed between duty hours and conference attendance. Average weekly duty hours increased by almost 5 hours after the implementation of the 2011 duty-hour regulations. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  19. Automatic estimation of dynamics of ionospheric disturbances with 1–15 minute lifetimes as derived from ISTP SB RAS fast chirp-ionosonde data

    NASA Astrophysics Data System (ADS)

    Berngardt, Oleg; Bubnova, Tatyana; Podlesnyi, Aleksey

    2018-03-01

    We propose and test a method of analyzing ionograms of vertical ionospheric sounding, which is based on detecting deviations of the shape of an ionogram from its regular (averaged) shape. We interpret these deviations in terms of reflection from electron density irregularities at heights corresponding to the effective height. We examine the irregularities thus discovered within the framework of a model of a localized, uniformly moving irregularity, and determine their characteristic parameters: effective heights and observed vertical velocities. We analyze selected experimental data for three seasons (spring, winter, autumn) obtained near Irkutsk with a fast chirp ionosonde of ISTP SB RAS in 2013-2015. The analysis of six days of observations conducted in these seasons has shown that the observed vertical drift of the irregularities exhibits two characteristic distributions: a wide velocity distribution with a mean near 0 m/s and a standard deviation of ∼250 m/s, and a narrow distribution with a mean near -160 m/s. The analysis has demonstrated the effectiveness of the proposed algorithm for the automatic analysis of vertical sounding data with a high repetition rate.
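The core of the detection step — comparing each ionogram against the averaged ("regular") shape — can be sketched as follows. This is an assumed, simplified reading of the method: it presumes ionograms resampled onto a common effective-height grid, and the function name and sigma-based threshold are illustrative, not the authors' actual criterion.

```python
from statistics import mean, stdev

def detect_deviations(ionograms, threshold=3.0):
    """Flag effective-height bins where an ionogram's amplitude deviates
    from the averaged ('regular') shape by more than threshold * sigma.

    ionograms: list of amplitude profiles, all sampled on the same
    effective-height grid.  Returns the regular shape and, per ionogram,
    the list of flagged height-bin indices.
    """
    n_bins = len(ionograms[0])
    regular = [mean(ig[b] for ig in ionograms) for b in range(n_bins)]
    sigma = [stdev(ig[b] for ig in ionograms) for b in range(n_bins)]
    flagged = []
    for ig in ionograms:
        flagged.append([b for b in range(n_bins)
                        if sigma[b] > 0
                        and abs(ig[b] - regular[b]) > threshold * sigma[b]])
    return regular, flagged
```

Bins flagged this way would then feed the localized-irregularity model to estimate effective heights and vertical velocities.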

  20. Improved thermal conductivity of TiO2-SiO2 hybrid nanofluid in ethylene glycol and water mixture

    NASA Astrophysics Data System (ADS)

    Hamid, K. A.; Azmi, W. H.; Nabil, M. F.; Mamat, R.

    2017-10-01

    The need to study hybrid nanofluid properties such as thermal conductivity has increased recently in order to provide a better understanding of nanofluid thermal properties and behaviour. Because of their ability to improve heat transfer relative to conventional heat transfer fluids, nanofluids are widely investigated as a new class of coolants. This paper presents the thermal conductivity of TiO2-SiO2 nanoparticles dispersed in an ethylene glycol (EG)-water mixture. The thermal conductivity of the TiO2-SiO2 hybrid nanofluid is measured using a KD2 Pro Thermal Properties Analyzer for concentrations ranging from 0.5% to 3.0% and temperatures of 30, 50 and 70°C. The results show that increasing concentration and temperature both enhance the thermal conductivity over the range of concentrations studied. The maximum enhancement, 22.1%, is found at a concentration of 3.0% and a temperature of 70°C. A new equation is proposed based on the experimental data and is found to be in good agreement with it: the average deviation (AD), standard deviation (SD) and maximum deviation (MD) are 1.67%, 1.66% and 5.13%, respectively.
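The AD/SD/MD goodness-of-fit metrics quoted above are percent deviations of the fitted correlation from the measured data. A minimal sketch of one common convention follows; the paper's exact definitions may differ, and the sample values are invented for illustration.

```python
from math import sqrt

def deviation_metrics(measured, predicted):
    """Percent deviations of a fitted correlation from measured data:
    average deviation (AD), sample standard deviation of the deviations (SD),
    and maximum deviation (MD)."""
    dev = [abs(p - m) / m * 100 for m, p in zip(measured, predicted)]
    ad = sum(dev) / len(dev)
    sd = sqrt(sum((d - ad) ** 2 for d in dev) / (len(dev) - 1))
    md = max(dev)
    return ad, sd, md

# Invented example: measured vs. correlation-predicted thermal conductivity (W/m.K)
ad, sd, md = deviation_metrics([0.42, 0.45, 0.49], [0.43, 0.45, 0.48])
```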
