Introducing the Mean Absolute Deviation "Effect" Size
ERIC Educational Resources Information Center
Gorard, Stephen
2015-01-01
This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
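As a minimal illustration of the contrast this abstract draws, here is a short Python sketch (the data values are invented, not from the paper) comparing the mean absolute deviation with the standard deviation on a sample containing one extreme score:

```python
import statistics

def mean_absolute_deviation(xs):
    """Average absolute distance of each value from the sample mean."""
    m = statistics.fmean(xs)
    return statistics.fmean(abs(x - m) for x in xs)

clean = [10, 11, 9, 10, 12, 8, 10]
with_outlier = clean + [40]  # a single extreme score

for label, xs in [("clean", clean), ("with outlier", with_outlier)]:
    print(label,
          round(mean_absolute_deviation(xs), 2),
          round(statistics.stdev(xs), 2))
```

The extreme score inflates both measures, but the standard deviation, which squares deviations, grows faster.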
9 CFR 439.20 - Criteria for maintaining accreditation.
Code of Federal Regulations, 2011 CFR
2011-01-01
... deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is...) Variability: The absolute value of the standardized difference between the accredited laboratory's result and... constant, is used in place of the absolute value of the standardized difference to determine the CUSUM-V...
NASA Astrophysics Data System (ADS)
Anees, Amir; Khan, Waqar Ahmad; Gondal, Muhammad Asif; Hussain, Iqtadar
2013-07-01
The aim of this work is to make use of the mean absolute deviation (MAD) method for evaluating substitution boxes used in the Advanced Encryption Standard. In this paper, we use the MAD technique to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, MAD is applied to the Advanced Encryption Standard (AES), affine-power-affine (APA), Gray, Liu J., Residue Prime, S8 AES, SKIPJACK, and Xyi substitution boxes.
9 CFR 439.10 - Criteria for obtaining accreditation.
Code of Federal Regulations, 2014 CFR
2014-01-01
... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...
In vivo dosimetry for external photon treatments of head and neck cancers by diodes and TLDS.
Tung, C J; Wang, H C; Lo, S H; Wu, J M; Wang, C J
2004-01-01
In vivo dosimetry was implemented for large-field treatments of head and neck cancers. Diode and thermoluminescence dosemeter (TLD) measurements were carried out on linear accelerators with 6 MV photon beams. ESTRO in vivo dosimetry protocols were followed in determining midline doses from measurements of entrance and exit doses. Of the fields monitored by diodes, the maximum absolute deviation of measured midline doses from planned target doses was 8%, with a mean of -1.0% and a standard deviation of 2.7%. If planned target doses were calculated using radiological water-equivalent thicknesses rather than patient geometric thicknesses, the maximum absolute deviation dropped to 4%, with a mean of 0.7% and a standard deviation of 1.8%. For in vivo dosimetry monitored by TLDs, the shift in mean dose remained small but the statistical precision was poorer.
Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven
2010-09-01
To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.
NASA Astrophysics Data System (ADS)
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf
2015-05-01
All surveying instruments and their measurements suffer from errors. To refine measurement results, it is necessary to use procedures that restrict the influence of instrument errors on the measured values, or to apply numerical corrections. In precise engineering surveying for industrial applications, the accuracy of distances, usually realized over relatively short ranges, is a key parameter limiting the resulting accuracy of the determined values (coordinates, etc.). To determine the size of the systematic and random errors of measured distances, tests were carried out with the idea of suppressing random error by averaging repeated measurements, and of reducing the influence of systematic errors by identifying their absolute size on an absolute baseline realized in the geodetic laboratory of the Faculty of Civil Engineering, CTU in Prague. Sixteen concrete pillars with forced centerings were set up, and the absolute distances between the points were determined with a standard deviation of 0.02 mm using a Leica Absolute Tracker AT401. For any distance measured by the calibrated instruments (up to the length of the testing baseline, i.e., 38.6 m), the error correction of the distance meter can now be determined in two ways: by interpolation on the raw data, or by a correction function derived using an FFT. The quality of this calibration and correction procedure was tested experimentally on three instruments (Trimble S6 HP, Topcon GPT-7501, Trimble M3) against the Leica Absolute Tracker AT401. The correction procedure reduced the standard deviation of the measured distances significantly, to less than 0.6 mm.
For the Topcon GPT-7501, the nominal standard deviation is 2 mm; 2.8 mm was achieved without corrections and 0.55 mm after corrections. For the Trimble M3, the nominal standard deviation is 3 mm; 1.1 mm was achieved without corrections and 0.58 mm after corrections. For the Trimble S6, the nominal standard deviation is 1 mm; 1.2 mm was achieved without corrections and 0.51 mm after corrections. The proposed calibration and correction procedure is, in our opinion, very suitable for increasing the accuracy of electronic distance measurement and allows a common surveying instrument to achieve uncommonly high precision.
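The first correction approach mentioned above, interpolation on the raw baseline data, might look like the following in outline. All pillar distances and residuals here are invented for illustration; only the 38.6 m baseline length comes from the abstract:

```python
import numpy as np

# Hypothetical baseline calibration data: reference distances between pillars
# and the distances the instrument under test actually measured.
baseline_dist = np.array([5.0, 10.0, 20.0, 30.0, 38.6])     # reference [m]
measured_dist = np.array([5.0012, 10.0008, 20.0019, 30.0011, 38.6015])

residual = measured_dist - baseline_dist   # instrument error at each pillar
correction = -residual                     # additive correction to apply

def correct(d):
    """Correct a raw distance by linear interpolation between pillar residuals."""
    return d + np.interp(d, baseline_dist, correction)

print(round(correct(15.0023), 4))          # corrected distance for a raw reading
```

The FFT-based variant described in the abstract would instead fit a periodic correction function to the same residuals.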
Nebuya, S; Noshiro, M; Yonemoto, A; Tateno, S; Brown, B H; Smallwood, R H; Milnes, P
2006-05-01
Inter-subject variability has caused the majority of previous electrical impedance tomography (EIT) techniques to focus on the derivation of relative or difference measures of in vivo tissue resistivity. Implicit in these techniques is the requirement for a reference or previously defined data set. This study assesses the accuracy and optimum electrode placement strategy for a recently developed method which estimates an absolute value of organ resistivity without recourse to a reference data set. Since this measurement of tissue resistivity is absolute, in ohm metres, it should be possible to use EIT measurements for the objective diagnosis of lung diseases such as pulmonary oedema and emphysema. However, the stability and reproducibility of the method have not yet been investigated fully. To investigate these problems, this study used a Sheffield Mk3.5 system which was configured to operate with eight measurement electrodes. As a result of this study, the absolute resistivity measurement was found to be insensitive to the electrode level between 4 and 5 cm above the xiphoid process. The level of the electrode plane was varied between 2 cm and 7 cm above the xiphoid process. Absolute lung resistivity in 18 normal subjects (age 22.6 +/- 4.9 years, height 169.1 +/- 5.7 cm, weight 60.6 +/- 4.5 kg, body mass index 21.2 +/- 1.6: mean +/- standard deviation) was measured during both normal and deep breathing for 1 min. Three sets of measurements were made over a period of several days on each of nine of the normal male subjects. No significant differences in absolute lung resistivity were found, either during normal tidal breathing between the electrode levels of 4 and 5 cm (9.3 +/- 2.4 Ω m, 9.6 +/- 1.9 Ω m at 4 and 5 cm, respectively: mean +/- standard deviation) or during deep breathing between the electrode levels of 4 and 5 cm (10.9 +/- 2.9 Ω m and 11.1 +/- 2.3 Ω m, respectively: mean +/- standard deviation). 
However, the differences in absolute lung resistivity between normal and deep tidal breathing at the same electrode level are significant. No significant difference was found in the coefficient of variation between the electrode levels of 4 and 5 cm (9.5 +/- 3.6%, 8.5 +/- 3.2% at 4 and 5 cm, respectively: mean +/- standard deviation in individual subjects). Therefore, the electrode levels of 4 and 5 cm above the xiphoid process showed reasonable reliability in the measurement of absolute lung resistivity both among individuals and over time.
Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data
NASA Astrophysics Data System (ADS)
Shulenin, V. P.
2016-10-01
Properties of robust estimations of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimation of the average Gini differences have asymptotically normal distributions and bounded influence functions, are B-robust estimations, and hence, unlike the estimation of the standard deviation, are protected from the presence of outliers in the sample. Results of comparison of estimations of the scale parameter are given for a Gaussian model with contamination. An adaptive variant of the modified estimation of the average Gini differences is considered.
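A small sketch of one of the robust scale estimators discussed above, the scaled median of absolute deviations (MADN, with the Gaussian consistency factor 1.4826), against the standard deviation under contamination. The simulation is my own illustration, not the paper's:

```python
import random
import statistics

def madn(xs):
    """Median of absolute deviations from the median, scaled for the Gaussian."""
    med = statistics.median(xs)
    return 1.4826 * statistics.median(abs(x - med) for x in xs)

random.seed(1)
clean = [random.gauss(0, 1) for _ in range(200)]
contaminated = clean + [random.gauss(0, 10) for _ in range(10)]  # ~5% outliers

print(round(statistics.stdev(clean), 2), round(madn(clean), 2))
print(round(statistics.stdev(contaminated), 2), round(madn(contaminated), 2))
```

On the contaminated sample the standard deviation is dragged upward by the outliers, while MADN stays close to the scale of the clean majority, reflecting its bounded influence function.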
Bolann, B J; Asberg, A
2004-01-01
The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) The stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability, should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
Echenique-Robba, Pablo; Nelo-Bazán, María Alejandra; Carrodeguas, José A
2013-01-01
When the value of a quantity x for a number of systems (cells, molecules, people, chunks of metal, DNA vectors, and so on) is measured and the aim is to replicate the whole set again in different trials or assays, scientists often obtain quite different measurements despite efforts at a near-equal design. As a consequence, some systems' averages present standard deviations that are too large to render statistically significant results. This work presents a novel correction method of very low mathematical and numerical complexity that can reduce the standard deviation of such results and increase their statistical significance. Two conditions are to be met: the inter-system variations of x matter while its absolute value does not, and a similar tendency in the values of x must be present in the different assays (in other words, the results of different assays must present a high linear correlation). We demonstrate the improvements this method offers with a cell biology experiment, but it can be applied to any problem that conforms to the described structure and requirements, in any quantitative scientific field that deals with data subject to uncertainty.
A standardized model for predicting flap failure using indocyanine green dye
NASA Astrophysics Data System (ADS)
Zimmermann, Terence M.; Moore, Lindsay S.; Warram, Jason M.; Greene, Benjamin J.; Nakhmani, Arie; Korb, Melissa L.; Rosenthal, Eben L.
2016-03-01
Techniques that provide a non-invasive method for evaluation of intraoperative skin flap perfusion are currently available but underutilized. We hypothesize that intraoperative vascular imaging can be used to reliably assess skin flap perfusion and elucidate areas of future necrosis by means of a standardized critical perfusion threshold. Five animal groups (negative controls, n=4; positive controls, n=5; chemotherapy group, n=5; radiation group, n=5; chemoradiation group, n=5) underwent pre-flap treatments two weeks prior to undergoing random pattern dorsal fasciocutaneous flaps with a length to width ratio of 2:1 (3 x 1.5 cm). Flap perfusion was assessed via laser-assisted indocyanine green dye angiography and compared to standard clinical assessment for predictive accuracy of flap necrosis. For estimating flap-failure, clinical prediction achieved a sensitivity of 79.3% and a specificity of 90.5%. When average flap perfusion was more than three standard deviations below the average flap perfusion for the negative control group at the time of the flap procedure (144.3+/-17.05 absolute perfusion units), laser-assisted indocyanine green dye angiography achieved a sensitivity of 81.1% and a specificity of 97.3%. When absolute perfusion units were seven standard deviations below the average flap perfusion for the negative control group, specificity of necrosis prediction was 100%. Quantitative absolute perfusion units can improve specificity for intraoperative prediction of viable tissue. Using this strategy, a positive predictive threshold of flap failure can be standardized for clinical use.
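The perfusion thresholds implied by the abstract's numbers can be checked with a few lines of arithmetic; the variable names are mine:

```python
# Control-group statistics from the abstract, in absolute perfusion units (APU).
mean_apu, sd_apu = 144.3, 17.05

threshold_3sd = mean_apu - 3 * sd_apu   # cutoff: sensitivity 81.1%, specificity 97.3%
threshold_7sd = mean_apu - 7 * sd_apu   # cutoff: specificity 100% for necrosis

print(round(threshold_3sd, 2))  # → 93.15
print(round(threshold_7sd, 2))  # → 24.95
```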
A Robust Interpretation of Teaching Evaluation Ratings
ERIC Educational Resources Information Center
Bi, Henry H.
2018-01-01
There are no absolute standards regarding what teaching evaluation ratings are satisfactory. It is also problematic to compare teaching evaluation ratings with the average or with a cutoff number to determine whether they are adequate. In this paper, we use average and standard deviation charts (X̄-S charts), which are based on the theory…
The truly remarkable universality of half a standard deviation: confirmation through another look.
Norman, Geoffrey R; Sloan, Jeff A; Wyrwich, Kathleen W
2004-10-01
In this issue of Expert Review of Pharmacoeconomics and Outcomes Research, Farivar, Liu, and Hays present their findings in 'Another look at the half standard deviation estimate of the minimally important difference in health-related quality of life scores' (hereafter referred to as 'Another look'). These researchers have re-examined the May 2003 Medical Care article 'Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation' (hereafter referred to as 'Remarkable') in the hope of supporting their hypothesis that the minimally important difference in health-related quality of life measures is undoubtedly closer to 0.3 standard deviations than to 0.5. Nonetheless, despite their extensive wranglings with the exclusion of many articles that we included in our review, the inclusion of articles that we did not, and the recalculation of effect sizes using the absolute value of the mean differences, in our opinion the results of the 'Another look' article confirm the findings of the 'Remarkable' paper.
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of diameters meter per meter m/m 1 b atomic oxygen-to-carbon ratio mole per mole mol/mol 1 C # number... error between a quantity and its reference e brake-specific emission or fuel consumption gram per... standard deviation S Sutherland constant kelvin K K SEE standard estimate of error T absolute temperature...
Cruise Summary of WHP P6, A10, I3 and I4 Revisits in 2003
NASA Astrophysics Data System (ADS)
Kawano, T.; Uchida, H.; Schneider, W.; Kumamoto, Y.; Nishina, A.; Aoyama, M.; Murata, A.; Sasaki, K.; Yoshikawa, Y.; Watanabe, S.; Fukasawa, M.
2004-12-01
The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) conducted a circumnavigating research cruise in the Southern Hemisphere with R/V Mirai. In this presentation, we introduce an outline of the cruise and the quality of the data obtained. The cruise started on Aug. 3, 2003 in Brisbane, Australia, and sailed eastward until it reached Fremantle, Australia on Feb. 19, 2004. It comprised six legs; legs 1, 2, 4, and 5 were revisits of WOCE Hydrographic Program (WHP) sections P6W, P6E, A10, and I3/I4, respectively. The sections consisted of about 500 hydrographic stations in total. At each station, CTD profiles and up to 36 water samples in 12-L Niskin-X bottles were taken from the surface to within 10 m of the bottom. Water samples were analyzed at every station for salinity, dissolved oxygen (DO), and nutrients, and at alternate stations for concentrations of freons, dissolved inorganic carbon (CT), total alkalinity (AT), pH, and so on. Approximately 17,000 samples were obtained for salinity. The standard seawater was measured repeatedly to estimate the uncertainty caused by the setting and stability of the salinometer; the standard deviation of 699 repeated runs of standard seawater was 0.0002 in salinity. Replicate samples, i.e., pairs of samples drawn from the same Niskin bottle into different sample bottles, were taken to evaluate the overall uncertainty. The standard deviation of the absolute differences of 2,769 replicates was also 0.0002 in salinity. For DO, about 13,400 samples were obtained and analyzed by a photometric titration technique. The reproducibility estimated from the absolute standard deviation of 1,625 replicates was about 0.09 umol/kg. CTD temperature was calibrated against a deep-ocean standards thermometer (SBE35) attached to the CTD, using the polynomial expression Tcal = T - (a + b*P + c*t), where Tcal is the calibrated temperature, T is CTD temperature, P is CTD pressure, and t is time. 
Calibration coefficients a, b, and c were determined for each station by minimizing the sum of absolute deviations from the SBE35 temperature below 2,000 dbar. CTD salinity and DO were fitted to the values obtained from sampled-water analysis using similar polynomials. These corrections yielded deviations of about 0.0002 K in temperature, 0.0003 in salinity, and 0.6 umol/kg in DO. Nutrient analyses were performed on 16,000 samples using the reference material of nutrients in seawater (RMNS). To establish traceability and obtain higher-quality data, 500 bottles of RMNS from the same lot and 150 sets of RMNSs were used. The precisions of the phosphate, nitrate, and silicate measurements were 0.18%, 0.17%, and 0.16% in terms of the median over 493 stations, respectively. The nutrient concentrations could be reported with explicit uncertainties because of the repeated runs of RMNSs. All analyses of the CO2-system parameters in the water columns were finished onboard. Analytical precisions of CT, AT, and pH were estimated to be ~1.0 umol/kg, ~2.0 umol/kg, and ~7×10^-4 pH units, respectively. Approximately 6,300 samples were obtained for CFC-11 and CFC-12. The concentrations were determined with an electron capture detector gas chromatograph (ECD-GC) attached to a purge-and-trap system. The reproducibility estimated from the absolute standard deviation of 365 replicates was less than 1% with respect to the surface concentrations.
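The calibration step described above, fitting Tcal = T - (a + b*P + c*t) by minimizing the sum of absolute deviations from the reference temperature, can be sketched as follows. The data are synthetic, and the iteratively reweighted least-squares (IRLS) approximation of the L1 fit is my own stand-in for whatever solver the authors used:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
P = rng.uniform(2000, 6000, n)          # CTD pressure [dbar], below 2,000 dbar range
t = rng.uniform(0, 3, n)                # time [h] since start of station
true_a, true_b, true_c = 1e-3, 2e-7, 5e-5
T_ref = rng.uniform(1.0, 4.0, n)        # reference (SBE35) temperature [deg C]
T_ctd = T_ref + true_a + true_b * P + true_c * t + rng.normal(0, 2e-4, n)

# Fit r = a + b*P + c*t in the L1 sense via IRLS: reweight by 1/|residual|.
X = np.column_stack([np.ones(n), P, t])
r = T_ctd - T_ref
beta = np.linalg.lstsq(X, r, rcond=None)[0]      # L2 starting point
for _ in range(50):
    w = 1.0 / np.maximum(np.abs(r - X @ beta), 1e-8)
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(sw[:, None] * X, sw * r, rcond=None)[0]

a, b, c = beta
T_cal = T_ctd - (a + b * P + c * t)
print(np.mean(np.abs(T_cal - T_ref)))            # residual near the 2e-4 K noise level
```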
Dual-Polarization Observations of Slowly Varying Solar Emissions from a Mobile X-Band Radar
Gabella, Marco; Leuenberger, Andreas
2017-01-01
The radio noise that comes from the Sun has been reported in the literature as a reference signal to check the quality of dual-polarization weather radar receivers for the S-band and C-band. In most cases, the focus was on relative calibration: horizontal and vertical polarizations were evaluated versus the reference signal mainly in terms of the standard deviation of the difference. This means that the investigated radar receivers were able to reproduce the slowly varying component of the microwave signal emitted by the Sun. A novel method, aimed at the absolute calibration of dual-polarization receivers, has recently been presented and applied for the C-band. This method requires the antenna beam axis to be pointed towards the center of the Sun for less than a minute. Standard deviations of the difference as low as 0.1 dB have been found for the Swiss radars. As far as the absolute calibration is concerned, the average differences were of the order of −0.6 dB (after noise subtraction). The method has been implemented on a mobile, X-band radar, and this paper presents the successful results that were obtained during the 2016 field campaign in Payerne (Switzerland). Despite a relatively poor Sun-to-Noise ratio, the "small" (~0.4 dB) amplitude of the slowly varying emission was captured and reproduced; the standard deviation of the difference between the radar and the reference was ~0.2 dB. The absolute calibration of the vertical and horizontal receivers was satisfactory. After the noise subtraction and atmospheric correction, the mean difference was close to 0 dB. PMID:28531164
Discrete distributed strain sensing of intelligent structures
NASA Technical Reports Server (NTRS)
Anderson, Mark S.; Crawley, Edward F.
1992-01-01
Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.
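The integration step described above can be sketched under simplifying assumptions of my own: a tip-loaded cantilever, point strain sensors converted to curvature, and trapezoidal integration of curvature twice to estimate tip deflection:

```python
import numpy as np

L, EI, F = 1.0, 50.0, 10.0                 # length [m], stiffness [N m^2], tip load [N]
x = np.linspace(0.0, L, 5)                 # five point-sensor stations along the beam
kappa = F * (L - x) / EI                   # curvature inferred from bending strain

# First trapezoidal integration: curvature -> slope (clamped root: slope = 0).
slope = np.concatenate([[0.0],
    np.cumsum(np.diff(x) * (kappa[1:] + kappa[:-1]) / 2)])
# Second trapezoidal integration: slope -> deflection (clamped root: w = 0).
w = np.concatenate([[0.0],
    np.cumsum(np.diff(x) * (slope[1:] + slope[:-1]) / 2)])

exact_tip = F * L**3 / (3 * EI)            # analytic tip deflection for comparison
print(w[-1], exact_tip)
```

With only five stations the trapezoid rule slightly underestimates the exact tip deflection, which is the kind of integration-rule error the numerical simulation in the abstract quantifies.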
NASA Astrophysics Data System (ADS)
Karaali, S.; Gökçe, E. Yaz; Bilir, S.; Güçtekin, S. Tunçel
2014-07-01
We present two absolute magnitude calibrations for dwarfs based on colour-magnitude diagrams of Galactic clusters. The combination of the Mg absolute magnitudes of the dwarf fiducial sequences of the clusters M92, M13, M5, NGC 2420, M67, and NGC 6791 with the corresponding metallicities provides an absolute magnitude calibration for a given (g - r)0 colour. The calibration is defined in the colour interval 0.25 ≤ (g - r)0 ≤ 1.25 mag and covers the metallicity interval -2.15 ≤ [Fe/H] ≤ +0.37 dex. The absolute magnitude residuals obtained by applying the procedure to another set of Galactic clusters lie in the interval -0.15 ≤ ΔMg ≤ +0.12 mag. The mean and standard deviation of the residuals are ⟨ΔMg⟩ = -0.002 and σ = 0.065 mag, respectively. The calibration of the MJ absolute magnitude in terms of metallicity is carried out using the fiducial sequences of the clusters M92, M13, 47 Tuc, NGC 2158, and NGC 6791. It is defined in the colour interval 0.90 ≤ (V - J)0 ≤ 1.75 mag and covers the same metallicity interval as the Mg calibration. The absolute magnitude residuals obtained by applying the procedure to the cluster M5 ([Fe/H] = -1.40 dex) and to 46 solar-metallicity field stars (-0.45 ≤ [Fe/H] ≤ +0.35 dex) lie in the interval -0.29 to +0.35 mag. However, the range of 87% of them is rather shorter, -0.20 ≤ ΔMJ ≤ +0.20 mag. The mean and standard deviation of all residuals are ⟨ΔMJ⟩ = 0.05 and σ = 0.13 mag, respectively. The derived relations are applicable to stars older than 4 Gyr for the Mg calibration, and older than 2 Gyr for the MJ calibration; the cited limits are the ages of the youngest calibration clusters in the two systems.
Characterizing Accuracy and Precision of Glucose Sensors and Meters
2014-01-01
There is a need for a method to describe the precision and accuracy of glucose measurement as a smooth continuous function of glucose level rather than as a step function for a few discrete ranges of glucose. We propose and illustrate a method to generate a "Glucose Precision Profile" showing absolute relative deviation (ARD) and/or %CV versus glucose level to better characterize measurement errors at any glucose level. We examine the relationship between glucose measured by test and comparator methods using linear regression. We examine bias by plotting deviation = (test − comparator) versus glucose level. We compute the deviation, absolute deviation (AD), ARD, and standard deviation (SD) for each data pair. We utilize curve-smoothing procedures to minimize the effects of random sampling variability and to facilitate identification and display of the underlying relationships between ARD or %CV and glucose level. AD, ARD, SD, and %CV display smooth continuous relationships versus glucose level. Estimates of MARD and %CV are subject to relatively large errors in the hypoglycemic range, due in part to a markedly nonlinear relationship with glucose level and in part to the limited number of observations in that range. The curvilinear relationships of ARD and %CV versus glucose level are helpful when characterizing and comparing the precision and accuracy of glucose sensors and meters. PMID:25037194
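A minimal sketch of the per-pair metrics named above (deviation, AD, ARD) and the MARD aggregate, using invented paired readings:

```python
import statistics

# Hypothetical (test, comparator) glucose pairs in mg/dL.
pairs = [(100, 95), (180, 171), (60, 70), (250, 240)]

deviations = [t - c for t, c in pairs]          # signed bias per pair
ad = [abs(d) for d in deviations]               # absolute deviation
ard = [abs(t - c) / c * 100 for t, c in pairs]  # absolute relative deviation, %
mard = statistics.fmean(ard)                    # mean ARD over all pairs
print(round(mard, 2))  # → 7.24
```

The profile proposed in the abstract would smooth ARD against the comparator glucose level rather than collapsing it to a single MARD value.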
1986-03-01
While the standard deviation and variance are absolute measures of dispersion, a relative measure of dispersion can also be computed. This measure refers to the closeness of fit between the estimates obtained from the sample and the true population value.
ESTIMATION OF RADIOACTIVE CALCIUM-45 BY LIQUID SCINTILLATION COUNTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lutwak, L.
1959-03-01
A liquid scintillation counting method is developed for determining radioactive calcium-45 in biological materials. The calcium-45 is extracted, concentrated, and dissolved in absolute ethyl alcohol to which is added 0.4% diphenyloxazol in toluene. Counting efficiency is about 65 percent with a standard deviation of 7.36 percent. (auth)
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2012 CFR
2012-07-01
... least squares regression β ratio of diameters meter per meter m/m 1 β atomic oxygen to carbon ratio mole... consumption gram per kilowatt hour g/(kW·hr) g·3.6−1·106·m−2·kg·s2 F F-test statistic f frequency hertz Hz s−1... standard deviation S Sutherland constant kelvin K K SEE standard estimate of error T absolute temperature...
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2013 CFR
2013-07-01
... least squares regression β ratio of diameters meter per meter m/m 1 β atomic oxygen to carbon ratio mole... consumption gram per kilowatt hour g/(kW·hr) g·3.6−1·106·m−2·kg·s2 F F-test statistic f frequency hertz Hz s−1... standard deviation S Sutherland constant kelvin K K SEE standard estimate of error T absolute temperature...
Delmonico, Matthew J; Kostek, Matthew C; Doldo, Neil A; Hand, Brian D; Walsh, Sean; Conway, Joan M; Carignan, Craig R; Roth, Stephen M; Hurley, Ben F
2007-02-01
The alpha-actinin-3 (ACTN3) R577X polymorphism has been associated with muscle power performance in cross-sectional studies. We examined baseline knee extensor concentric peak power (PP) and PP change with approximately 10 weeks of unilateral knee extensor strength training (ST) using air-powered resistance machines in 71 older men (65 [standard deviation = 8] years) and 86 older women (64 [standard deviation = 9] years). At baseline in women, the XX genotype group had an absolute (same resistance) PP that was higher than the RR (p =.005) and RX genotype groups (p =.02). The women XX group also had a relative (70% of one-repetition maximum [1-RM]) PP that was higher than that in the RR (p =.002) and RX groups (p =.008). No differences in baseline absolute or relative PP were observed between ACTN3 genotype groups in men. In men, absolute PP change with ST in the RR (n = 16) group approached a significantly higher value than in the XX group (n = 9; p =.07). In women, relative PP change with ST in the RR group (n = 16) was higher than in the XX group (n = 17; p =.02). The results indicate that the ACTN3 R577X polymorphism influences the response of quadriceps muscle power to ST in older adults.
The Effectiveness of a Rater Training Booklet in Increasing Accuracy of Performance Ratings
1988-04-01
subjects’ ratings were compared for accuracy. The dependent measure was the absolute deviation score of each individual’s rating from the "true score". Findings: The absolute deviation scores of each individual’s ratings from the "true score" provided by subject matter experts were analyzed
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and time-series prediction model. The monthly variation of water quality standards has been used to compare statistical mean, median, mode, standard deviation, kurtosis, skewness, coefficient of variation at Yamuna River. Model validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted value and confidence limits. Using auto regressive integrated moving average model, future water quality parameters values have been estimated. It is observed that predictive model is useful at 95 % confidence limits and curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, water temperature (WT); leptokurtic for chemical oxygen demand, biochemical oxygen demand. Also, it is observed that predicted series is close to the original series which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization /United States Environmental Protection Agency, and thus water is not fit for drinking, agriculture and industrial use.
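Several of the validation metrics listed above (root mean square error, mean absolute error, mean absolute percentage error, maximum absolute error) can be sketched in a few lines; the function name and example values below are illustrative, not from the paper:

```python
import numpy as np

def validation_metrics(observed, predicted):
    """A few of the model-validation metrics named in the abstract.

    Minimal sketch; it does not cover the information criteria or
    Ljung-Box analysis also used by the authors.
    """
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    err = obs - pred
    rmse = np.sqrt(np.mean(err ** 2))           # root mean square error
    mae = np.mean(np.abs(err))                  # mean absolute error
    mape = 100.0 * np.mean(np.abs(err / obs))   # mean absolute percentage error
    max_ae = np.max(np.abs(err))                # maximum absolute error
    return {"rmse": rmse, "mae": mae, "mape": mape, "max_ae": max_ae}
```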
A FORMULA FOR HUMAN PAROTID FLUID COLLECTED WITHOUT EXOGENOUS STIMULATION.
Parotid fluid was collected from 4,589 systemically healthy males between 17 and 22 years of age. Collection devices were placed with an absolute...secretion of the parotid gland. For all 4,589 subjects from the 8 experiments the mean rate of flow was 0.040 ml./minute with an average standard deviation of
Measuring (subglacial) bedform orientation, length, and longitudinal asymmetry - Method assessment.
Jorge, Marco G; Brennand, Tracy A
2017-01-01
Geospatial analysis software provides a range of tools that can be used to measure landform morphometry. Often, a metric can be computed with different techniques that may give different results. This study is an assessment of 5 different methods for measuring longitudinal, or streamlined, subglacial bedform morphometry: orientation, length and longitudinal asymmetry, all of which require defining a longitudinal axis. The methods use the standard deviational ellipse (not previously applied in this context), the longest straight line fitting inside the bedform footprint (2 approaches), the minimum-size footprint-bounding rectangle, and Euler's approximation. We assess how well these methods replicate morphometric data derived from a manually mapped (visually interpreted) longitudinal axis, which, though subjective, is the most typically used reference. A dataset of 100 subglacial bedforms covering the size and shape range of those in the Puget Lowland, Washington, USA is used. For bedforms with elongation > 5, deviations from the reference values are negligible for all methods but Euler's approximation (length). For bedforms with elongation < 5, most methods had small mean absolute error (MAE) and median absolute deviation (MAD) for all morphometrics and thus can be confidently used to characterize the central tendencies of their distributions. However, some methods are better than others. The least precise methods are the ones based on the longest straight line and Euler's approximation; using these for statistical dispersion analysis is discouraged. Because the standard deviational ellipse method is relatively shape invariant and closely replicates the reference values, it is the recommended method. Speculatively, this study may also apply to negative-relief, and fluvial and aeolian bedforms.
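The standard deviational ellipse method recommended above derives a longitudinal axis from the second moments of the footprint coordinates. A generic sketch via covariance eigenvectors; the GIS implementation used in the study may differ in detail:

```python
import numpy as np

def sde_orientation(x, y):
    """Long-axis orientation (degrees from the x-axis, in [0, 180))
    of the standard deviational ellipse of a set of footprint points.

    Sketch of the general technique: the ellipse's major axis is the
    eigenvector of the coordinate covariance with the largest
    eigenvalue.
    """
    pts = np.column_stack([x, y]).astype(float)
    pts -= pts.mean(axis=0)                    # center on the mean point
    cov = np.cov(pts, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # eigh: ascending eigenvalues
    major = vecs[:, -1]                        # direction of largest variance
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0
```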
Donald, William A; Leib, Ryan D; O'Brien, Jeremy T; Williams, Evan R
2009-06-08
Solution-phase, half-cell potentials are measured relative to other half-cell potentials, resulting in a thermochemical ladder that is anchored to the standard hydrogen electrode (SHE), which is assigned an arbitrary value of 0 V. A new method for measuring the absolute SHE potential is demonstrated in which gaseous nanodrops containing divalent alkaline-earth or transition-metal ions are reduced by thermally generated electrons. Energies for the reactions 1) M(H(2)O)(24)(2+)(g) + e(-)(g)-->M(H(2)O)(24)(+)(g) and 2) M(H(2)O)(24)(2+)(g) + e(-)(g)-->MOH(H(2)O)(23)(+)(g) + H(g) and the hydrogen atom affinities of MOH(H(2)O)(23)(+)(g) are obtained from the number of water molecules lost through each pathway. From these measurements on clusters containing nine different metal ions and known thermochemical values that include solution hydrolysis energies, an average absolute SHE potential of +4.29 V vs. e(-)(g) (standard deviation of 0.02 V) and a real proton solvation free energy of -265 kcal mol(-1) are obtained. With this method, the absolute SHE potential can be obtained from a one-electron reduction of nanodrops containing divalent ions that are not observed to undergo one-electron reduction in aqueous solution.
Validation of ozone intensities at 10 μm with THz spectrometry
NASA Astrophysics Data System (ADS)
Drouin, Brian J.; Crawford, Timothy J.; Yu, Shanshan
2017-12-01
This manuscript reports an effort to improve the absolute accuracy of ozone intensities in the 10 μm region via a transfer of the precision of the rotational dipole moment onto the infrared measurement. The approach determines the ozone mixing ratio through alternately measuring seven pure rotation ozone lines from 692 to 779 GHz. A multispectrum fitting technique was employed. The results determine the column with absolute accuracy of 1.5% and the intensities of infrared transitions measured at this accuracy reproduce the recommended values to within a standard deviation of 2.8%.
File Carving and Malware Identification Algorithms Applied to Firmware Reverse Engineering
2013-03-21
...consider a byte value rate-of-change frequency metric [32]. Their system calculates the absolute value of the distance between all consecutive bytes, then...the rate-of-change means and standard deviations. Karresand and Shahmehri use the same distance metric for both byte value frequency and rate-of-change
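The byte-value rate-of-change metric described above can be sketched as follows; this is a generic illustration of the idea, not Karresand and Shahmehri's exact implementation:

```python
import numpy as np

def rate_of_change_stats(data: bytes):
    """Rate-of-change metric for a byte sequence: the absolute
    difference between consecutive byte values, summarized by its
    mean and standard deviation.
    """
    # astype(int) avoids uint8 wrap-around when differencing
    vals = np.frombuffer(data, dtype=np.uint8).astype(int)
    roc = np.abs(np.diff(vals))      # |distance| between consecutive bytes
    return roc.mean(), roc.std()
```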
Development of a Dual-Pump CARS System for Measurements in a Supersonic Combusting Free Jet
NASA Technical Reports Server (NTRS)
Magnotti, Gaetano; Cutler, Andrew D.; Danehy, Paul
2012-01-01
This work describes the development of a dual-pump CARS system for simultaneous measurements of temperature and absolute mole fraction of N2, O2 and H2 in a laboratory scale supersonic combusting free jet. Changes to the experimental set-up and the data analysis to improve the quality of the measurements in this turbulent, high-temperature reacting flow are described. The accuracy and precision of the instrument have been determined using data collected in a Hencken burner flame. For temperature above 800 K, errors in absolute mole fraction are within 1.5, 0.5, and 1% of the total composition for N2, O2 and H2, respectively. Estimated standard deviations based on 500 single shots are between 10 and 65 K for the temperature, between 0.5 and 1.7% of the total composition for O2, and between 1.5 and 3.4% for N2. The standard deviation of H2 is 10% of the average measured mole fraction. Results obtained in the jet with and without combustion are illustrated, and the capabilities and limitations of the dual-pump CARS instrument discussed.
NASA Astrophysics Data System (ADS)
Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin
2018-04-01
Multiresolution-based methods, such as wavelet and Contourlet, are commonly used for image fusion. This work presents a new image fusion framework utilizing area-based standard deviation in the dual tree Contourlet transform domain. Firstly, the pre-registered source images are decomposed with the dual tree Contourlet transform, yielding low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
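The "max-absolute" rule for high-pass sub-bands simply keeps, at each position, the coefficient with the larger magnitude. A generic sketch, applied here to plain arrays rather than actual dual tree Contourlet sub-bands:

```python
import numpy as np

def fuse_highpass_max_abs(coeff_a, coeff_b):
    """Max-absolute fusion rule for two high-pass sub-bands:
    element-wise, keep whichever coefficient has larger |value|.
    """
    a = np.asarray(coeff_a, dtype=float)
    b = np.asarray(coeff_b, dtype=float)
    return np.where(np.abs(a) >= np.abs(b), a, b)
```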
A Simple Model Predicting Individual Weight Change in Humans
Thomas, Diana M.; Martin, Corby K.; Heymsfield, Steven; Redman, Leanne M.; Schoeller, Dale A.; Levine, James A.
2010-01-01
Excessive weight in adults is a national concern, with over 2/3 of the US population deemed overweight. Because being overweight has been correlated with numerous diseases such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one-dimensional differential equation model of weight change based on the energy balance equation, paired with an algebraic relationship between fat-free mass and fat mass derived from a large nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants’ weight change by comparing model estimates of final weight to data from two recent underfeeding studies and one overfeeding study. The mean absolute error and standard deviation between model predictions and observed measurements of final weights are less than 1.8 ± 1.3 kg for the underfeeding studies and 2.5 ± 1.6 kg for the overfeeding study. Comparison of the model predictions to other one-dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60%, substantiating reliability in individual weight change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and for determining individual dietary adherence during weight change studies. PMID:24707319
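A one-dimensional energy-balance model of the general kind described can be sketched with a simple Euler integration of dW/dt = (intake − expenditure(W)) / ρ. All parameter values and the linear expenditure form below are illustrative placeholders, not the constants or relationships fitted in the paper:

```python
def simulate_weight(w0_kg, intake_kcal, days, rho=7700.0, k=22.0, c=500.0):
    """Euler integration of a generic one-dimensional energy-balance
    model: dW/dt = (intake - expenditure(W)) / rho.

    Assumptions (not from the paper): rho = energy density of tissue
    in kcal/kg, and expenditure = k*W + c kcal/day.
    """
    w = float(w0_kg)
    for _ in range(days):
        expenditure = k * w + c               # assumed linear expenditure
        w += (intake_kcal - expenditure) / rho
    return w
```

At intake equal to maintenance (k·W + c) the weight is constant; below it, weight declines toward a new equilibrium.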
Pyregov, A V; Ovechkin, A Iu; Petrov, S V
2012-01-01
Results of a prospective randomized comparative study of two total hemoglobin estimation methods are presented: laboratory tests and a continuous noninvasive technique using multiwave spectrophotometry on the Masimo Rainbow SET. The research was carried out in two stages: at the first stage (gynecology), 67 patients were included, and at the second stage (obstetrics), 44 patients during and after Cesarean section. The standard deviation of the noninvasive total hemoglobin estimate from the absolute (invasive) values was 7.2 and 4.1%, and the standard deviation within a sample was 5.2 and 2.7%, in gynecologic operations and surgical delivery respectively, confirming the absence of reliable differences between the methods. The method of continuous noninvasive total hemoglobin estimation with multiwave spectrophotometry on the Masimo Rainbow SET technology can be recommended for use in obstetrics and gynecology.
Nilsonne, A; Sundberg, J; Ternström, S; Askenfelt, A
1988-02-01
A method of measuring the rate of change of fundamental frequency has been developed in an effort to find acoustic voice parameters that could be useful in psychiatric research. A minicomputer program was used to extract seven parameters from the fundamental frequency contour of tape-recorded speech samples: (1) the average rate of change of the fundamental frequency and (2) its standard deviation, (3) the absolute rate of fundamental frequency change, (4) the total reading time, (5) the percent pause time of the total reading time, (6) the mean, and (7) the standard deviation of the fundamental frequency distribution. The method is demonstrated on (a) a material consisting of synthetic speech and (b) voice recordings of depressed patients who were examined during depression and after improvement.
Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error
NASA Astrophysics Data System (ADS)
Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi
2017-12-01
Prediction using a forecasting method is one of the most important tasks for an organization. The selection of an appropriate forecasting method is important, but the percentage error of a method is more important if decision makers are to adopt the right one. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least squares method resulted in a percentage of 9.77%, and it was decided that the least squares method works for time series and trend data.
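The two error measures named in the title can be sketched as follows (function names and example values are illustrative):

```python
import numpy as np

def mad(actual, forecast):
    """Mean absolute deviation of forecast errors."""
    a = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    return np.mean(np.abs(a - f))

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    a = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((a - f) / a))
```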
Obermaier, Karin; Schmelzeisen-Redeker, Günther; Schoemaker, Michael; Klötzer, Hans-Martin; Kirchsteiger, Harald; Eikmeier, Heino; del Re, Luigi
2013-07-01
Even though a Clinical and Laboratory Standards Institute proposal exists on the design of studies and performance criteria for continuous glucose monitoring (CGM) systems, it has not yet led to a consistent evaluation of different systems, as no consensus has been reached on the reference method to evaluate them or on acceptance levels. As a consequence, performance assessment of CGM systems tends to be inconclusive, and a comparison of the outcome of different studies is difficult. Published information and available data (as presented in this issue of Journal of Diabetes Science and Technology by Freckmann and coauthors) are used to assess the suitability of several frequently used methods [International Organization for Standardization, continuous glucose error grid analysis, mean absolute relative deviation (MARD), precision absolute relative deviation (PARD)] when assessing performance of CGM systems in terms of accuracy and precision. The combined use of MARD and PARD seems to allow for better characterization of sensor performance. The use of different quantities for calibration and evaluation, e.g., capillary blood using a blood glucose (BG) meter versus venous blood using a laboratory measurement, introduces an additional error source. Using BG values measured in more or less large intervals as the only reference leads to a significant loss of information in comparison with the continuous sensor signal and possibly to an erroneous estimation of sensor performance during swings. Both can be improved using data from two identical CGM sensors worn by the same patient in parallel. Evaluation of CGM performance studies should follow an identical study design, including sufficient swings in glycemia. At least a part of the study participants should wear two identical CGM sensors in parallel. All data available should be used for evaluation, both by MARD and PARD, a good PARD value being a precondition to trust a good MARD value. 
Results should be analyzed and presented separately for clinically different categories, e.g., hypoglycemia, exercise, or night and day. © 2013 Diabetes Technology Society.
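MARD and PARD as discussed above can be sketched as follows. The use of the paired-sensor mean as the PARD denominator is a common convention assumed here, not quoted from the article:

```python
import numpy as np

def mard(cgm, reference):
    """Mean absolute relative deviation of CGM readings vs. a
    reference blood glucose measurement, in percent (accuracy)."""
    c = np.asarray(cgm, dtype=float)
    r = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(c - r) / r)

def pard(sensor_a, sensor_b):
    """Precision absolute relative deviation between two identical
    sensors worn in parallel, in percent (precision). Denominator is
    the mean of the paired readings (assumed convention)."""
    a = np.asarray(sensor_a, dtype=float)
    b = np.asarray(sensor_b, dtype=float)
    return 100.0 * np.mean(np.abs(a - b) / ((a + b) / 2.0))
```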
Foster, Ken; Anwar, Nasim; Pogue, Rhea; Morré, Dorothy M.; Keenan, T. W.; Morré, D. James
2003-01-01
Seasonal decomposition analyses were applied to the statistical evaluation of an oscillating activity for a plasma membrane NADH oxidase activity with a temperature-compensated period of 24 min. The decomposition fits were used to validate the cyclic oscillatory pattern. Three measures, the mean absolute percentage error (MAPE), a measure of the periodic oscillation; the mean absolute deviation (MAD), a measure of the absolute average deviations from the fitted values; and the mean squared deviation (MSD), a measure of the deviation from the fitted values, plus R-squared and the Henriksson-Merton p value, were used to evaluate accuracy. Decomposition was carried out by fitting a trend line to the data, then detrending the data, if necessary, by subtracting the trend component. The data, with or without detrending, were then smoothed by subtracting a centered moving average of length equal to the period length determined by Fourier analysis. Finally, the time series were decomposed into cyclic and error components. The findings not only validate the periodic nature of the major oscillations but suggest, as well, that the minor intervening fluctuations also recur within each period with a reproducible pattern of recurrence. PMID:19330112
Inter-laboratory validation of bioaccessibility testing for metals.
Henderson, Rayetta G; Verougstraete, Violaine; Anderson, Kim; Arbildua, José J; Brock, Thomas O; Brouwers, Tony; Cappellini, Danielle; Delbeke, Katrien; Herting, Gunilla; Hixon, Greg; Odnevall Wallinder, Inger; Rodriguez, Patricio H; Van Assche, Frank; Wilrich, Peter; Oller, Adriana R
2014-10-01
Bioelution assays are fast, simple alternatives to in vivo testing. In this study, the intra- and inter-laboratory variability in bioaccessibility data generated by bioelution tests were evaluated in synthetic fluids relevant to oral, inhalation, and dermal exposure. Using one defined protocol, five laboratories measured metal release from cobalt oxide, cobalt powder, copper concentrate, Inconel alloy, leaded brass alloy, and nickel sulfate hexahydrate. Standard deviations of repeatability (sr) and reproducibility (sR) were used to evaluate the intra- and inter-laboratory variability, respectively. Examination of the sR:sr ratios demonstrated that, while gastric and lysosomal fluids had reasonably good reproducibility, other fluids did not show as good concordance between laboratories. Relative standard deviation (RSD) analysis showed more favorable reproducibility outcomes for some data sets; overall results varied more between- than within-laboratories. RSD analysis of sr showed good within-laboratory variability for all conditions except some metals in interstitial fluid. In general, these findings indicate that absolute bioaccessibility results in some biological fluids may vary between different laboratories. However, for most applications, measures of relative bioaccessibility are needed, diminishing the requirement for high inter-laboratory reproducibility in absolute metal releases. The inter-laboratory exercise suggests that the degrees of freedom within the protocol need to be addressed. Copyright © 2014 Elsevier Inc. All rights reserved.
Corsica: A Multi-Mission Absolute Calibration Site
NASA Astrophysics Data System (ADS)
Bonnefond, P.; Exertier, P.; Laurain, O.; Guinle, T.; Femenias, P.
2013-09-01
In collaboration with the CNES and NASA oceanographic projects (TOPEX/Poseidon and Jason), the OCA (Observatoire de la Côte d'Azur) has developed a verification site in Corsica since 1996, operational since 1998. CALibration/VALidation embraces a wide variety of activities, ranging from the interpretation of information from internal-calibration modes of the sensors to validation of the fully corrected estimates of the reflector heights using in situ data. Corsica is now, like the Harvest platform (NASA side) [14], an operating calibration site able to support continuous monitoring with a high level of accuracy: a 'point calibration' which yields instantaneous bias estimates with a 10-day repeatability of 30 mm (standard deviation) and mean errors of 4 mm (standard error). For a 35-day repeatability (ERS, Envisat), due to a smaller time series, the standard error is about double (about 7 mm). In this paper, we will present updated results of the absolute Sea Surface Height (SSH) biases for TOPEX/Poseidon (T/P), Jason-1, Jason-2, ERS-2 and Envisat.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
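The robust measures of scale mentioned above (median absolute deviation, robust standard deviation) can be sketched as follows; whether PyForecastTools uses exactly these definitions is an assumption:

```python
import numpy as np

def median_abs_deviation(x):
    """Median absolute deviation from the median (MAD)."""
    x = np.asarray(x, dtype=float)
    return np.median(np.abs(x - np.median(x)))

def robust_std(x, c=1.4826):
    """MAD rescaled to estimate the standard deviation.

    The 1.4826 factor is the usual consistency constant for normally
    distributed data; it makes the estimator insensitive to outliers
    that would inflate the ordinary standard deviation.
    """
    return c * median_abs_deviation(x)
```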
Note: An absolute X-Y-Θ position sensor using a two-dimensional phase-encoded binary scale
NASA Astrophysics Data System (ADS)
Kim, Jong-Ahn; Kim, Jae Wan; Kang, Chu-Shik; Jin, Jonghan
2018-04-01
This Note presents a new absolute X-Y-Θ position sensor for measuring the planar motion of a precision multi-axis stage system. By analyzing the rotated image of a two-dimensional (2D) phase-encoded binary scale, the absolute 2D position values at two separated points were obtained, and the absolute X-Y-Θ position could be calculated by combining these values. The sensor head was constructed using a board-level camera, a light-emitting diode light source, an imaging lens, and a cube beam-splitter. To obtain uniform intensity profiles from the vignetted scale image, we selected the averaging directions deliberately, and higher resolution in the angle measurement could be achieved by increasing the allowable offset size. The performance of a prototype sensor was evaluated with respect to resolution, nonlinearity, and repeatability. The sensor could resolve 25 nm linear and 0.001° angular displacements clearly, and the standard deviations were less than 18 nm when 2D grid positions were measured repeatedly.
The NIST Detector-Based Luminous Intensity Scale
Cromer, C. L.; Eppeldauer, G.; Hardis, J. E.; Larason, T. C.; Ohno, Y.; Parr, A. C.
1996-01-01
The Système International des Unités (SI) base unit for photometry, the candela, has been realized by using absolute detectors rather than absolute sources. This change in method permits luminous intensity calibrations of standard lamps to be carried out with a relative expanded uncertainty (coverage factor k = 2, and thus a 2 standard deviation estimate) of 0.46 %, almost a factor-of-two improvement. A group of eight reference photometers has been constructed with silicon photodiodes, matched with filters to mimic the spectral luminous efficiency function for photopic vision. The wide dynamic range of the photometers aid in their calibration. The components of the photometers were carefully measured and selected to reduce the sources of error and to provide baseline data for aging studies. Periodic remeasurement of the photometers indicate that a yearly recalibration is required. The design, characterization, calibration, evaluation, and application of the photometers are discussed. PMID:27805119
Zhang, You; Yin, Fang-Fang; Ren, Lei
2015-08-01
Lung cancer treatment is susceptible to treatment errors caused by interfractional anatomical and respirational variations of the patient. On-board treatment dose verification is especially critical for the lung stereotactic body radiation therapy due to its high fractional dose. This study investigates the feasibility of using cone-beam (CB)CT images estimated by a motion modeling and free-form deformation (MM-FD) technique for on-board dose verification. Both digital and physical phantom studies were performed. Various interfractional variations featuring patient motion pattern change, tumor size change, and tumor average position change were simulated from planning CT to on-board images. The doses calculated on the planning CT (planned doses), the on-board CBCT estimated by MM-FD (MM-FD doses), and the on-board CBCT reconstructed by the conventional Feldkamp-Davis-Kress (FDK) algorithm (FDK doses) were compared to the on-board dose calculated on the "gold-standard" on-board images (gold-standard doses). The absolute deviations of minimum dose (ΔDmin), maximum dose (ΔDmax), and mean dose (ΔDmean), and the absolute deviations of prescription dose coverage (ΔV100%) were evaluated for the planning target volume (PTV). In addition, 4D on-board treatment dose accumulations were performed using 4D-CBCT images estimated by MM-FD in the physical phantom study. The accumulated doses were compared to those measured using optically stimulated luminescence (OSL) detectors and radiochromic films. Compared with the planned doses and the FDK doses, the MM-FD doses matched much better with the gold-standard doses. For the digital phantom study, the average (± standard deviation) ΔDmin, ΔDmax, ΔDmean, and ΔV100% (values normalized by the prescription dose or the total PTV) between the planned and the gold-standard PTV doses were 32.9% (±28.6%), 3.0% (±2.9%), 3.8% (±4.0%), and 15.4% (±12.4%), respectively. 
The corresponding values of FDK PTV doses were 1.6% (±1.9%), 1.2% (±0.6%), 2.2% (±0.8%), and 17.4% (±15.3%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.3% (±0.2%), 0.9% (±0.6%), 0.6% (±0.4%), and 1.0% (±0.8%), respectively. Similarly, for the physical phantom study, the average ΔDmin, ΔDmax, ΔDmean, and ΔV100% of planned PTV doses were 38.1% (±30.8%), 3.5% (±5.1%), 3.0% (±2.6%), and 8.8% (±8.0%), respectively. The corresponding values of FDK PTV doses were 5.8% (±4.5%), 1.6% (±1.6%), 2.0% (±0.9%), and 9.3% (±10.5%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.4% (±0.8%), 0.8% (±1.0%), 0.5% (±0.4%), and 0.8% (±0.8%), respectively. For the 4D dose accumulation study, the average (± standard deviation) absolute dose deviation (normalized by local doses) between the accumulated doses and the OSL measured doses was 3.3% (±2.7%). The average gamma index (3%/3 mm) between the accumulated doses and the radiochromic film measured doses was 94.5% (±2.5%). MM-FD estimated 4D-CBCT enables accurate on-board dose calculation and accumulation for lung radiation therapy. It can potentially be valuable for treatment quality assessment and adaptive radiation therapy.
Results of the first North American comparison of absolute gravimeters, NACAG-2010
Schmerge, David; Francis, Olvier; Henton, J.; Ingles, D.; Jones, D.; Kennedy, Jeffrey R.; Krauterbluth, K.; Liard, J.; Newell, D.; Sands, R.; Schiel, J.; Silliker, J.; van Westrum, D.
2012-01-01
The first North American Comparison of Absolute Gravimeters (NACAG-2010) was hosted by the National Oceanic and Atmospheric Administration at its newly renovated Table Mountain Geophysical Observatory (TMGO) north of Boulder, Colorado, in October 2010. NACAG-2010 and the renovation of TMGO are part of the National Geodetic Survey's (NGS) GRAV-D project (Gravity for the Redefinition of the American Vertical Datum). Nine absolute gravimeters from three countries participated in the comparison. Before the comparison, the gravimeter operators agreed to a protocol describing the strategy to measure, calculate, and present the results. Nine sites were used to measure the free-fall acceleration, g. Each gravimeter measured the value of g at a subset of three of the sites, for a total of 27 g-values for the comparison. The absolute gravimeters agree with one another with a standard deviation of 1.6 µGal (1 Gal = 1 cm s⁻²). The minimum and maximum offsets are -2.8 and 2.7 µGal. This excellent agreement can be attributed to multiple factors, including gravimeters in good working order, experienced operators, a quiet observatory, and the short duration of the experiment. These results can be used to standardize gravity surveys internationally.
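The comparison statistics quoted above (spread and extreme offsets of the instruments) are elementary to compute; for example, with hypothetical gravimeter offsets (made-up numbers, not the NACAG-2010 results):

```python
import statistics

# Hypothetical offsets of nine gravimeters from the comparison reference
# value, in µGal (illustrative values only)
offsets = [-2.8, -1.5, -0.4, 0.0, 0.3, 0.9, 1.4, 2.0, 2.7]

spread = statistics.stdev(offsets)   # sample standard deviation
print(f"min={min(offsets):+.1f}  max={max(offsets):+.1f}  sd={spread:.1f} µGal")
```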
Laser frequency stabilization using a commercial wavelength meter
NASA Astrophysics Data System (ADS)
Couturier, Luc; Nosske, Ingo; Hu, Fachao; Tan, Canzhu; Qiao, Chang; Jiang, Y. H.; Chen, Peng; Weidemüller, Matthias
2018-04-01
We present the characterization of a laser frequency stabilization scheme using a state-of-the-art wavelength meter based on solid Fizeau interferometers. For a frequency-doubled Ti:sapphire laser operated at 461 nm, an absolute Allan deviation below 10⁻⁹ with a standard deviation of 1 MHz over 10 h is achieved. Using this laser for cooling and trapping of strontium atoms, the wavemeter scheme provides excellent stability in single-channel operation. Multi-channel operation with a multimode fiber switch results in fluctuations of the atomic fluorescence correlated with residual frequency excursions of the laser. The wavemeter-based frequency stabilization scheme can be applied to a wide range of atoms and molecules for laser spectroscopy, cooling, and trapping.
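The Allan deviation cited above is a standard measure of frequency stability. A textbook non-overlapping implementation, run here on synthetic white-noise data rather than the laser's record:

```python
import numpy as np

def allan_deviation(y, m=1):
    """Non-overlapping Allan deviation of fractional-frequency samples y
    at averaging factor m (textbook definition, for illustration only)."""
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)   # m-sample averages
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1e-9, size=100_000)   # white frequency noise near the 1e-9 level
# For white frequency noise the Allan deviation falls off as 1/sqrt(m)
print(allan_deviation(y, m=1), allan_deviation(y, m=100))
```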
Absolute Parameters for the F-type Eclipsing Binary BW Aquarii
NASA Astrophysics Data System (ADS)
Maxted, P. F. L.
2018-05-01
BW Aqr is a bright eclipsing binary star containing a pair of F7V stars. The absolute parameters of this binary (masses, radii, etc.) are known to good precision so they are often used to test stellar models, particularly in studies of convective overshooting. ... Maxted & Hutcheon (2018) analysed the Kepler K2 data for BW Aqr and noted that it shows variability between the eclipses that may be caused by tidally induced pulsations. ... Table 1 shows the absolute parameters for BW Aqr derived from an improved analysis of the Kepler K2 light curve plus the RV measurements from both Imbert (1979) and Lester & Gies (2018). ... The values in Table 1 with their robust error estimates from the standard deviation of the mean are consistent with the values and errors from Maxted & Hutcheon (2018) based on the PPD calculated using emcee for a fit to the entire K2 light curve.
Estimating error statistics for Chambon-la-Forêt observatory definitive data
NASA Astrophysics Data System (ADS)
Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly
2017-08-01
We propose a new algorithm for calibrating definitive observatory data, with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set up as a large, weakly non-linear inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparison of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. The baselines obtained therefore do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week, i.e. within the daily-to-weekly measurement frequency recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that the error statistics are less favourable when the latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to obtain proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even at isolated sites like CLF observatory there are very localised signals, spanning a broad range of temporal frequencies, that can be as large as 1 nT. The SDs reported here encompass signals with spatial scales of a few hundred metres and periods of less than a day.
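An AR(1) prior of the kind mentioned above yields baselines that fluctuate from step to step rather than varying smoothly. A minimal simulation sketch (the coefficient and noise level below are made-up illustrative values, not the observatory's):

```python
import random

def ar1_series(n, phi=0.9, sigma=0.05, seed=5):
    """Simulate an AR(1) process b[t] = phi*b[t-1] + e[t], with
    e[t] ~ N(0, sigma^2). Parameters are illustrative only."""
    rng = random.Random(seed)
    b = [0.0]
    for _ in range(n - 1):
        b.append(phi * b[-1] + rng.gauss(0.0, sigma))
    return b

baseline = ar1_series(1000)
print(len(baseline), baseline[:3])
```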
NASA Astrophysics Data System (ADS)
Stier, P.; Schutgens, N. A. J.; Bellouin, N.; Bian, H.; Boucher, O.; Chin, M.; Ghan, S.; Huneeus, N.; Kinne, S.; Lin, G.; Ma, X.; Myhre, G.; Penner, J. E.; Randles, C. A.; Samset, B.; Schulz, M.; Takemura, T.; Yu, F.; Yu, H.; Zhou, C.
2013-03-01
Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in the model components relevant for forcing calculations, and the associated "host-model uncertainties" are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host-model uncertainties in aerosol forcing experiments through prescription of identical aerosol radiative properties in twelve participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with a globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is -4.47 W m⁻² and the inter-model standard deviation is 0.55 W m⁻², corresponding to a relative standard deviation of 12%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and a single scattering albedo of 0.8, the forcing changes to 1.04 W m⁻², and the standard deviation increases to 1.01 W m⁻², corresponding to a significant relative standard deviation of 97%. However, the top-of-atmosphere forcing variability owing to absorption (subtracting the scattering case from the case with scattering and absorption) is low, with absolute (relative) standard deviations of 0.45 W m⁻² (8%) clear-sky and 0.62 W m⁻² (11%) all-sky. Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host-model uncertainties could explain about 36% of the overall sulfate forcing diversity of 0.11 W m⁻² in the AeroCom Direct Radiative Effect experiment. 
Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks, or in areas with poorly constrained surface albedos, such as sea ice. Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that requires further attention.
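The diversity numbers reported above are just the inter-model mean, standard deviation, and their ratio. A quick worked example with hypothetical model forcings (illustrative values, not AeroCom output):

```python
import statistics

# Hypothetical global-mean TOA forcings from several models, in W/m^2
forcings = [-4.1, -4.3, -4.5, -4.6, -4.8, -3.9, -5.0, -4.4]

mean_f = statistics.fmean(forcings)
sd_f = statistics.stdev(forcings)          # inter-model standard deviation
rel_sd = abs(sd_f / mean_f)                # relative standard deviation
print(f"mean={mean_f:.2f} W/m^2  sd={sd_f:.2f} W/m^2  rsd={100 * rel_sd:.0f}%")
```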
Burghelea, Manuela; Verellen, Dirk; Poels, Kenneth; Gevaert, Thierry; Depuydt, Tom; Tournel, Koen; Hung, Cecilia; Simon, Viorica; Hiraoka, Masahiro; de Ridder, Mark
2015-07-15
The purpose of this study was to define an independent verification method, based on on-board orthogonal fluoroscopy, to determine the geometric accuracy of synchronized gantry-ring (G/R) rotations during dynamic wave arc (DWA) delivery on the Vero system. A verification method for DWA was developed to calculate G/R positional information from ball-bearing positions retrieved from fluoroscopic images of a cubic phantom acquired during DWA delivery. Different noncoplanar trajectories were generated in order to investigate the influence of path complexity on delivery accuracy. The G/R positions detected from the fluoroscopy images (DetPositions) were benchmarked against the G/R angulations retrieved from the control points (CP) of the DWA RT plan and from the DWA log files recorded by the treatment console during delivery (LogActed). The G/R rotational accuracy was quantified as the mean absolute deviation ± standard deviation. The maximum G/R absolute deviation was calculated as the maximum 3-dimensional distance between a CP and the closest DetPosition. In the CP versus DetPositions comparison, an overall mean G/R deviation of 0.13°/0.16° ± 0.16°/0.16° was obtained, with a maximum G/R deviation of 0.6°/0.2°. For the LogActed versus DetPositions evaluation, the overall mean deviation was 0.08°/0.15° ± 0.10°/0.10°, with a maximum G/R deviation of 0.3°/0.4°. The largest decoupled deviations registered for the gantry and ring were 0.6° and 0.4°, respectively. No directional dependence was observed between clockwise and counterclockwise rotations. Doubling the dose resulted in twice as many detected points around each CP and a reduction of the angular deviation in all cases. An independent geometric quality assurance approach was developed for DWA delivery verification and was successfully applied to diverse trajectories. The results show that the Vero system is capable of following complex G/R trajectories, with maximum deviations during DWA below 0.6°. 
Copyright © 2015 Elsevier Inc. All rights reserved.
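The benchmark above — matching each planned control point to the closest detected gantry/ring position and reporting mean and maximum absolute deviations — can be sketched as follows. The angles are synthetic, and the distance here is taken in the 2-D gantry/ring angle space as a simple stand-in for the paper's 3-dimensional distance:

```python
import numpy as np

def gantry_ring_deviation(cp, detected):
    """Match each planned control point (gantry, ring) to the closest
    detected position; return per-axis mean and standard deviation of the
    absolute deviations, plus the maximum point-to-nearest distance."""
    dist = np.linalg.norm(cp[:, None, :] - detected[None, :, :], axis=2)
    nearest = detected[np.argmin(dist, axis=1)]   # closest detected position per CP
    dev = np.abs(cp - nearest)
    return dev.mean(axis=0), dev.std(axis=0), dist.min(axis=1).max()

cp = np.array([[0.0, 0.0], [30.0, 10.0], [60.0, 20.0]])      # planned angles (deg)
det = cp + np.array([[0.1, -0.1], [-0.2, 0.1], [0.1, 0.2]])  # detected angles (deg)
mean_dev, sd_dev, max_dist = gantry_ring_deviation(cp, det)
print(mean_dev, sd_dev, max_dist)
```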
A simple method to relate microwave radiances to upper tropospheric humidity
NASA Astrophysics Data System (ADS)
Buehler, S. A.; John, V. O.
2005-01-01
A brightness temperature (BT) transformation method can be applied to microwave data to retrieve Jacobian weighted upper tropospheric relative humidity (UTH) in a broad layer centered roughly between 6 and 8 km altitude. The UTH bias is below 4% RH, and the relative UTH bias below 20%. The UTH standard deviation is between 2 and 6.5% RH in absolute numbers, or between 10 and 27% in relative numbers. The standard deviation is dominated by the regression noise, resulting from vertical structure not accounted for by the simple transformation relation. The UTH standard deviation due to radiometric noise alone has a relative standard deviation of approximately 7% for a radiometric noise level of 1 K. The retrieval performance was shown to be of almost constant quality for all viewing angles and latitudes, except for problems at high latitudes due to surface effects. A validation of AMSU UTH against radiosonde UTH shows reasonable agreement if known systematic differences between AMSU and radiosonde are taken into account. When the method is applied to supersaturation studies, regression noise and radiometric noise could lead to an apparent supersaturation even if there were no supersaturation. For a radiometer noise level of 1 K the drop-off slope of the apparent supersaturation is 0.17% RH-1, for a noise level of 2 K the slope is 0.12% RH-1. The main conclusion from this study is that the BT transformation method is very well suited for microwave data. Its particular strength is in climatological applications where the simplicity and the a priori independence are key advantages.
Dmitrieva, E S; Gel'man, V Ia; Zaĭtseva, K A; Orlov, A M
2009-01-01
A comparative study of the acoustic correlates of emotional intonation was conducted on two types of speech material: meaningful speech utterances and short meaningless words. The corpus of speech signals with different emotional intonations (happy, angry, frightened, sad, and neutral) was created using the actors' method of simulating emotions. Native Russian speakers aged 20-70 years (both professional actors and non-actors) participated in the study. In the corpus, the following characteristics were analyzed: mean values and standard deviations of the power, the fundamental frequency, the frequencies of the first and second formants, and the utterance duration. Comparison of each emotional intonation with "neutral" utterances showed the greatest deviations in the fundamental frequency and the frequency of the first formant. The direction of these deviations was independent of the semantic content and duration of the speech utterance and of the speaker's age, gender, and acting experience, although the personal features of the speakers affected the absolute values of these frequencies.
Li, Yongtao; Whitaker, Joshua S; McCarty, Christina L
2012-07-06
A large volume direct aqueous injection method was developed for the analysis of iodinated haloacetic acids in drinking water by using reversed-phase liquid chromatography/electrospray ionization/tandem mass spectrometry in the negative ion mode. Both the external and internal standard calibration methods were studied for the analysis of monoiodoacetic acid, chloroiodoacetic acid, bromoiodoacetic acid, and diiodoacetic acid in drinking water. The use of a divert valve technique for the mobile phase solvent delay, along with isotopically labeled analogs used as internal standards, effectively reduced and compensated for the ionization suppression typically caused by coexisting common inorganic anions. Under the optimized method conditions, the mean absolute and relative recoveries resulting from the replicate fortified deionized water and chlorinated drinking water analyses were 83-107% with a relative standard deviation of 0.7-11.7% and 84-111% with a relative standard deviation of 0.8-12.1%, respectively. The method detection limits resulting from the external and internal standard calibrations, based on seven fortified deionized water replicates, were 0.7-2.3 ng/L and 0.5-1.9 ng/L, respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Bernhard, G.; Dahlback, A.; Fioletov, V.; Heikkilä, A.; Johnsen, B.; Koskela, T.; Lakkala, K.; Svendby, T.
2013-11-01
Greatly increased levels of ultraviolet (UV) radiation were observed at thirteen Arctic and sub-Arctic ground stations in the spring of 2011, when the ozone abundance in the Arctic stratosphere dropped to the lowest amounts on record. Measurements of the noontime UV Index (UVI) during the low-ozone episode exceeded the climatological mean by up to 77% at locations in the western Arctic (Alaska, Canada, Greenland) and by up to 161% in Scandinavia. The UVI measured at the end of March at the Scandinavian sites was comparable to that typically observed 15-60 days later in the year when solar elevations are much higher. The cumulative UV dose measured during the period of the ozone anomaly exceeded the climatological mean by more than two standard deviations at 11 sites. Enhancements beyond three standard deviations were observed at seven sites and increases beyond four standard deviations at two sites. At the western sites, the episode occurred in March, when the Sun was still low in the sky, limiting absolute UVI anomalies to less than 0.5 UVI units. At the Scandinavian sites, absolute UVI anomalies ranged between 1.0 and 2.2 UVI units. For example, at Finse, Norway, the noontime UVI on 30 March was 4.7, while the climatological UVI is 2.5. Although a UVI of 4.7 is still considered moderate, UV levels of this amount can lead to sunburn and photokeratitis during outdoor activity when radiation is reflected upward by snow towards the face of a person or animal. At the western sites, UV anomalies can be well explained with ozone anomalies of up to 41% below the climatological mean. At the Scandinavian sites, low ozone can only explain a UVI increase of 50-60%. The remaining enhancement was mainly caused by the absence of clouds during the low-ozone period.
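Expressing a measured dose as an anomaly in units of the climatological standard deviation, as done above, is a one-line calculation (the numbers below are made up for illustration, not station data):

```python
def anomaly_in_sd_units(observed, clim_mean, clim_sd):
    """Observed value as a deviation from the climatological mean,
    expressed in units of the climatological standard deviation."""
    return (observed - clim_mean) / clim_sd

# Hypothetical cumulative UV dose for one site, in kJ/m^2
print(anomaly_in_sd_units(58.0, 40.0, 6.0))   # → 3.0 (beyond three SDs)
```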
The statistical properties and possible causes of polar motion prediction errors
NASA Astrophysics Data System (ADS)
Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria
2015-08-01
The pole coordinate predictions from different contributors to the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts, by examining the time series of differences between the predictions and the future IERS pole coordinate data. The mean absolute errors, standard deviations, skewness, and kurtosis of these differences were computed, together with their error bars, as a function of prediction length. The ensemble predictions show slightly smaller mean absolute errors and standard deviations; however, their skewness and kurtosis values are similar to those of the predictions from the individual contributors. The skewness and kurtosis make it possible to check whether these prediction differences follow a normal distribution. The kurtosis values diminish with prediction length, which means that the probability distribution of the prediction differences becomes more platykurtic than leptokurtic. Non-zero skewness values result from the oscillating character of these differences at particular prediction lengths, which may be due to the irregular change of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase computed by a combination of the Fourier transform band-pass filter and the Hilbert transform from pole coordinate data, as well as from pole coordinate model data obtained from fluid excitations, are in good agreement.
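The four statistics used above have simple moment definitions (a kurtosis below 3 indicates a platykurtic, flatter-than-normal distribution). A sketch on synthetic prediction errors, not EOPCPPP data:

```python
import numpy as np

def error_statistics(diff):
    """Mean absolute error, standard deviation, skewness and kurtosis of a
    series of prediction-minus-observation differences (moment definitions)."""
    mu, sigma = diff.mean(), diff.std()
    z = (diff - mu) / sigma
    return np.abs(diff).mean(), sigma, np.mean(z**3), np.mean(z**4)

rng = np.random.default_rng(2)
diff = rng.normal(0.0, 2.0, size=50_000)   # synthetic prediction errors, e.g. in mas
mae, sd, skew, kurt = error_statistics(diff)
print(f"MAE={mae:.2f}  SD={sd:.2f}  skew={skew:.2f}  kurtosis={kurt:.2f}")
```

For a normal sample the kurtosis is close to 3 and the skewness close to 0, which is the baseline against which platykurtic behaviour is judged.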
New calibrators for the Cepheid period-luminosity relation
NASA Technical Reports Server (NTRS)
Evans, Nancy R.
1992-01-01
IUE spectra of six Cepheids have been used to determine their absolute magnitudes from the spectral types of their binary companions. The stars observed are U Aql, V659 Cen, Y Lac, S Nor, V350 Sgr, and V636 Sco. The absolute magnitude for V659 Cen is more uncertain than for the others because its reddening is poorly determined and the spectral type is hotter than those of the others. In addition, a reddening law with extra absorption in the 2200 Å region is necessary, although this has a negligible effect on the absolute magnitude. For the other Cepheids, and also Eta Aql and W Sgr, the standard deviation from the Feast and Walker period-luminosity-color (PLC) relation is 0.37 mag, confirming the previously estimated internal uncertainty. The absolute magnitudes for S Nor from the binary companion and from cluster membership are very similar. The preliminary PLC zero point is less than 2 sigma (+0.21 mag) different from that of Feast and Walker. The same narrowing of the instability strip at low luminosities found by Fernie is seen.
Modeling the gas-phase thermochemistry of organosulfur compounds.
Vandeputte, Aäron G; Sabbe, Maarten K; Reyniers, Marie-Françoise; Marin, Guy B
2011-06-27
Key to understanding the involvement of organosulfur compounds in a variety of radical chemistries, such as atmospheric chemistry, polymerization, and pyrolysis, is knowledge of their thermochemical properties. For organosulfur compounds and radicals, however, thermochemical data are much less well documented than for hydrocarbons. The traditional recourse to the Benson group additivity method offers no solace, since only a very limited number of group additivity values (GAVs) is available. In this work, CBS-QB3 calculations augmented with 1D hindered-rotor corrections for 122 organosulfur compounds and 45 organosulfur radicals were used to derive 93 Benson group additivity values, 18 ring-strain corrections, 2 non-nearest-neighbor interactions, and 3 resonance corrections for standard enthalpies of formation, standard molar entropies, and heat capacities of organosulfur compounds and radicals. The reported GAVs are consistent with previously reported GAVs for hydrocarbons and hydrocarbon radicals and include 77 contributions, among them 26 radical contributions, which, to the best of our knowledge, have not been reported before. The GAVs allow one to estimate the standard enthalpies of formation at 298 K, the standard entropies at 298 K, and the standard heat capacities in the temperature range 300-1500 K for a large set of organosulfur compounds, that is, thiols, thioketones, polysulfides, alkyl sulfides, thials, dithioates, and cyclic sulfur compounds. For a validation set of 26 organosulfur compounds, the mean absolute deviation between experimental and group-additively modeled enthalpies of formation amounts to 1.9 kJ mol⁻¹. For an additional set of 14 organosulfur compounds, it was shown that the mean absolute deviations between calculated and group-additively modeled standard entropies and heat capacities are restricted to 4 and 2 J mol⁻¹ K⁻¹, respectively. 
As an alternative to Benson GAVs, 26 new hydrogen-bond increments are reported, which can also be useful for the prediction of radical thermochemistry. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
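Group additivity itself is just a weighted sum of per-group contributions. A minimal sketch with placeholder GAVs (the group labels follow Benson notation, but the numerical values below are invented for illustration, not the values derived in the paper):

```python
def benson_enthalpy(groups, gav):
    """Benson group additivity: estimate a standard enthalpy of formation
    (kJ/mol) as the sum of group-additivity values, one per occurrence."""
    return sum(count * gav[g] for g, count in groups.items())

# Placeholder GAVs for the groups of an alkanethiol R-SH (kJ/mol)
gav = {"C-(C)(H)3": -42.9, "C-(C)(S)(H)2": -10.0, "S-(C)(H)": 19.0}

# Ethanethiol decomposed into its three Benson groups
ethanethiol = {"C-(C)(H)3": 1, "C-(C)(S)(H)2": 1, "S-(C)(H)": 1}
print(benson_enthalpy(ethanethiol, gav))
```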
Thermal sensing of cryogenic wind tunnel model surfaces Evaluation of silicon diodes
NASA Technical Reports Server (NTRS)
Daryabeigi, K.; Ash, R. L.; Dillon-Townes, L. A.
1986-01-01
Different sensors and installation techniques for surface temperature measurement of cryogenic wind tunnel models were investigated. Silicon diodes were selected for further consideration because of their good inherent accuracy. Their average absolute temperature deviation in comparison tests with standard platinum resistance thermometers was found to be 0.2 K in the range from 125 to 273 K. Subsurface temperature measurement was selected as the installation technique in order to minimize aerodynamic interference. Temperature distortion caused by an embedded silicon diode was studied numerically.
Analyses of S-Box in Image Encryption Applications Based on Fuzzy Decision Making Criterion
NASA Astrophysics Data System (ADS)
Rehman, Inayatur; Shah, Tariq; Hussain, Iqtadar
2014-06-01
In this manuscript, we put forward a standard based on a fuzzy decision-making criterion to examine current substitution boxes and study their strengths and weaknesses in order to determine their appropriateness for image encryption applications. The proposed standard utilizes the results of correlation analysis, entropy analysis, contrast analysis, homogeneity analysis, energy analysis, and mean-of-absolute-deviation analysis. These analyses are applied to well-known substitution boxes. The outcomes are then examined further, and a fuzzy soft set decision-making criterion is used to decide the suitability of an S-box for image encryption applications.
Lin, Lawrence; Pan, Yi; Hedayat, A S; Barnhart, Huiman X; Haber, Michael
2016-01-01
Total deviation index (TDI) captures a prespecified quantile of the absolute deviation of paired observations from raters, observers, methods, assays, instruments, etc. We compare the performance of TDI using nonparametric quantile regression to the TDI assuming normality (Lin, 2000). This simulation study considers three distributions: normal, Poisson, and uniform at quantile levels of 0.8 and 0.9 for cases with and without contamination. Study endpoints include the bias of TDI estimates (compared with their respective theoretical values), standard error of TDI estimates (compared with their true simulated standard errors), and test size (compared with 0.05), and power. Nonparametric TDI using quantile regression, although it slightly underestimates and delivers slightly less power for data without contamination, works satisfactorily under all simulated cases even for moderate (say, ≥40) sample sizes. The performance of the TDI based on a quantile of 0.8 is in general superior to that of 0.9. The performances of nonparametric and parametric TDI methods are compared with a real data example. Nonparametric TDI can be very useful when the underlying distribution on the difference is not normal, especially when it has a heavy tail.
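The nonparametric TDI described above is simply an empirical quantile of the absolute paired differences. A sketch with synthetic rater data (illustrative, not the authors' simulation design):

```python
import numpy as np

def tdi_nonparametric(x, y, p=0.9):
    """Nonparametric total deviation index: the p-th empirical quantile of
    the absolute differences between paired measurements."""
    return np.quantile(np.abs(np.asarray(x) - np.asarray(y)), p)

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, size=1_000)          # rater/method 1
y = x + rng.normal(0.0, 0.5, size=x.size)     # rater/method 2, with extra noise
print(tdi_nonparametric(x, y, p=0.8), tdi_nonparametric(x, y, p=0.9))
```

Under normally distributed differences, the parametric TDI of Lin (2000) evaluates this quantile from the fitted normal instead; the empirical-quantile form drops that distributional assumption.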
Regional comparison of absolute gravimeters SIM.M.G-K1 key comparison
NASA Astrophysics Data System (ADS)
Newell, D. B.; van Westrum, D.; Francis, O.; Kanney, J.; Liard, J.; Ramirez, A. E.; Lucero, B.; Ellis, B.; Greco, F.; Pistorio, A.; Reudink, R.; Iacovone, D.; Baccaro, F.; Silliker, J.; Wheeler, R. D.; Falk, R.; Ruelke, A.
2017-01-01
Twelve absolute gravimeters were compared during the regional key comparison SIM.M.G-K1 of absolute gravimeters. Four of the gravimeters were from NMIs and DIs. The comparison was linked to the CCM.G-K2 through EURAMET.M.G-K2 via the DI gravimeter FG5X-216. Overall, the results and uncertainties indicate an excellent agreement among the gravimeters, with a standard deviation of the gravimeters' degrees of equivalence (DoEs) better than 1.3 μGal. In the official solution, all the gravimeters are in equivalence well within the declared uncertainties. The final report appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org) and has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Wu, Hanzhong; Zhang, Fumin; Liu, Tingyang; Li, Jianshuang; Qu, Xinghua
2016-10-17
Two-color interferometry is powerful for correcting the air refractive index, especially in turbulent air over long distances, since the empirical equations can introduce considerable measurement uncertainty if the environmental parameters cannot be measured with sufficient precision. In this paper, we demonstrate a method for absolute distance measurement with high-accuracy correction of the air refractive index using two-color dispersive interferometry. The distances corresponding to the two wavelengths can be measured in real time via the spectrograms captured by a pair of CCD cameras. In the long-term experiment on the correction of the air refractive index, the results show a standard deviation of 3.3 × 10⁻⁸ for 12 h of continuous measurement without precise knowledge of the environmental conditions, while the variation of the air refractive index was about 2 × 10⁻⁶. For absolute distance measurement, comparison with a fringe-counting interferometer shows agreement within 2.5 μm over a 12 m range.
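The two-color principle can be illustrated with the standard dispersion formula D = d1 − A·(d2 − d1), where A = (n1 − 1)/(n2 − n1) depends almost entirely on the two wavelengths rather than on the environment. This is the generic textbook method, not necessarily the authors' exact processing, and the refractivities below are made-up values:

```python
def two_color_distance(d1, d2, A):
    """Recover the geometric distance from the optical distances d1, d2
    measured at two wavelengths: D = d1 - A*(d2 - d1)."""
    return d1 - A * (d2 - d1)

# Synthetic check with assumed refractive indices and a known true distance
n1, n2, D_true = 1.000268, 1.000271, 12.0     # hypothetical values
A = (n1 - 1.0) / (n2 - n1)                    # the "A coefficient"
d1, d2 = D_true * n1, D_true * n2             # simulated optical path lengths
print(two_color_distance(d1, d2, A))          # recovers ≈ 12.0 m
```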
Using operations research to plan improvement of the transport of critically ill patients.
Chen, Jing; Awasthi, Anjali; Shechter, Steven; Atkins, Derek; Lemke, Linda; Fisher, Les; Dodek, Peter
2013-01-01
Operations research is the application of mathematical modeling, statistical analysis, and mathematical optimization to understand and improve processes in organizations. The objective of this study was to illustrate how the methods of operations research can be used to identify opportunities to reduce the absolute value and variability of interfacility transport intervals for critically ill patients. After linking data from two patient transport organizations in British Columbia, Canada, for all critical care transports during the calendar year 2006, the steps for transfer of critically ill patients were tabulated into a series of time intervals. Statistical modeling, root-cause analysis, Monte Carlo simulation, and sensitivity analysis were used to test the effect of changes in component intervals on overall duration and variation of transport times. Based on quality improvement principles, we focused on reducing the 75th percentile and standard deviation of these intervals. We analyzed a total of 3808 ground and air transports. Constraining time spent by transport personnel at sending and receiving hospitals was projected to reduce the total time taken by 33 minutes with as much as a 20% reduction in standard deviation of these transport intervals in 75% of ground transfers. Enforcing a policy of requiring acceptance of patients who have life- or limb-threatening conditions or organ failure was projected to reduce the standard deviation of air transport time by 63 minutes and the standard deviation of ground transport time by 68 minutes. Based on findings from our analyses, we developed recommendations for technology renovation, personnel training, system improvement, and policy enforcement. Use of the tools of operations research identifies opportunities for improvement in a complex system of critical care transport.
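A Monte Carlo analysis of this kind samples each stage duration, sums them, and reads off the percentile of interest; a policy change is modeled as a constraint on one stage. A toy sketch (the lognormal parameters and the 30-minute cap are invented for illustration, not fitted to the study's data):

```python
import random

def percentile75_transport(n=20_000, onsite_cap=None, seed=4):
    """Simulate n transports as a sum of stage durations (minutes) and
    return the 75th percentile of the total; optionally cap the time
    spent at the sending hospital to mimic a policy change."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        dispatch = rng.lognormvariate(2.5, 0.4)
        travel = rng.lognormvariate(3.5, 0.3)
        onsite = rng.lognormvariate(3.0, 0.6)   # time at the sending hospital
        if onsite_cap is not None:
            onsite = min(onsite, onsite_cap)    # policy: constrain on-site time
        totals.append(dispatch + travel + onsite)
    totals.sort()
    return totals[int(0.75 * n)]

print(percentile75_transport(), percentile75_transport(onsite_cap=30.0))
```

With the same random seed, the capped run can never exceed the uncapped one, which is how such simulations quantify the projected benefit of a policy.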
Age-specific absolute and relative organ weight distributions for B6C3F1 mice.
Marino, Dale J
2012-01-01
The B6C3F1 mouse is the standard mouse strain used in toxicology studies conducted by the National Cancer Institute (NCI) and the National Toxicology Program (NTP). While numerous reports have been published on growth, survival, and tumor incidence, no overall compilation of organ weight data is available. Importantly, organ weight change is an endpoint used by regulatory agencies to develop toxicity reference values (TRVs) for use in human health risk assessments. Furthermore, physiologically based pharmacokinetic (PBPK) models, which utilize relative organ weights, are increasingly being used to develop TRVs. Therefore, all available absolute and relative organ weight data for untreated control B6C3F1 mice were collected from NCI/NTP studies in order to develop age-specific distributions. Results show that organ weights were collected more frequently in NCI/NTP studies at 2-wk (60 studies), 3-mo (147 studies), and 15-mo (40 studies) intervals than at other intervals, and more frequently from feeding and inhalation than drinking water studies. Liver, right kidney, lung, heart, thymus, and brain weights were most frequently collected. From the collected data, the mean and standard deviation for absolute and relative organ weights were calculated. Results show age-related increases in absolute liver, right kidney, lung, and heart weights and relatively stable brain and right testis weights. The results suggest a general variability trend in absolute organ weights of brain < right testis < right kidney < heart < liver < lung < spleen < thymus. This report describes the results of this effort.
Gravimetric method for the determination of diclofenac in pharmaceutical preparations.
Tubino, Matthieu; De Souza, Rafael L
2005-01-01
A gravimetric method for the determination of diclofenac in pharmaceutical preparations was developed. Diclofenac is precipitated from aqueous solution with copper(II) acetate at pH 5.3 (acetic acid/acetate buffer). Sample aliquots contained approximately the drug content of one tablet (50 mg) or one ampule (75 mg). The observed standard deviation was about +/- 2 mg; the relative standard deviation (RSD) was therefore approximately 4% for tablet and 3% for ampule preparations. The results were compared, using Student's t-test, with those obtained with the liquid chromatography method recommended in the United States Pharmacopoeia; complete agreement was observed. More precise results can be obtained using larger aliquots, for example 200 mg, in which case the RSD falls to 1%. Contrary to what is expected for this kind of procedure, the gravimetric method is relatively fast and simple to perform. Its main advantage is the absolute character of gravimetric analysis.
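The reported precision follows directly from RSD = 100 x SD / mean; a minimal sketch using the values quoted above:

```python
def rsd_percent(sd, mean):
    """Relative standard deviation (coefficient of variation) in percent."""
    return 100.0 * sd / mean

# SD of about +/- 2 mg, as reported in the abstract:
print(rsd_percent(2.0, 50.0))         # tablets, 50 mg -> 4.0
print(round(rsd_percent(2.0, 75.0)))  # ampules, 75 mg -> 3
print(rsd_percent(2.0, 200.0))        # larger 200 mg aliquots -> 1.0
```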
NASA Technical Reports Server (NTRS)
Maximenko, Nikolai A.
2003-01-01
Mean absolute sea level reflects the deviation of the ocean surface from the geoid due to ocean currents and is an important characteristic of the dynamical state of the ocean. Its spatial variations (on the order of 1 m) are generally much smaller than deviations of the geoid shape from the ellipsoid (on the order of 100 m), which makes the derivation of the absolute mean sea level a difficult task for gravity and satellite altimetry observations. The technique used by Niiler et al. for computation of the absolute mean sea level in the Kuroshio Extension was later developed into a more general method and applied by Niiler et al. (2003b) to the global ocean. The method is based on consideration of the balance of horizontal momentum.
Heavy Ozone Enrichments from ATMOS Infrared Solar Spectra
NASA Technical Reports Server (NTRS)
Irion, F. W.; Gunson, M. R.; Rinsland, C. P.; Yung, Y. L.; Abrams, M. C.; Chang, A. Y.; Goldman, A.
1996-01-01
Vertical enrichment profiles of stratospheric O-16O-16O-18 and O-16O-18O-16 (hereafter referred to as (668)O3 and (686)O3 respectively) have been derived from space-based solar occultation spectra recorded at 0.01 cm(exp-1) resolution by the ATMOS (Atmospheric Trace MOlecule Spectroscopy) Fourier transform infrared (FTIR) spectrometer. The observations, made during the Spacelab 3 and ATLAS-1, -2, and -3 shuttle missions, cover polar, mid-latitude and tropical regions between 26 and 2.6 mb inclusive (approximately 25 to 41 km). Average enrichments, weighted by molecular (48)O3 density, of (15 +/- 6)% were found for (668)O3 and (10 +/- 7)% for (686)O3. Defining the mixing ratio of (50)O3 as the sum of those for (668)O3 and (686)O3, an enrichment of (13 +/- 5)% was found for (50)O3 (1 sigma standard deviation). No latitudinal or vertical gradients were found outside this standard deviation. From a series of ground-based measurements by the ATMOS instrument at Table Mountain, California (34.4 deg N), an average total column (668)O3 enrichment of (17 +/- 4)% (1 sigma standard deviation) was determined, with no significant seasonal variation discernible. Possible biases in the spectral intensities that affect the determination of absolute enrichments are discussed.
Allan deviation analysis of financial return series
NASA Astrophysics Data System (ADS)
Hernández-Pérez, R.
2012-05-01
We perform a scaling analysis of the return series of different financial assets using the Allan deviation (ADEV), a quantity employed in time and frequency metrology to characterize the stability of frequency standards, since it has proven robust for analyzing fluctuations of non-stationary time series over different observation intervals. The data are daily opening-price series for assets from different markets over a time span of around ten years. We found that the ADEV results for the return series at short scales resemble those expected for an uncorrelated series, consistent with the efficient-market hypothesis. For the absolute return series, on the other hand, the ADEV at short scales (the first one or two decades) decreases following an approximate scaling relation up to a point that differs from asset to asset, after which the ADEV deviates from scaling. This suggests that clustering, long-range dependence, and non-stationarity signatures in the series drive the results at large observation intervals.
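A minimal non-overlapped Allan-deviation estimator, in the spirit of the analysis above, can be sketched as follows (an illustrative implementation, not the author's code):

```python
import numpy as np

def allan_deviation(x, m):
    """Non-overlapped Allan deviation of series x at averaging factor m.

    Groups x into blocks of length m, averages each block, and returns
    sqrt(0.5 * mean of squared differences of consecutive block means).
    """
    n = len(x) // m
    means = np.asarray(x[: n * m]).reshape(n, m).mean(axis=1)
    diffs = np.diff(means)
    return np.sqrt(0.5 * np.mean(diffs ** 2))

# For an uncorrelated (white-noise) series, the ADEV falls off with the
# averaging factor roughly as m ** -0.5.
rng = np.random.default_rng(0)
returns = rng.normal(size=10_000)
print(allan_deviation(returns, 1) > allan_deviation(returns, 100))
```

Applying the same estimator to absolute returns, as in the paper, is a one-line change (`np.abs(returns)`).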
NASA Astrophysics Data System (ADS)
Prentice, Boone M.; Chumbley, Chad W.; Caprioli, Richard M.
2017-01-01
Matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI IMS) allows for the visualization of molecular distributions within tissue sections. While providing excellent molecular specificity and spatial information, absolute quantification by MALDI IMS remains challenging. Especially in the low molecular weight region of the spectrum, analysis is complicated by matrix interferences and ionization suppression. Though tandem mass spectrometry (MS/MS) can be used to ensure chemical specificity and improve sensitivity by eliminating chemical noise, typical MALDI MS/MS modalities only scan for a single MS/MS event per laser shot. Herein, we describe TOF/TOF instrumentation that enables multiple fragmentation events to be performed in a single laser shot, allowing the intensity of the analyte to be referenced to the intensity of the internal standard in each laser shot while maintaining the benefits of MS/MS. This approach is illustrated by the quantitative analyses of rifampicin (RIF), an antibiotic used to treat tuberculosis, in pooled human plasma using rifapentine (RPT) as an internal standard. The results show greater than 4-fold improvements in relative standard deviation as well as improved coefficients of determination (R2) and accuracy (>93% quality controls, <9% relative errors). This technology is used as an imaging modality to measure absolute RIF concentrations in liver tissue from an animal dosed in vivo. Each microspot in the quantitative image measures the local RIF concentration in the tissue section, providing absolute pixel-to-pixel quantification from different tissue microenvironments. The average concentration determined by IMS is in agreement with the concentration determined by HPLC-MS/MS, showing a percent difference of 10.6%.
NASA Astrophysics Data System (ADS)
Kosar, Naveen; Mahmood, Tariq; Ayub, Khurshid
2017-12-01
A benchmark study has been carried out to find a cost-effective and accurate method for the bond dissociation energy (BDE) of the carbon-halogen (C-X) bond. The BDE of the C-X bond plays a vital role in chemical reactions, particularly in kinetic barriers and thermochemistry. The compounds (1-16, Fig. 1) with C-X bonds used in the current benchmark study are important reactants in organic, inorganic and bioorganic chemistry. Experimental C-X bond dissociation energies are compared with theoretical results using statistical analysis tools such as root mean square deviation (RMSD), standard deviation (SD), Pearson's correlation (R) and mean absolute error (MAE). Overall, thirty-one density functionals from eight different classes of density functional theory (DFT), along with Pople and Dunning basis sets, are evaluated. Among the different classes of DFT, the dispersion-corrected range-separated hybrid GGA class, with the 6-31G(d), 6-311G(d), aug-cc-pVDZ and aug-cc-pVTZ basis sets, performed best for bond dissociation energy calculations of the C-X bond. ωB97XD shows the best performance, with the smallest deviations (RMSD, SD) and mean absolute error (MAE) and a significant Pearson's correlation (R) when compared to experimental data. ωB97XD with the Pople basis set 6-311G(d) has RMSD, SD, R and MAE of 3.14 kcal mol-1, 3.05 kcal mol-1, 0.97 and -1.07 kcal mol-1, respectively.
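The four statistics used to rank the functionals can be sketched as below. The BDE values in the example are hypothetical, and since the reported "MAE" is negative (-1.07 kcal/mol) it evidently denotes a mean signed error, which is what this sketch computes:

```python
import numpy as np

def benchmark_stats(calc, exp):
    """Deviation statistics for comparing calculated vs. experimental values.

    Returns RMSD, the sample standard deviation of the errors, Pearson's
    correlation R, and the mean signed error.
    """
    err = np.asarray(calc, float) - np.asarray(exp, float)
    rmsd = np.sqrt(np.mean(err ** 2))
    sd = np.std(err, ddof=1)
    r = np.corrcoef(calc, exp)[0, 1]
    me = np.mean(err)
    return rmsd, sd, r, me

calc = [83.2, 71.5, 65.0, 96.8]  # hypothetical calculated BDEs, kcal/mol
exp = [84.0, 70.9, 66.1, 97.5]   # hypothetical experimental BDEs, kcal/mol
rmsd, sd, r, me = benchmark_stats(calc, exp)
print(r > 0.99)
```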
Rectangularization of the survival curve in The Netherlands, 1950-1992.
Nusselder, W J; Mackenbach, J P
1996-12-01
In this article we determine whether rectangularization of the survival curve occurred in the Netherlands in the period 1950-1992. Rectangularization is defined as a trend toward a more rectangular shape of the survival curve due to increased survival and concentration of deaths around the mean age at death. We distinguish between absolute and relative rectangularization, depending on whether an increase in life expectancy is accompanied by concentration of deaths into a smaller age interval or into a smaller proportion of total life expectancy. We used measures of variability based on Keyfitz' H and the standard deviation, both life table-based. Our results show that absolute and relative rectangularization of the entire survival curve occurred in both sexes and over the complete period (except for the years 1955-1959 and 1965-1969 in men). At older ages, results differ between sexes, periods, and an absolute versus a relative definition of rectangularization. Above age 60 1/2, relative rectangularization occurred in women over the complete period and in men since 1975-1979 only, whereas absolute rectangularization occurred in both sexes since the period of 1980-1984. The implications of the recent rectangularization at older ages for achieving compression of morbidity are discussed.
ERIC Educational Resources Information Center
Helmreich, James E.; Krog, K. Peter
2018-01-01
We present a short, inquiry-based learning course on concepts and methods underlying ordinary least squares (OLS), least absolute deviation (LAD), and quantile regression (QR). Students investigate squared, absolute, and weighted absolute distance functions (metrics) as location measures. Using differential calculus and properties of convex…
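The contrast between squared and absolute distance functions that the course explores can be illustrated in a few lines: the mean minimizes the sum of squared deviations (the OLS location measure), while the median minimizes the sum of absolute deviations (the LAD location measure), which is why LAD is more tolerant of outliers:

```python
import statistics

def sum_sq(data, c):
    """Squared-distance objective, minimized by the mean (OLS location)."""
    return sum((x - c) ** 2 for x in data)

def sum_abs(data, c):
    """Absolute-distance objective, minimized by the median (LAD location)."""
    return sum(abs(x - c) for x in data)

data = [1, 2, 3, 4, 100]  # one extreme value
mean, median = statistics.mean(data), statistics.median(data)

# The mean is dragged toward the outlier; the median is not.
print(mean)    # 22
print(median)  # 3

# Each location measure minimizes its own objective.
print(sum_sq(data, mean) <= sum_sq(data, median))
print(sum_abs(data, median) <= sum_abs(data, mean))
```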
SNPP VIIRS RSB Earth View Reflectance Uncertainty
NASA Technical Reports Server (NTRS)
Lei, Ning; Twedt, Kevin; McIntire, Jeff; Xiong, Xiaoxiong
2017-01-01
The Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP) satellite uses its 14 reflective solar bands to passively collect solar radiant energy reflected off the Earth. The Level 1 product is the geolocated and radiometrically calibrated top-of-the-atmosphere solar reflectance. The absolute radiometric uncertainty associated with this product includes contributions from the noise associated with measured detector digital counts and the radiometric calibration bias. Here, we provide a detailed algorithm for calculating the estimated standard deviation of the retrieved top-of-the-atmosphere spectral solar radiation reflectance.
High-speed rangefinder for industrial application
NASA Astrophysics Data System (ADS)
Cavedo, Federico; Pesatori, Alessandro; Norgia, Michele
2016-06-01
The proposed work aims to improve one of the most widely used telemetry techniques for absolute distance measurement: time-of-flight telemetry. The main limitations of low-cost implementations of this technique are low accuracy (a few mm) and low measurement rate (a few measurements per second). To overcome these limits, we modified the typical rangefinder setup by exploiting low-cost telecommunication transceivers and radio-frequency synthesizers. The obtained performance is very encouraging, reaching a standard deviation of a few micrometers over a range of some meters.
Hyperfine-resolved transition frequency list of fundamental vibration bands of H35Cl and H37Cl
NASA Astrophysics Data System (ADS)
Iwakuni, Kana; Sera, Hideyuki; Abe, Masashi; Sasada, Hiroyuki
2014-12-01
Sub-Doppler resolution spectroscopy of the fundamental vibration bands of H35Cl and H37Cl has been carried out from 87.1 to 89.9 THz. We have determined the absolute transition frequencies of the hyperfine-resolved R(0) to R(4) transitions with a typical uncertainty of 10 kHz. We have also obtained six molecular constants for each isotopomer in the vibrationally excited state, which reproduce the determined frequencies with a standard deviation of about 10 kHz.
Predicting Functional Capacity From Measures of Muscle Mass in Postmenopausal Women.
Orsatti, Fábio Lera; Nunes, Paulo Ricardo Prado; Souza, Aletéia de Paula; Martins, Fernanda Maria; de Oliveira, Anselmo Alves; Nomelini, Rosekeila Simões; Michelin, Márcia Antoniazi; Murta, Eddie Fernando Cândido
2017-06-01
Menopause increases body fat and decreases muscle mass and strength, which contribute to sarcopenia. The amount of appendicular muscle mass has been frequently used to diagnose sarcopenia, and different measures of appendicular muscle mass have been proposed. However, no studies have compared the most salient measure of appendicular muscle mass (appendicular muscle mass corrected by body fat) with physical function in postmenopausal women. To examine the association of 3 different measurements of appendicular muscle mass (absolute, corrected by stature, and corrected by body fat) with physical function in postmenopausal women. Cross-sectional descriptive study. Outpatient geriatric and gynecological clinic. Forty-eight postmenopausal women with a mean age (standard deviation [SD]) of 62.1 ± 8.2 years, mean (SD) length of menopause of 15.7 ± 9.8 years, and mean (SD) body fat of 43.6% ± 9.8%. Not applicable. Appendicular muscle mass was measured with dual-energy x-ray absorptiometry. Physical function was measured by a functional capacity questionnaire, a short physical performance battery, and a 6-minute walk test. Muscle quality (ratio of leg extensor strength to lower-body mineral-free lean mass) and the sum of z scores (sum of the z scores of the physical function tests) were computed to provide a global index of physical function. The regression analysis showed that appendicular muscle mass corrected by body fat was the strongest predictor of physical function. Each standard-deviation increase in appendicular muscle mass corrected by body fat was associated with a mean increase in the sum of z scores of 59% of a standard deviation, whereas each increase in absolute appendicular muscle mass and in appendicular muscle mass corrected by stature was associated with a mean decrease in the sum of z scores of 23% and 36%, respectively. Muscle quality was associated with appendicular muscle mass corrected by body fat.
These findings indicate that appendicular muscle mass corrected by body fat is a better predictor of physical function than the other measures of appendicular muscle mass in postmenopausal women.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wen, N; Lu, S; Qin, Y
Purpose: To evaluate the dosimetric uncertainty associated with Gafchromic (EBT3) films and establish an absolute dosimetry protocol for Stereotactic Radiosurgery (SRS) and Stereotactic Body Radiotherapy (SBRT). Methods: EBT3 films were irradiated at each of seven different dose levels between 1 and 15 Gy with open fields, and standard deviations of dose maps were calculated at each color channel for evaluation. A scanner non-uniform response correction map was built by registering and comparing film doses to the reference diode array-based dose map delivered with the same doses. To determine the temporal dependence of EBT3 films, the average correction factors of different dose levels as a function of time were evaluated up to four days after irradiation. An integrated film dosimetry protocol was developed for dose calibration, calibration curve fitting, dose mapping, and profile/gamma analysis. Patient-specific quality assurance (PSQA) was performed for 93 SRS/SBRT treatment plans. Results: The scanner response varied within 1% for field sizes less than 5 × 5 cm{sup 2}, and up to 5% for field sizes of 10 × 10 cm{sup 2}. The scanner correction method was able to remove visually evident, irregular detector responses found for larger field sizes. The dose response of the film changed rapidly (∼10%) in the first two hours and plateaued afterwards, with a ∼3% change between 2 and 24 hours. The mean uncertainties (mean of the standard deviations) were <0.5% over the dose range 1-15 Gy for all color channels of the OD response curves. The percentage of points passing the 3%/1 mm gamma criteria based on absolute dose analysis, averaged over all tests, was 95.0 ± 4.2. Conclusion: We have developed an absolute film dosimetry protocol using EBT3 films. The overall uncertainty has been established to be approximately 1% for SRS and SBRT PSQA. The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE, from the American Cancer Society.
Mirkhani, Seyyed Alireza; Gharagheizi, Farhad; Sattari, Mehdi
2012-03-01
Evaluation of diffusion coefficients of pure compounds in air is of great interest for many diverse industrial and air quality control applications. In this communication, a QSPR method is applied to predict the molecular diffusivity of chemical compounds in air at 298.15K and atmospheric pressure. Four thousand five hundred and seventy-nine organic compounds from a broad spectrum of chemical families have been investigated to propose a comprehensive and predictive model. The final model is derived by Genetic Function Approximation (GFA) and contains five descriptors. Using this dedicated model, we obtain satisfactory results quantified by the following statistical results: Squared Correlation Coefficient=0.9723, Standard Deviation Error=0.003 and Average Absolute Relative Deviation=0.3% for the predicted properties from existing experimental values.
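The average absolute relative deviation (AARD) quoted above is straightforward to compute; a minimal sketch with hypothetical diffusivity values:

```python
def aard_percent(pred, exp):
    """Average absolute relative deviation between predictions and
    experimental values, in percent."""
    return 100.0 * sum(abs(p - e) / abs(e) for p, e in zip(pred, exp)) / len(exp)

# Hypothetical diffusivities in cm2/s (illustration only):
pred = [0.102, 0.099]
exp = [0.100, 0.100]
print(round(aard_percent(pred, exp), 2))  # -> 1.5
```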
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, You; Yin, Fang-Fang; Ren, Lei, E-mail: lei.ren@duke.edu
2015-08-15
Purpose: Lung cancer treatment is susceptible to treatment errors caused by interfractional anatomical and respirational variations of the patient. On-board treatment dose verification is especially critical for lung stereotactic body radiation therapy due to its high fractional dose. This study investigates the feasibility of using cone-beam (CB)CT images estimated by a motion modeling and free-form deformation (MM-FD) technique for on-board dose verification. Methods: Both digital and physical phantom studies were performed. Various interfractional variations featuring patient motion pattern change, tumor size change, and tumor average position change were simulated from planning CT to on-board images. The doses calculated on the planning CT (planned doses), the on-board CBCT estimated by MM-FD (MM-FD doses), and the on-board CBCT reconstructed by the conventional Feldkamp-Davis-Kress (FDK) algorithm (FDK doses) were compared to the on-board dose calculated on the “gold-standard” on-board images (gold-standard doses). The absolute deviations of minimum dose (ΔD{sub min}), maximum dose (ΔD{sub max}), and mean dose (ΔD{sub mean}), and the absolute deviations of prescription dose coverage (ΔV{sub 100%}) were evaluated for the planning target volume (PTV). In addition, 4D on-board treatment dose accumulations were performed using 4D-CBCT images estimated by MM-FD in the physical phantom study. The accumulated doses were compared to those measured using optically stimulated luminescence (OSL) detectors and radiochromic films. Results: Compared with the planned doses and the FDK doses, the MM-FD doses matched much better with the gold-standard doses.
For the digital phantom study, the average (± standard deviation) ΔD{sub min}, ΔD{sub max}, ΔD{sub mean}, and ΔV{sub 100%} (values normalized by the prescription dose or the total PTV) between the planned and the gold-standard PTV doses were 32.9% (±28.6%), 3.0% (±2.9%), 3.8% (±4.0%), and 15.4% (±12.4%), respectively. The corresponding values of FDK PTV doses were 1.6% (±1.9%), 1.2% (±0.6%), 2.2% (±0.8%), and 17.4% (±15.3%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.3% (±0.2%), 0.9% (±0.6%), 0.6% (±0.4%), and 1.0% (±0.8%), respectively. Similarly, for the physical phantom study, the average ΔD{sub min}, ΔD{sub max}, ΔD{sub mean}, and ΔV{sub 100%} of planned PTV doses were 38.1% (±30.8%), 3.5% (±5.1%), 3.0% (±2.6%), and 8.8% (±8.0%), respectively. The corresponding values of FDK PTV doses were 5.8% (±4.5%), 1.6% (±1.6%), 2.0% (±0.9%), and 9.3% (±10.5%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.4% (±0.8%), 0.8% (±1.0%), 0.5% (±0.4%), and 0.8% (±0.8%), respectively. For the 4D dose accumulation study, the average (± standard deviation) absolute dose deviation (normalized by local doses) between the accumulated doses and the OSL measured doses was 3.3% (±2.7%). The average gamma index (3%/3 mm) between the accumulated doses and the radiochromic film measured doses was 94.5% (±2.5%). Conclusions: MM-FD estimated 4D-CBCT enables accurate on-board dose calculation and accumulation for lung radiation therapy. It can potentially be valuable for treatment quality assessment and adaptive radiation therapy.
Fidell, Sanford; Tabachnick, Barbara; Mestre, Vincent; Fidell, Linda
2013-11-01
Assessment of aircraft noise-induced sleep disturbance is problematic for several reasons. Current assessment methods are based on sparse evidence and limited understandings; predictions of awakening prevalence rates based on indoor absolute sound exposure levels (SELs) fail to account for appreciable amounts of variance in dosage-response relationships and are not freely generalizable from airport to airport; and predicted awakening rates do not differ significantly from zero over a wide range of SELs. Even in conjunction with additional predictors, such as time of night and assumed individual differences in "sensitivity to awakening," nominally SEL-based predictions of awakening rates remain of limited utility and are easily misapplied and misinterpreted. Probabilities of awakening are more closely related to SELs scaled in units of standard deviates of local distributions of aircraft SELs, than to absolute sound levels. Self-selection of residential populations for tolerance of nighttime noise and habituation to airport noise environments offer more parsimonious and useful explanations for differences in awakening rates at disparate airports than assumed individual differences in sensitivity to awakening.
Darajeh, Negisa; Idris, Azni; Fard Masoumi, Hamid Reza; Nourani, Abolfazl; Truong, Paul; Rezania, Shahabaldin
2017-05-04
Artificial neural networks (ANNs) have been widely used to solve such problems because of their reliable, robust, and salient ability to capture nonlinear relationships between variables in complex systems. In this study, an ANN was applied to model the removal of Chemical Oxygen Demand (COD) and biodegradable organic matter (BOD) from palm oil mill secondary effluent (POMSE) by a vetiver system. The independent variables, namely POMSE concentration, vetiver slip density, and removal time, were taken as input parameters to optimize the network, while the removal percentages of COD and BOD were selected as outputs. To determine the number of hidden-layer nodes, the root mean squared error of the testing set was minimized, and the topologies of the algorithms were compared by coefficient of determination and absolute average deviation. The comparison indicated that the quick propagation (QP) algorithm had the minimum root mean squared error and absolute average deviation and the maximum coefficient of determination. The importance values of the variables were 42.41% for vetiver slip density, 29.8% for removal time, and 27.79% for POMSE concentration, showing that none of them is negligible. The results show that the ANN has great potential for predicting COD and BOD removal from POMSE, with a residual standard error (RSE) of less than 0.45%.
Who's biased? A meta-analysis of buyer-seller differences in the pricing of lotteries.
Yechiam, Eldad; Ashby, Nathaniel J S; Pachur, Thorsten
2017-05-01
A large body of empirical research has examined the impact of trading perspective on pricing of consumer products, with the typical finding being that selling prices exceed buying prices (i.e., the endowment effect). Using a meta-analytic approach, we examine to what extent the endowment effect also emerges in the pricing of monetary lotteries. As monetary lotteries have a clearly defined normative value, we also assess whether one trading perspective is more biased than the other. We consider several indicators of bias: absolute deviation from expected values, rank correlation with expected values, overall variance, and per-unit variance. The meta-analysis, which includes 35 articles, indicates that selling prices considerably exceed buying prices (Cohen's d = 0.58). Importantly, we also find that selling prices deviate less from the lotteries' expected values than buying prices, both in absolute and in relative terms. Selling prices also exhibit lower variance per unit. Hierarchical Bayesian modeling with cumulative prospect theory indicates that buyers have lower probability sensitivity and a more pronounced response bias. The finding that selling prices are more in line with normative standards than buying prices challenges the prominent account whereby sellers' valuations are upward biased due to loss aversion, and supports alternative theoretical accounts.
A Fully Sensorized Cooperative Robotic System for Surgical Interventions
Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.
2012-01-01
In this research a fully sensorized cooperative robot system for manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III especially designed for safe human/robot interaction, an FD-CT robot-driven angiographic C-arm system, and a navigation camera. New control strategies for robot manipulation in the clinical environment are also introduced. A method for fast calibration of the involved components and preliminary accuracy tests of the whole possible error chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and owing to closed-loop control, the absolute positioning accuracy was reduced to that of the navigation camera, which is 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551
Self-mixing instrument for simultaneous distance and speed measurement
NASA Astrophysics Data System (ADS)
Norgia, Michele; Melchionni, Dario; Pesatori, Alessandro
2017-12-01
A novel instrument based on self-mixing interferometry is proposed to simultaneously measure absolute distance and velocity. The measurement method is designed to work directly on any kind of surface in an industrial environment, also overcoming problems due to the speckle-pattern effect. The laser pump current is modulated at a fairly high frequency (40 kHz), and estimation of the induced fringe frequency allows an almost instantaneous measurement (measurement time equal to 25 μs). A real-time digital elaboration processes the measurement data and discards unreliable measurements. The simultaneous measurement reaches a relative standard deviation of about 4·10^-4 in absolute distance and 5·10^-3 in velocity. Three different laser sources are tested and compared. The instrument also shows good performance in harsh environments, for example measuring the movement of an opaque iron tube rotating under a running water flow.
Position of the station Borowiec in the Doppler observation campaign WEDOC 80
NASA Astrophysics Data System (ADS)
Pachelski, W.
The position of the Doppler antenna located at the Borowiec Observatory, Poland, is analyzed based on data gathered during the WEDOC 80 study and an earlier study in 1977. Among other findings, it is determined that biases of the reference system origin can be partially eliminated by transforming absolute coordinates of two or more stations into station-to-station vector components, and by determining the vector length while the system scale remains affected by broadcast ephemerides. The standard deviations of absolute coordinates are shown to represent only the internal accuracy of the solution, and are found to depend on the geometrical configuration between the station position and the satellite passes. It is shown that significant correlations between station coordinates in translocation or multilocation are due to the poor conditioning of design matrices with respect to the origin and orientation of the reference system.
Interactive Visual Least Absolutes Method: Comparison with the Least Squares and the Median Methods
ERIC Educational Resources Information Center
Kim, Myung-Hoon; Kim, Michelle S.
2016-01-01
A visual regression analysis using the least absolutes method (LAB) was developed, utilizing an interactive approach of visually minimizing the sum of the absolute deviations (SAB) using a bar graph in Excel; the results agree very well with those obtained from nonvisual LAB using a numerical Solver in Excel. These LAB results were compared with…
Artes, Paul H; Hutchison, Donna M; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C
2005-07-01
To compare test results from second-generation Frequency-Doubling Technology perimetry (FDT2, Humphrey Matrix; Carl-Zeiss Meditec, Dublin, CA) and standard automated perimetry (SAP) in patients with glaucoma. Specifically, to examine the relationship between visual field sensitivity and test-retest variability and to compare total and pattern deviation probability maps between both techniques. Fifteen patients with glaucoma who had early to moderately advanced visual field loss with SAP (mean MD, -4.0 dB; range, +0.2 to -16.1) were enrolled in the study. Patients attended three sessions. During each session, one eye was examined twice with FDT2 (24-2 threshold test) and twice with SAP (Swedish Interactive Threshold Algorithm [SITA] Standard 24-2 test), in random order. We compared threshold values between FDT2 and SAP at test locations with similar visual field coordinates. Test-retest variability, established in terms of test-retest intervals and standard deviations (SDs), was investigated as a function of visual field sensitivity (estimated by baseline threshold and mean threshold, respectively). The magnitudes of visual field defects apparent in total and pattern deviation probability maps were compared between both techniques by ordinal scoring. The global visual field indices mean deviation (MD) and pattern standard deviation (PSD) of FDT2 and SAP correlated highly (r > 0.8; P < 0.001). At test locations with high sensitivity (>25 dB with SAP), threshold estimates from FDT2 and SAP exhibited a close, linear relationship, with a slope of approximately 2.0. However, at test locations with lower sensitivity, the relationship was much weaker and ceased to be linear. In comparison with FDT2, SAP showed a slightly larger proportion of test locations with absolute defects (3.0% vs. 2.2% with SAP and FDT2, respectively; P < 0.001).
Whereas SAP showed a significant increase in test-retest variability at test locations with lower sensitivity (P < 0.001), there was no relationship between variability and sensitivity with FDT2 (P = 0.46). In comparison with SAP, FDT2 exhibited narrower test-retest intervals at test locations with lower sensitivity (SAP thresholds <25 dB). A comparison of the total and pattern deviation maps between both techniques showed that the total deviation analyses of FDT2 may slightly underestimate the visual field loss apparent with SAP. However, the pattern-deviation maps of both instruments agreed well with each other. The test-retest variability of FDT2 is uniform over the measurement range of the instrument. These properties may provide advantages for the monitoring of patients with glaucoma that should be investigated in longitudinal studies.
Filling the voids in the SRTM elevation model — A TIN-based delta surface approach
NASA Astrophysics Data System (ADS)
Luedeling, Eike; Siebert, Stefan; Buerkert, Andreas
The Digital Elevation Model (DEM) derived from NASA's Shuttle Radar Topography Mission is the most accurate near-global elevation model that is publicly available. However, it contains many data voids, mostly in mountainous terrain. This problem is particularly severe in the rugged Oman Mountains. This study presents a method to fill these voids using a fill surface derived from Russian military maps. For this we developed a new method, which is based on Triangular Irregular Networks (TINs). For each void, we extracted points around the edge of the void from the SRTM DEM and the fill surface. TINs were calculated from these points and converted to a base surface for each dataset. The fill base surface was subtracted from the fill surface, and the result added to the SRTM base surface. The fill surface could then seamlessly be merged with the SRTM DEM. For validation, we compared the resulting DEM to the original SRTM surface, to the fill DEM and to a surface calculated by the International Center for Tropical Agriculture (CIAT) from the SRTM data. We calculated the differences between measured GPS positions and the respective surfaces for 187,500 points throughout the mountain range (ΔGPS). Comparison of the means and standard deviations of these values showed that for the void areas, the fill surface was most accurate, with a standard deviation of the ΔGPS from the mean ΔGPS of 69 m, and only little accuracy was lost by merging it to the SRTM surface (standard deviation of 76 m). The CIAT model was much less accurate in these areas (standard deviation of 128 m). The results show that our method is capable of transferring the relative vertical accuracy of a fill surface to the void areas in the SRTM model, without introducing uncertainties about the absolute elevation of the fill surface. It is well suited for datasets with varying altitude biases, which is a common problem of older topographic information.
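The delta-surface idea can be illustrated in one dimension, with straight-line interpolation between the void edges standing in for the TIN base surfaces. Because only the fill data's relative relief is added, any constant elevation bias in the fill surface cancels out, and the patch meets the SRTM values seamlessly at the void edges. This is a simplified sketch, not the authors' implementation:

```python
def fill_void(srtm, fill):
    # srtm: list of elevations with None marking void cells;
    # fill: co-registered fill-surface elevations (no voids).
    # 1-D analogue: the "base surface" across a void is the straight
    # line between the valid edge values (a TIN does this in 2-D).
    # Assumes voids do not touch the ends of the array.
    out = list(srtm)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1
            lo, hi = i - 1, j  # valid edge cells bounding the void
            for k in range(i, j):
                t = (k - lo) / (hi - lo)
                srtm_base = srtm[lo] + t * (srtm[hi] - srtm[lo])
                fill_base = fill[lo] + t * (fill[hi] - fill[lo])
                # Delta surface: add only the fill data's relative relief,
                # so a constant bias in the fill surface has no effect.
                out[k] = srtm_base + (fill[k] - fill_base)
            i = j
        else:
            i += 1
    return out
```

At the void edges the delta term is zero by construction, which is what makes the merged surface seamless.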
Concistrè, A; Grillo, A; La Torre, G; Carretta, R; Fabris, B; Petramala, L; Marinelli, C; Rebellato, A; Fallo, F; Letizia, C
2018-04-01
Primary hyperparathyroidism is associated with a cluster of cardiovascular manifestations, including hypertension, leading to increased cardiovascular risk. The aim of our study was to investigate the ambulatory blood pressure monitoring-derived short-term blood pressure variability in patients with primary hyperparathyroidism, in comparison with patients with essential hypertension and normotensive controls. Twenty-five patients with primary hyperparathyroidism (7 normotensive, 18 hypertensive) underwent ambulatory blood pressure monitoring at diagnosis, and fifteen of them were re-evaluated after parathyroidectomy. Short-term blood pressure variability was derived from ambulatory blood pressure monitoring and calculated as follows: 1) standard deviation of 24-h, day-time and night-time BP; 2) the average of day-time and night-time standard deviation, weighted for the duration of the day and night periods (24-h "weighted" standard deviation of BP); 3) average real variability, i.e., the average of the absolute differences between all consecutive BP measurements. Baseline data of normotensive and essential hypertension patients were matched for age, sex, BMI and 24-h ambulatory blood pressure monitoring values with normotensive and hypertensive primary hyperparathyroidism patients, respectively. Normotensive primary hyperparathyroidism patients showed a 24-h weighted standard deviation (P < 0.01) and average real variability (P < 0.05) of systolic blood pressure higher than those of 12 normotensive controls. 24-h average real variability of systolic BP, as well as serum calcium and parathyroid hormone levels, were reduced in operated patients (P < 0.001). A positive correlation of serum calcium and parathyroid hormone with 24-h average real variability of systolic BP was observed in the entire primary hyperparathyroidism patient group (P = 0.04 and P = 0.02, respectively).
Systolic blood pressure variability is increased in normotensive patients with primary hyperparathyroidism and is reduced by parathyroidectomy; it may represent an additional cardiovascular risk factor in this disease.
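The three variability indices listed in the abstract are straightforward to compute from an ambulatory recording. A sketch using population standard deviations and assumed day/night durations of 16 h and 8 h (in practice the periods come from each patient's diary):

```python
def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    # Population standard deviation (divide by n).
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def weighted_sd(day, night, day_hours=16, night_hours=8):
    # 24-h "weighted" SD: the day and night SDs averaged, weighted by
    # the duration of each period (durations here are assumptions).
    return (sd(day) * day_hours + sd(night) * night_hours) / (day_hours + night_hours)

def arv(readings):
    # Average real variability: mean absolute difference between
    # consecutive measurements.
    return mean([abs(b - a) for a, b in zip(readings, readings[1:])])
```

Unlike the plain 24-h SD, both the weighted SD and ARV are insensitive to the day-night BP dip, which is why they are preferred as short-term variability measures.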
Hecker, Christoph; Hook, Simon; van der Meijde, Mark; Bakker, Wim; van der Werff, Harald; Wilbrink, Henk; van Ruitenbeek, Frank; de Smeth, Boudewijn; van der Meer, Freek
2011-01-01
In this article we describe a new instrumental setup at the University of Twente Faculty ITC with an optimized processing chain to measure absolute directional-hemispherical reflectance values of typical earth science samples in the 2.5 to 16 μm range. A Bruker Vertex 70 FTIR spectrometer was chosen as the base instrument. It was modified with an external integrating sphere with a 30 mm sampling port to allow measuring large, inhomogeneous samples and quantitatively compare the laboratory results to airborne and spaceborne remote sensing data. During the processing to directional-hemispherical reflectance values, a background radiation subtraction is performed, removing the effect of radiance not reflected from the sample itself on the detector. This provides more accurate reflectance values for low-reflecting samples. Repeat measurements taken over a 20 month period on a quartz sand standard show that the repeatability of the system is very high, with a standard deviation ranging between 0.001 and 0.006 reflectance units depending on wavelength. This high level of repeatability is achieved even after replacing optical components, re-aligning mirrors and placement of sample port reducers. Absolute reflectance values of measurements taken by the instrument here presented compare very favorably to measurements of other leading laboratories taken on identical sample standards. PMID:22346683
Muffly, Matthew K; Chen, Michael I; Claure, Rebecca E; Drover, David R; Efron, Bradley; Fitch, William L; Hammer, Gregory B
2017-10-01
In the perioperative period, anesthesiologists and postanesthesia care unit (PACU) nurses routinely prepare and administer small-volume IV injections, yet the accuracy of delivered medication volumes in this setting has not been described. In this ex vivo study, we sought to characterize the degree to which small-volume injections (≤0.5 mL) deviated from the intended injection volumes among a group of pediatric anesthesiologists and pediatric PACU nurses. We hypothesized that as the intended injection volumes decreased, the deviation from those intended injection volumes would increase. Ten attending pediatric anesthesiologists and 10 pediatric PACU nurses each performed a series of 10 injections into a simulated patient IV setup. Practitioners used separate 1-mL tuberculin syringes with removable 18-gauge needles (Becton-Dickinson & Company, Franklin Lakes, NJ) to aspirate 5 different volumes (0.025, 0.05, 0.1, 0.25, and 0.5 mL) of 0.25 mM Lucifer Yellow (LY) fluorescent dye constituted in saline (Sigma Aldrich, St. Louis, MO) from a rubber-stoppered vial. Each participant then injected the specified volume of LY fluorescent dye via a 3-way stopcock into IV tubing with free-flowing 0.9% sodium chloride (10 mL/min). The injected volume of LY fluorescent dye and 0.9% sodium chloride then drained into a collection vial for laboratory analysis. Microplate fluorescence wavelength detection (Infinite M1000; Tecan, Mannedorf, Switzerland) was used to measure the fluorescence of the collected fluid.
Administered injection volumes were calculated based on the fluorescence of the collected fluid using a calibration curve of known LY volumes and associated fluorescence. To determine whether deviation of the administered volumes from the intended injection volumes increased at lower injection volumes, we compared the proportional injection volume error (loge [administered volume/intended volume]) for each of the 5 injection volumes using a linear regression model. Analysis of variance was used to determine whether the absolute log proportional error differed by the intended injection volume. Interindividual and intraindividual deviation from the intended injection volume was also characterized. As the intended injection volumes decreased, the absolute log proportional injection volume error increased (analysis of variance, P < .0018). The exploratory analysis revealed no significant difference in the standard deviations of the log proportional errors for injection volumes between physicians and pediatric PACU nurses; however, the difference in absolute bias was significantly higher for nurses, with a 2-sided significance of P = .03. Clinically significant dose variation occurs when injecting volumes ≤0.5 mL. Administering small volumes of medications may result in unintended medication administration errors.
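The study's error metric, the log proportional error, is simple to reproduce. A hypothetical sketch (function and variable names are ours, not from the paper):

```python
import math

def log_proportional_error(administered_mL, intended_mL):
    # log_e(administered/intended): 0 for exact delivery; over- and
    # under-delivery by the same factor give errors of equal magnitude,
    # which is why the ratio is analyzed on the log scale.
    return math.log(administered_mL / intended_mL)

def mean_abs_log_error(pairs):
    # Mean absolute log proportional error over a set of
    # (administered, intended) volume pairs at one intended volume.
    errs = [abs(log_proportional_error(a, i)) for a, i in pairs]
    return sum(errs) / len(errs)
```

Delivering twice the intended volume and half the intended volume both give |log error| = log 2, so the metric treats over- and under-dosing symmetrically.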
Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea
2015-08-01
Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
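The local model fit can be sketched as follows: for a fixed decay rate the cosine and sine amplitudes enter linearly, so each candidate frequency in the physiological range needs only a 2×2 least-squares solve, and the frequency with the smallest residual indicates the beat. This is a simplified illustration; the fixed decay rate and the plain grid search are our assumptions, not the published algorithm:

```python
import math

def fit_decaying_cosine(samples, fs, freqs, gamma=1.0):
    # Least-squares fit of A*exp(-gamma*t)*cos(2*pi*f*t + phi) to a short
    # window, scanning candidate frequencies f (Hz). The phase/amplitude
    # are absorbed into two linear basis amplitudes a, b.
    t = [n / fs for n in range(len(samples))]
    best = None
    for f in freqs:
        c = [math.exp(-gamma * ti) * math.cos(2 * math.pi * f * ti) for ti in t]
        s = [math.exp(-gamma * ti) * math.sin(2 * math.pi * f * ti) for ti in t]
        # Solve the 2x2 normal equations for the two basis amplitudes.
        cc = sum(x * x for x in c); ss = sum(x * x for x in s)
        cs = sum(x * y for x, y in zip(c, s))
        cy = sum(x * y for x, y in zip(c, samples))
        sy = sum(x * y for x, y in zip(s, samples))
        det = cc * ss - cs * cs
        if abs(det) < 1e-12:
            continue
        a = (cy * ss - sy * cs) / det
        b = (sy * cc - cy * cs) / det
        res = sum((y - a * ci - b * si) ** 2
                  for y, ci, si in zip(samples, c, s))
        if best is None or res < best[1]:
            best = (f, res)
    return best  # (best frequency in Hz, residual)
```

Restricting `freqs` to the physiological range (roughly 0.7 to 3.5 Hz) is what lets the residual act as a heart-beat detector.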
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drzymala, R; Alvarez, P; Bednarz, G
2015-06-15
Purpose: The purpose of this multi-institutional study was to compare two new gamma stereotactic radiosurgery (GSRS) dosimetry protocols to existing calibration methods. The ultimate goal was to guide AAPM Task Group 178 in recommending a standard GSRS dosimetry protocol. Methods: Nine centers (ten GSRS units) participated in the study. Each institution made eight sets of dose rate measurements: six with two different ionization chambers in three different 160mm-diameter spherical phantoms (ABS plastic, Solid Water and liquid water), and two using the same ionization chambers with a custom in-air positioning jig. Absolute dose rates were calculated using a newly proposed formalism by the IAEA working group for small and non-standard radiation fields and with a new air-kerma based protocol. The new IAEA protocol requires an in-water ionization chamber calibration and uses previously reported Monte-Carlo generated factors to account for the material composition of the phantom, the type of ionization chamber, and the unique GSRS beam configuration. Results obtained with the new dose calibration protocols were compared to dose rates determined by the AAPM TG-21 and TG-51 protocols, with TG-21 considered as the standard. Results: Averaged over all institutions, ionization chambers and phantoms, the mean dose rate determined with the new IAEA protocol relative to that determined with TG-21 in the ABS phantom was 1.000 with a standard deviation of 0.008. For TG-51, the average ratio was 0.991 with a standard deviation of 0.013, and for the new in-air formalism it was 1.008 with a standard deviation of 0.012. Conclusion: Average results with both of the new protocols agreed with TG-21 to within one standard deviation. TG-51, which does not take into account the unique GSRS beam configuration or phantom material, was not expected to perform as well as the new protocols. The new IAEA protocol showed remarkably good agreement with TG-21.
Conflict of Interests: Paula Petti, Josef Novotny, Gennady Neyman and Steve Goetsch are consultants for Elekta Instrument AB; Elekta Instrument AB, PTW Freiburg GmbH, Standard Imaging, Inc., and The Phantom Laboratory, Inc. loaned equipment for use in these experiments; The University of Wisconsin Accredited Dosimetry Calibration Laboratory provided calibration services.
NASA Astrophysics Data System (ADS)
Reda, Ibrahim; Zeng, Jinan; Scheuch, Jonathan; Hanssen, Leonard; Wilthan, Boris; Myers, Daryl; Stoffel, Tom
2012-03-01
This article describes a method of measuring the absolute outdoor longwave irradiance using an absolute cavity pyrgeometer (ACP), U.S. Patent application no. 13/049,275. The ACP consists of a domeless thermopile pyrgeometer, a gold-plated concentrator, a temperature controller, and a data acquisition system. The dome was removed from the pyrgeometer to eliminate errors associated with dome transmittance and the dome correction factor. To avoid thermal convection and wind effect errors resulting from using a domeless thermopile, the gold-plated concentrator was placed above the thermopile. The concentrator is a dual compound parabolic concentrator (CPC) with a 180° view angle to measure the outdoor incoming longwave irradiance from the atmosphere. The incoming irradiance is reflected from the specular gold surface of the CPC and concentrated on the 11 mm diameter of the pyrgeometer's blackened thermopile. The CPC's interior surface design and the resulting cavitation result in a throughput value that was characterized by the National Institute of Standards and Technology. The ACP was installed horizontally outdoors on an aluminum plate connected to the temperature controller to control the pyrgeometer's case temperature. The responsivity of the pyrgeometer's thermopile detector was determined by lowering the case temperature and calculating the rate of change of the thermopile output voltage versus the changing net irradiance. The responsivity is then used to calculate the absolute atmospheric longwave irradiance with an uncertainty estimate (U95) of ±3.96 W m-2 with traceability to the International System of Units, SI. The measured irradiance was compared with the irradiance measured by two pyrgeometers calibrated by the World Radiation Center with traceability to the Interim World Infrared Standard Group, WISG. A total of 408 readings were collected over three different nights.
The irradiance measured by the ACP was 1.5 W m-2 lower than that measured by the two pyrgeometers traceable to the WISG, with a standard deviation of ±0.7 W m-2. These results suggest that the ACP design might be used to help improve the international reference for broadband outdoor longwave irradiance measurements.
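The responsivity step can be sketched numerically: it is the slope of thermopile voltage against net irradiance while the case temperature is swept, and the incoming longwave then follows from a pyrgeometer energy balance. The sketch below is deliberately simplified and the names are ours; the real ACP equation also carries the characterized CPC throughput and additional temperature terms:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def responsivity(voltages_uV, net_irradiances):
    # Least-squares slope of thermopile voltage vs. net irradiance,
    # in uV per W m^-2 (determined while the case temperature is lowered).
    n = len(voltages_uV)
    mx = sum(net_irradiances) / n
    my = sum(voltages_uV) / n
    num = sum((x - mx) * (y - my) for x, y in zip(net_irradiances, voltages_uV))
    den = sum((x - mx) ** 2 for x in net_irradiances)
    return num / den

def longwave_irradiance(v_uV, resp, case_temp_K, throughput=1.0):
    # Simplified energy balance: incoming atmospheric longwave equals
    # the thermopile net signal plus the receiver's own blackbody
    # emission at the controlled case temperature.
    return (v_uV / resp) / throughput + SIGMA * case_temp_K ** 4
```

The key point is that the net thermopile signal alone is not the sky irradiance; the instrument's own thermal emission must be added back.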
Health status convergence at the local level: empirical evidence from Austria
2011-01-01
Introduction Health is an important dimension of welfare comparisons across individuals, regions and states. Particularly from a long-term perspective, within-country convergence of the health status has rarely been investigated by applying methods well established in other scientific fields. In the following paper we study the relation between initial levels of the health status and its improvement at the local community level in Austria in the time period 1969-2004. Methods We use age standardized mortality rates from 2381 Austrian communities as an indicator for the health status and analyze the convergence/divergence of overall mortality for (i) the whole population, (ii) females, (iii) males and (iv) the gender mortality gap. Convergence/Divergence is studied by applying different concepts of cross-regional inequality (weighted standard deviation, coefficient of variation, Theil-Coefficient of inequality). Various econometric techniques (weighted OLS, Quantile Regression, Kendall's Rank Concordance) are used to test for absolute and conditional beta-convergence in mortality. Results Regarding sigma-convergence, we find rather mixed results. While the weighted standard deviation indicates an increase in equality for all four variables, the picture appears less clear when correcting for the decreasing mean in the distribution. However, we find highly significant coefficients for absolute and conditional beta-convergence between the periods. While these results are confirmed by several robustness tests, we also find evidence for the existence of convergence clubs. Conclusions The highly significant beta-convergence across communities might be caused by (i) the efforts to harmonize and centralize the health policy at the federal level in Austria since the 1970s, (ii) the diminishing returns of the input factors in the health production function, which might lead to convergence, as the general conditions (e.g. income, education etc.) 
improve over time, and (iii) the mobility of people across regions, as people tend to move to regions/communities which exhibit more favorable living conditions. JEL classification: I10, I12, I18 PMID:21864364
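Two of the convergence notions used above are easy to state in code: sigma-convergence asks whether cross-community dispersion (e.g. the coefficient of variation) falls over time, while absolute beta-convergence regresses the change in mortality on its initial level and looks for a negative slope. An unweighted sketch (the study additionally uses population weights, covariates and quantile regression):

```python
def beta_convergence(initial, final):
    # Absolute beta-convergence: OLS slope of the change (final - initial)
    # on the initial level; a negative slope means communities with high
    # initial mortality improved faster.
    change = [f - i for i, f in zip(initial, final)]
    n = len(initial)
    mx = sum(initial) / n
    my = sum(change) / n
    num = sum((x - mx) * (y - my) for x, y in zip(initial, change))
    den = sum((x - mx) ** 2 for x in initial)
    return num / den

def coefficient_of_variation(xs):
    # Scale-free dispersion measure used to track sigma-convergence:
    # dividing by the mean corrects for the falling average mortality.
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return sd / m
```

Normalizing by the mean is exactly the correction the abstract refers to: the raw standard deviation can fall simply because mortality levels fall everywhere.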
Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L; Bakhtina, Marina M; Becker, Donald F; Bedwell, Gregory J; Bekdemir, Ahmet; Besong, Tabot M D; Birck, Catherine; Brautigam, Chad A; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B; Chaton, Catherine T; Cölfen, Helmut; Connaghan, Keith D; Crowley, Kimberly A; Curth, Ute; Daviter, Tina; Dean, William L; Díez, Ana I; Ebel, Christine; Eckert, Debra M; Eisele, Leslie E; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A; Fairman, Robert; Finn, Ron M; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E; Cifre, José G Hernández; Herr, Andrew B; Howell, Elizabeth E; Isaac, Richard S; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A; Kwon, Hyewon; Larson, Adam; Laue, Thomas M; Le Roy, Aline; Leech, Andrew P; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R; Ma, Jia; May, Carrie A; Maynard, Ernest L; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K; Park, Jin-Ku; Pawelek, Peter D; Perdue, Erby E; Perkins, Stephen J; Perugini, Matthew A; Peterson, Craig L; Peverelli, Martin G; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E; Raynal, Bertrand D E; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E; Rosenberg, Rose; Rowe, Arthur J; Rufer, Arne C; Scott, David J; Seravalli, Javier G; Solovyova, Alexandra S; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M; Streicher, Werner W; Sumida, John P; Swygert, Sarah G; Szczepanowski, Roman H; Tessmer, Ingrid; Toth, Ronald T; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F W; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E; 
Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M; Schuck, Peter
2015-01-01
Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.
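The effect of the external calibration can be illustrated with a toy example: each instrument's experimental s-value is multiplied by the product of its correction factors, which pulls the cross-instrument spread together. The numbers below are invented for illustration only:

```python
def sd(xs):
    # Population standard deviation.
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def apply_corrections(s_values, factor_sets):
    # Multiply each experimental s-value by the product of its external
    # calibration factors (elapsed time/scan velocity, temperature,
    # radial magnification), one set of factors per instrument.
    corrected = []
    for s, factors in zip(s_values, factor_sets):
        prod = 1.0
        for f in factors:
            prod *= f
        corrected.append(s * prod)
    return corrected
```

Because each instrument's biases are measured against shared reference standards, the corrections are independent per instrument yet shrink the between-instrument standard deviation, mirroring the 6-fold reduction reported for BSA.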
Uncertainty Analysis of Downscaled CMIP5 Precipitation Data for Louisiana, USA
NASA Astrophysics Data System (ADS)
Sumi, S. J.; Tamanna, M.; Chivoiu, B.; Habib, E. H.
2014-12-01
The downscaled CMIP3 and CMIP5 Climate and Hydrology Projections dataset contains fine spatial resolution translations of climate projections over the contiguous United States, developed using two downscaling techniques: monthly Bias Correction Spatial Disaggregation (BCSD) and daily Bias Correction Constructed Analogs (BCCA). The objective of this study is to assess the uncertainty of the CMIP5 downscaled general circulation models (GCMs). We performed an analysis of the daily, monthly, seasonal and annual variability of precipitation downloaded from the Downscaled CMIP3 and CMIP5 Climate and Hydrology Projections website for the state of Louisiana, USA at 0.125° × 0.125° resolution. A dataset of daily gridded precipitation observations over a rectangular boundary covering Louisiana is used to assess the validity of 21 downscaled GCMs for the 1950-1999 period. The following statistics are computed for the 21 models with respect to the observed dataset: the correlation coefficient, the bias, the normalized bias, the mean absolute error (MAE), the mean absolute percentage error (MAPE), and the root mean square error (RMSE). A measure of variability simulated by each model is computed as the ratio of its standard deviation, in both space and time, to the corresponding standard deviation of the observation. The correlation and MAPE statistics are also computed for each of the nine climate divisions of Louisiana. Some of the patterns that we observed are: 1) Average annual precipitation rate shows a similar spatial distribution for all the models, within a range of 3.27 to 4.75 mm/day from Northwest to Southeast. 2) Standard deviation of summer (JJA) precipitation (mm/day) for the models remains lower than that of the observations, whereas they have similar spatial patterns and ranges of values in winter (NDJ). 3) Correlation coefficients of annual precipitation of models against observation have a range of -0.48 to 0.36, with variable spatial distribution by model.
4) Most of the models show negative correlation coefficients in summer and positive in winter. 5) MAE shows similar spatial distribution for all the models within a range of 5.20 to 7.43 mm/day from Northwest to Southeast of Louisiana. 6) Highest values of correlation coefficients are found at seasonal scale within a range of 0.36 to 0.46.
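The comparison statistics named above have standard definitions; a compact sketch for gridded series flattened to 1-D, with MAPE in percent and the study's variability measure as a model-to-observation SD ratio:

```python
def mae(obs, mod):
    # Mean absolute error.
    return sum(abs(m - o) for o, m in zip(obs, mod)) / len(obs)

def mape(obs, mod):
    # Mean absolute percentage error (percent); observations must be nonzero.
    return 100 * sum(abs((m - o) / o) for o, m in zip(obs, mod)) / len(obs)

def rmse(obs, mod):
    # Root mean square error.
    return (sum((m - o) ** 2 for o, m in zip(obs, mod)) / len(obs)) ** 0.5

def variability_ratio(obs, mod):
    # Ratio of model SD to observed SD: < 1 means the model
    # under-represents observed variability (as in JJA precipitation).
    def sd(xs):
        mu = sum(xs) / len(xs)
        return (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5
    return sd(mod) / sd(obs)
```

RMSE penalizes large errors more heavily than MAE, so reporting both separates typical error size from outlier sensitivity.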
Mammographic Density Phenotypes and Risk of Breast Cancer: A Meta-analysis
Graff, Rebecca E.; Ursin, Giske; dos Santos Silva, Isabel; McCormack, Valerie; Baglietto, Laura; Vachon, Celine; Bakker, Marije F.; Giles, Graham G.; Chia, Kee Seng; Czene, Kamila; Eriksson, Louise; Hall, Per; Hartman, Mikael; Warren, Ruth M. L.; Hislop, Greg; Chiarelli, Anna M.; Hopper, John L.; Krishnan, Kavitha; Li, Jingmei; Li, Qing; Pagano, Ian; Rosner, Bernard A.; Wong, Chia Siong; Scott, Christopher; Stone, Jennifer; Maskarinec, Gertraud; Boyd, Norman F.; van Gils, Carla H.
2014-01-01
Background Fibroglandular breast tissue appears dense on mammogram, whereas fat appears nondense. It is unclear whether absolute or percentage dense area more strongly predicts breast cancer risk and whether absolute nondense area is independently associated with risk. Methods We conducted a meta-analysis of 13 case–control studies providing results from logistic regressions for associations between one standard deviation (SD) increments in mammographic density phenotypes and breast cancer risk. We used random-effects models to calculate pooled odds ratios and 95% confidence intervals (CIs). All tests were two-sided with P less than .05 considered to be statistically significant. Results Among premenopausal women (n = 1776 case patients; n = 2834 control subjects), summary odds ratios were 1.37 (95% CI = 1.29 to 1.47) for absolute dense area, 0.78 (95% CI = 0.71 to 0.86) for absolute nondense area, and 1.52 (95% CI = 1.39 to 1.66) for percentage dense area when pooling estimates adjusted for age, body mass index, and parity. Corresponding odds ratios among postmenopausal women (n = 6643 case patients; n = 11187 control subjects) were 1.38 (95% CI = 1.31 to 1.44), 0.79 (95% CI = 0.73 to 0.85), and 1.53 (95% CI = 1.44 to 1.64). After additional adjustment for absolute dense area, associations between absolute nondense area and breast cancer became attenuated or null in several studies and summary odds ratios became 0.82 (95% CI = 0.71 to 0.94; P heterogeneity = .02) for premenopausal and 0.85 (95% CI = 0.75 to 0.96; P heterogeneity < .01) for postmenopausal women. Conclusions The results suggest that percentage dense area is a stronger breast cancer risk factor than absolute dense area. Absolute nondense area was inversely associated with breast cancer risk, but it is unclear whether the association is independent of absolute dense area. PMID:24816206
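Pooling per-SD-increment odds ratios across studies, as described, is typically done with DerSimonian-Laird random-effects weighting on the log scale. A minimal sketch (inputs are per-study log odds ratios and their variances; a normal-approximation 95% CI is used):

```python
import math

def random_effects_pool(log_ors, variances):
    # DerSimonian-Laird random-effects pooling of per-study log odds
    # ratios; returns (pooled OR, 95% CI lower, 95% CI upper).
    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))  # Cochran's Q
    df = len(log_ors) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_re = [1 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se = (1 / sum(w_re)) ** 0.5
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))
```

When the studies are homogeneous (Q ≤ df), the between-study variance estimate is zero and the result collapses to the fixed-effect pooled estimate.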
Absolute color scale for improved diagnostics with wavefront error mapping.
Smolek, Michael K; Klyce, Stephen D
2007-11-01
Wavefront data are expressed in micrometers and referenced to the pupil plane, but current methods to map wavefront error lack standardization. Many use normalized or floating scales that may confuse the user by generating ambiguous, noisy, or varying information. An absolute scale that combines consistent clinical information with statistical relevance is needed for wavefront error mapping. The color contours should correspond better to current corneal topography standards to improve clinical interpretation. Retrospective analysis of wavefront error data. Historic ophthalmic medical records. Topographic modeling system topographical examinations of 120 corneas across 12 categories were used. Corneal wavefront error data in micrometers from each topography map were extracted at 8 Zernike polynomial orders and for 3 pupil diameters expressed in millimeters (3, 5, and 7 mm). Both total aberrations (orders 2 through 8) and higher-order aberrations (orders 3 through 8) were expressed in the form of frequency histograms to determine the working range of the scale across all categories. The standard deviation of the mean error of normal corneas determined the map contour resolution. Map colors were based on corneal topography color standards and on the ability to distinguish adjacent color contours through contrast. Higher-order and total wavefront error contour maps for different corneal conditions. An absolute color scale was produced that encompassed a range of +/-6.5 microm and a contour interval of 0.5 microm. All aberrations in the categorical database were plotted with no loss of clinical information necessary for classification. In the few instances where mapped information was beyond the range of the scale, the type and severity of aberration remained legible. 
When wavefront data are expressed in micrometers, this absolute scale facilitates the determination of the severity of aberrations present compared with a floating scale, particularly for distinguishing normal from abnormal levels of wavefront error. The new color palette makes it easier to identify disorders. The corneal mapping method can be extended to mapping whole eye wavefront errors. When refraction data are expressed in diopters, the previously published corneal topography scale is suggested.
McCormick, Matthew M.; Madsen, Ernest L.; Deaner, Meagan E.; Varghese, Tomy
2011-01-01
Absolute backscatter coefficients in tissue-mimicking phantoms were experimentally determined in the 5–50 MHz frequency range using a broadband technique. A focused broadband transducer from a commercial research system, the VisualSonics Vevo 770, was used with two tissue-mimicking phantoms. The phantoms differed regarding the thin layers covering their surfaces to prevent desiccation and regarding glass bead concentrations and diameter distributions. Ultrasound scanning of these phantoms was performed through the thin layer. To avoid signal saturation, the power spectra obtained from the backscattered radio frequency signals were calibrated by using the signal from a liquid planar reflector, a water-brominated hydrocarbon interface with acoustic impedance close to that of water. Experimental values of absolute backscatter coefficients were compared with those predicted by the Faran scattering model over the frequency range 5–50 MHz. The mean percent difference ± standard deviation was 54% ± 45% for the phantom with a mean glass bead diameter of 5.40 μm and 47% ± 28% for the phantom with 5.16 μm mean diameter beads. PMID:21877789
The impact of water temperature on the measurement of absolute dose
NASA Astrophysics Data System (ADS)
Islam, Naveed Mehdi
To standardize reference dosimetry in radiation therapy, Task Group 51 (TG 51) of the American Association of Physicists in Medicine (AAPM) recommends that dose calibration measurements be made in a water tank at a depth of 10 cm and at a reference geometry. Methodologies are provided for calculating various correction factors to be applied in calculating the absolute dose. However, the protocol does not specify the water temperature to be used. In practice, the temperature of water during dosimetry may vary considerably between independent sessions and different centers. In this work the effect of water temperature on absolute dosimetry has been investigated. The density of water varies with temperature, which in turn may affect the beam attenuation and scatter properties. Furthermore, due to thermal expansion or contraction, the air volume inside the chamber may change. All of these effects can alter the measurement. Dosimetric measurements were made using a Farmer-type ion chamber on a Varian linear accelerator for 6 MV and 23 MV photon energies at temperatures ranging from 10 to 40 °C. Thermal insulation was designed for the water tank in order to maintain a relatively stable temperature over the duration of the experiment. Doses measured at higher temperatures were found to be consistently higher by a very small margin. Although the differences in dose were less than the uncertainty in each measurement, a linear regression of the data suggests that the trend is statistically significant, with p-values of 0.002 and 0.013 for the 6 and 23 MV beams, respectively. For a 10-degree difference in water phantom temperature, which is a realistic deviation across clinics, the final calculated reference dose can differ by 0.24% or more. To address this effect, first a reference temperature (e.g., 22 °C) can be set as the standard; subsequently, a correction factor can be implemented for deviations from this reference. 
Such a correction factor is expected to be of similar magnitude as existing TG 51 recommended correction factors.
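The proposed remedy, a reference temperature plus a correction factor, can be sketched alongside the existing TG-51 air temperature-pressure correction for vented chambers. `p_tp` below is the standard TG-51 factor; `k_water` is purely hypothetical, with its slope an assumption read off the ~0.24% per 10 °C trend reported above:

```python
def p_tp(temp_c, press_kpa, t_ref=22.0, p_ref=101.33):
    """TG-51 air temperature-pressure correction for a vented ion
    chamber (air temperature in deg C, pressure in kPa)."""
    return (273.2 + temp_c) / (273.2 + t_ref) * p_ref / press_kpa

def k_water(t_water_c, t_ref=22.0, slope_per_c=0.00024):
    """Hypothetical water-temperature correction normalizing a reading
    to a reference water temperature. The slope (0.024% per deg C) is
    an assumption based on the trend reported in the abstract; it is
    not a TG-51 factor."""
    return 1.0 / (1.0 + slope_per_c * (t_water_c - t_ref))
```

At reference conditions both factors are exactly 1; readings taken in warmer water are corrected slightly downward, cooler water slightly upward.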
Noto, Nobutaka; Kato, Masataka; Abe, Yuriko; Kamiyama, Hiroshi; Karasawa, Kensuke; Ayusawa, Mamoru; Takahashi, Shori
2015-01-01
Previous studies that used carotid ultrasound have been largely conflicting regarding whether patients after Kawasaki disease (KD) have a greater carotid intima-media thickness (CIMT) than controls. To test the hypothesis that there are significant differences between the values of CIMT expressed as absolute values and standard deviation scores (SDS) in children and adolescents after KD and controls, we reviewed 12 published articles regarding CIMT in KD patients and controls. The mean ± SD of absolute CIMT (mm) in the KD patients and controls obtained from each article was transformed to SDS (CIMT-SDS) using age-specific reference values established by Jourdan et al. (J: n = 247) and our own data (N: n = 175), and the results among these 12 articles were compared between the two groups and the references for comparison of racial disparities. There were no significant differences in mean absolute CIMT and mean CIMT-SDS for J between KD patients and controls (0.46 ± 0.06 mm vs. 0.44 ± 0.04 mm, p = 0.133, and 1.80 ± 0.84 vs. 1.25 ± 0.12, p = 0.159, respectively). However, there were significant differences in mean CIMT-SDS for N between KD patients and controls (0.60 ± 0.71 vs. 0.01 ± 0.65, p = 0.042). When we assessed the nine articles on Asian subjects, the difference of CIMT-SDS between the two groups was invariably significant only for N (p = 0.015). Compared with the reference values, CIMT-SDS of controls was within the normal range at a rate of 41.6 % for J and 91.6 % for N. These results indicate that age- and race-specific reference values for CIMT are mandatory for performing accurate assessment of the vascular status in healthy children and adolescents, particularly in those after KD considered at increased long-term cardiovascular risk.
Reproducibility of visual acuity assessment in normal and low visual acuity.
Becker, Ralph; Teichler, Gunnar; Gräf, Michael
2007-01-01
To assess the reproducibility of measurements of visual acuity in both the upper and lower range of visual acuity. The retroilluminated ETDRS 1 and ETDRS 2 charts (Precision Vision) were used for measurement of visual acuity. Both charts use the same letters. The sequence of the charts followed a pseudorandomized protocol. The examination distance was 4.0 m. When the visual acuity was below 0.16 or 0.03, the examination distance was reduced to 1 m or 0.4 m, respectively, using an appropriate near correction. Visual acuity measurements obtained during the same session with both charts were compared. A total of 100 patients (age 8-90 years; median 60.5) with various eye disorders, including 39 with amblyopia due to strabismus, were tested in addition to 13 healthy volunteers (age 18-33 years; median 24). At least 3 out of 5 optotypes per line had to be correctly identified to pass the line. Wrong answers were monitored. The interpolated logMAR score was calculated. In the patients, the eye with the lower visual acuity was assessed, and in the healthy subjects the right eye. Differences between ETDRS 1 and ETDRS 2 acuity were compared. The mean logMAR values for ETDRS 1 and ETDRS 2 were -0.17 and -0.14 in the healthy eyes and 0.55 and 0.57 in the entire group. The absolute difference between ETDRS 1 and ETDRS 2 was (mean +/- standard deviation) 0.051 +/- 0.04 for the healthy eyes and 0.063 +/- 0.05 in the entire group. In the acuity range below 0.1 (logMAR > 1.0), the absolute difference (mean +/- standard deviation) between ETDRS 1 and ETDRS 2 of 0.072 +/- 0.04 did not significantly exceed the mean absolute difference in healthy eyes (p = 0.17). Regression analysis (|ETDRS 1 - ETDRS 2| vs. ETDRS 1) showed a slight increase of the difference between the two values with lower visual acuity (p = 0.0505; r = 0.18). Assuming correct measurement, the reproducibility of visual acuity measurements in the lower acuity range is not significantly worse than in normals.
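The interpolated logMAR score mentioned above is conventionally computed letter by letter, each of the five letters on an ETDRS line contributing 0.02 logMAR. The sketch below assumes that common scoring rule; the paper's exact interpolation may differ:

```python
def interpolated_logmar(last_full_line_logmar, extra_letters_correct):
    """Letter-by-letter (interpolated) logMAR score: start from the
    logMAR of the last line read in full and credit 0.02 logMAR for
    each additional letter read correctly on smaller lines (ETDRS
    charts have 5 letters per 0.1-logMAR line)."""
    return last_full_line_logmar - 0.02 * extra_letters_correct
```

For example, a patient who completes the 0.3 logMAR line and reads three further letters scores 0.3 - 3 × 0.02 = 0.24.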
Ho, Bernard; Chao, Minh; Zhang, Hong Lei; Watts, Richard; Prince, Martin R
2003-01-01
To evaluate recessed elliptical centric ordering of k-space in renal magnetic resonance (MR) angiography. All imaging was performed on the same 1.5 T MR imaging system (GE Signa CVi) using the body coil for signal transmission and a phased array coil for reception. Gd, 30 ml, was injected manually at 2 ml/sec timed with automatic triggering (SmartPrep). In thirty patients using standard elliptical centric ordering, the scanner paused 8 seconds between detection of the leading edge of the Gd bolus and initiation of scanning beginning with the center of k-space. For the recessed elliptical centric ordering in 20 consecutive patients, this delay was reduced to 4 seconds, but the absolute center of k-space was recessed 4 seconds into the acquisition, such that in all patients the absolute center of k-space was acquired 8 seconds after detecting the leading edge of the bolus. On the arterial phase images, signal-to-noise ratio (SNR) was measured in the aorta, each renal artery and vein, and contrast-to-noise ratio (CNR) was measured relative to subcutaneous fat. The standard deviation of signal outside the patient was considered to be "noise" for calculation of SNR and CNR. Incidence of ringing artifact in the aorta and renal veins was noted. Aorta SNR and CNR were significantly higher with the recessed technique (p = 0.02), and the ratio of renal artery signal to renal vein signal was higher with the recessed technique, 4 ± 2, compared to standard elliptical centric, 3 ± 2 (p = 0.03). Ringing artifact was also reduced with the recessed technique in both the aorta and renal veins. Gadolinium-enhanced renal MR angiography is improved by recessing the absolute center of k-space.
Estimating Accuracy of Land-Cover Composition From Two-Stage Clustering Sampling
Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), ...
NASA Astrophysics Data System (ADS)
Rock, N. M. S.
ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.). (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality. Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data transformations also can be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) also can be generated. 
The mutual consistency or inconsistency of all these measures helps to detect errors in data as well as to assess data-distributions themselves.
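Two of the estimators in ROBUST's repertoire translate directly into code: the median absolute deviation about the median (a robust scale estimate) and Geary's ratio (mean deviation/standard deviation, which tends to √(2/π) ≈ 0.7979 for normal data). A minimal Python sketch, not the original program:

```python
import statistics as st

def mad_from_median(xs):
    """Median absolute deviation about the median: a robust scale
    estimate largely insensitive to a few extreme values."""
    m = st.median(xs)
    return st.median([abs(x - m) for x in xs])

def geary_ratio(xs):
    """Geary's normality statistic: mean absolute deviation about the
    mean divided by the (population) standard deviation; approaches
    sqrt(2/pi) ~ 0.7979 for normally distributed data."""
    mean = st.mean(xs)
    mean_abs_dev = sum(abs(x - mean) for x in xs) / len(xs)
    return mean_abs_dev / st.pstdev(xs)
```

Note how `mad_from_median([1, 2, 3, 4, 100])` returns 1: the outlier 100 barely moves the estimate, whereas it would dominate the standard deviation.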
Yang, Lijun; Wu, Xuejian; Wei, Haoyun; Li, Yan
2017-04-10
The absolute group refractive index of air at 194061.02 GHz is measured in real time using frequency-sweeping interferometry calibrated by an optical frequency comb. The group refractive index of air is calculated from the calibration peaks of the laser frequency variation and the interference signal of the two beams passing through the inner and outer regions of a vacuum cell when the frequency of a tunable external cavity diode laser is scanned. We continuously measure the refractive index of air for 2 h, which shows that the difference between measured results and Ciddor's equation is less than 9.6×10-8, and the standard deviation of that difference is 5.9×10-8. The relative uncertainty of the measured refractive index of air is estimated to be 8.6×10-8. The data update rate is 0.2 Hz, making it applicable under conditions in which air refractive index fluctuates fast.
Yuan, Peng; Mai, Huaming; Li, Jianfu; Ho, Dennis Chun-Yu; Lai, Yingying; Liu, Siting; Kim, Daeseung; Xiong, Zixiang; Alfi, David M; Teichgraeber, John F; Gateno, Jaime; Xia, James J
2017-12-01
There are many proven problems associated with traditional surgical planning methods for orthognathic surgery. To address these problems, we developed a computer-aided surgical simulation (CASS) system, the AnatomicAligner, to plan orthognathic surgery following our streamlined clinical protocol. The system includes six modules: image segmentation and three-dimensional (3D) reconstruction, registration and reorientation of models to neutral head posture, 3D cephalometric analysis, virtual osteotomy, surgical simulation, and surgical splint generation. The accuracy of the system was validated in a stepwise fashion: first to evaluate the accuracy of AnatomicAligner using 30 sets of patient data, then to evaluate the fitting of splints generated by AnatomicAligner using 10 sets of patient data. The industrial gold standard system, Mimics, was used as the reference. When comparing the results of segmentation, virtual osteotomy and transformation achieved with AnatomicAligner to the ones achieved with Mimics, the absolute deviation between the two systems was clinically insignificant. The average surface deviation between the two models after 3D model reconstruction in AnatomicAligner and Mimics was 0.3 mm with a standard deviation (SD) of 0.03 mm. All the average surface deviations between the two models after virtual osteotomy and transformations were smaller than 0.01 mm with a SD of 0.01 mm. In addition, the fitting of splints generated by AnatomicAligner was at least as good as the ones generated by Mimics. We successfully developed a CASS system, the AnatomicAligner, for planning orthognathic surgery following the streamlined planning protocol. The system has been proven accurate. AnatomicAligner will soon be available freely to the broader clinical and research communities.
NASA Astrophysics Data System (ADS)
Wang, C.; Gordon, R. G.; Zheng, L.
2016-12-01
Hotspot tracks are widely used to estimate the absolute velocities of plates, i.e., relative to the lower mantle. Knowledge of current motion between hotspots is important for both plate kinematics and mantle dynamics and informs the discussion on the origin of the Hawaiian-Emperor Bend. Following Morgan & Morgan (2007), we focus only on the trends of young hotspot tracks and omit volcanic propagation rates. The dispersion of the trends can be partitioned into between-plate and within-plate dispersion. Applying the method of Gripp & Gordon (2002) to the hotspot trend data set of Morgan & Morgan (2007), constrained to the MORVEL relative plate angular velocities (DeMets et al., 2010), results in a standard deviation of the 56 hotspot trends of 22°. The largest angular misfits tend to occur on the slowest moving plates. Alternatively, estimation of best-fitting poles to hotspot tracks on the nine individual plates results in a standard deviation of trends of only 13°, a statistically significant reduction from the introduction of 15 additional adjustable parameters. If all of the between-plate misfit is due to motion of groups of hotspots (beneath different plates), nominal velocities relative to the mean hotspot reference frame range from 1 to 4 mm/yr, with the lower bounds ranging from 1 to 3 mm/yr and the greatest upper bound being 8 mm/yr. These are consistent with bounds on motion between Pacific and Indo-Atlantic hotspots over the past ≈50 Ma, which range from zero (lower bound) to 8 to 13 mm/yr (upper bounds) (Koivisto et al., 2014). We also determine HS4-MORVEL, a new global set of plate angular velocities relative to the hotspots constrained to consistency with the MORVEL relative plate angular velocities, using a two-tier analysis similar to that used by Zheng et al. (2014) to estimate the SKS-MORVEL global set of absolute plate velocities fit to the orientation of seismic anisotropy. 
We find that the 95% confidence limits of HS4-MORVEL and SKS-MORVEL overlap substantially and that the two sets of angular velocities differ insignificantly. Thus we combine the two sets of angular velocities to estimate ABS-MORVEL, an optimal set of global angular velocities consistent with both hotspot tracks and seismic anisotropy. ABS-MORVEL has more compact confidence limits than either SKS-MORVEL or HS4-MORVEL.
NASA Astrophysics Data System (ADS)
Buchholz, Bernhard; Ebert, Volker
2018-01-01
Highly accurate water vapor measurements are indispensable for understanding a variety of scientific questions as well as industrial processes. While in metrology water vapor concentrations can be defined, generated, and measured with relative uncertainties in the single-percent range, field-deployable airborne instruments deviate by up to 10-20% even under quasi-static laboratory conditions. The novel SEALDH-II hygrometer, a calibration-free, tuneable diode laser spectrometer, bridges this gap by implementing a new holistic concept to achieve higher accuracy levels in the field. We present in this paper the absolute validation of SEALDH-II at a traceable humidity generator during 23 days of permanent operation at 15 different H2O mole fraction levels between 5 and 1200 ppmv. At each mole fraction level, we studied the pressure dependence at six different gas pressures between 65 and 950 hPa. Further, we describe the setup for this metrological validation, the challenges to overcome when assessing water vapor measurements at a high accuracy level, and the comparison results. With this validation, SEALDH-II is the first airborne, metrologically validated humidity transfer standard which links several scientific airborne and laboratory measurement campaigns to the international metrological water vapor scale.
NASA Astrophysics Data System (ADS)
Francis, Olivier; Baumann, Henri; Volarik, Tomas; Rothleitner, Christian; Klein, Gilbert; Seil, Marc; Dando, Nicolas; Tracey, Ray; Ullrich, Christian; Castelein, Stefaan; Hua, Hu; Kang, Wu; Chongyang, Shen; Songbo, Xuan; Hongbo, Tan; Zhengyuan, Li; Pálinkás, Vojtech; Kostelecký, Jakub; Mäkinen, Jaakko; Näränen, Jyri; Merlet, Sébastien; Farah, Tristan; Guerlin, Christine; Pereira Dos Santos, Franck; Le Moigne, Nicolas; Champollion, Cédric; Deville, Sabrina; Timmen, Ludger; Falk, Reinhard; Wilmes, Herbert; Iacovone, Domenico; Baccaro, Francesco; Germak, Alessandro; Biolcati, Emanuele; Krynski, Jan; Sekowski, Marcin; Olszak, Tomasz; Pachuta, Andrzej; Agren, Jonas; Engfeldt, Andreas; Reudink, René; Inacio, Pedro; McLaughlin, Daniel; Shannon, Geoff; Eckl, Marc; Wilkins, Tim; van Westrum, Derek; Billson, Ryan
2013-06-01
We present the results of the third European Comparison of Absolute Gravimeters held in Walferdange, Grand Duchy of Luxembourg, in November 2011. Twenty-two gravimeters from both metrological and non-metrological institutes are compared. For the first time, corrections for the laser beam diffraction and the self-attraction of the gravimeters are implemented. The gravity observations are also corrected for geophysical gravity changes that occurred during the comparison using the observations of a superconducting gravimeter. We show that these corrections improve the degree of equivalence between the gravimeters. We present the results for two different combinations of data. In the first one, we use only the observations from the metrological institutes. In the second solution, we include all the data from both metrological and non-metrological institutes. Those solutions are then compared with the official result of the comparison published previously and based on the observations of the metrological institutes and the gravity differences at the different sites as measured by non-metrological institutes. Overall, the absolute gravity meters agree with one another with a standard deviation of 3.1 µGal. Finally, the results of this comparison are linked to previous ones. We conclude with some important recommendations for future comparisons.
[Gas chromatography in quantitative analysis of hydrocyanic acid and its salts in cadaveric blood].
Iablochkin, V D
2003-01-01
A direct gas chromatography method was designed for the quantitative determination of cyanides (prussic acid) in cadaveric blood. Its sensitivity is 0.05 mg/ml. The routine volatile products, including substances that emerge during putrefaction of organic matter, do not affect the accuracy and reproducibility of the method; the exception is n-propanol, which was used as the internal standard. The method was used in forensic chemistry casework related to acute cyanide poisoning (suicide) as well as to poisoning by combustion products of non-metals (foam rubber). The absolute error does not exceed 10%, with a root-mean-square deviation of 0.0029-0.0033 mg.
Resistance Training Increases the Variability of Strength Test Scores
2009-06-08
standard deviations for pretest and posttest strength measurements. This information was recorded for every strength test used in a total of 377 samples...significant if the posttest standard deviation consistently was larger than the pretest standard deviation. This condition could be satisfied even if...the difference in the standard deviations was small. For example, the posttest standard deviation might be 1% larger than the pretest standard
NASA Astrophysics Data System (ADS)
Ramanjaneyulu, P. S.; Sayi, Y. S.; Ramakumar, K. L.
2008-08-01
Quantification of boron in diverse materials of relevance in nuclear technology is essential in view of its high thermal neutron absorption cross section. A simple and sensitive method has been developed for the determination of boron in uranium-aluminum-silicon alloy, based on leaching of boron with 6 M HCl and H2O2, its selective separation by solvent extraction with 2-ethylhexane-1,3-diol, and quantification by spectrophotometry using curcumin. The method has been evaluated by the standard addition method and validated by inductively coupled plasma-atomic emission spectroscopy. The relative standard deviation and absolute detection limit of the method are 3.0% (at the 1σ level) and 12 ng, respectively. All possible sources of uncertainty in the methodology have been individually assessed, following the International Organization for Standardization guidelines. The combined uncertainty is calculated employing uncertainty propagation formulae. The expanded uncertainty in the measurement at the 95% confidence level (coverage factor 2) is 8.840%.
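For uncorrelated input quantities, the ISO-guideline uncertainty budget described above reduces to a root-sum-of-squares combination, expanded by a coverage factor. A minimal sketch (component values in the test are illustrative, not the paper's budget):

```python
import math

def combined_uncertainty(components):
    """Combine independent standard uncertainty components in
    quadrature (GUM law of propagation, uncorrelated inputs)."""
    return math.sqrt(sum(u * u for u in components))

def expanded_uncertainty(components, k=2):
    """Expanded uncertainty at ~95% confidence with coverage factor k,
    as used in the abstract (k = 2)."""
    return k * combined_uncertainty(components)
```

For example, components of 3 and 4 (in the same relative units) combine to 5, and expand to 10 at coverage factor 2.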
Quan, Hui; Zhang, Ji
2003-09-15
Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference in the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline in the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
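The central estimate in this setting follows from the lognormal identity: if X has raw-scale mean m and standard deviation s, then SD(ln X) = √(ln(1 + s²/m²)). A sketch of that point estimate only; the paper's confidence intervals and change-from-baseline estimators are not reproduced here:

```python
import math

def log_sd_from_raw(mean, sd):
    """Estimate the SD of ln(X) from the raw-scale mean and SD,
    assuming X is lognormally distributed:
        sigma_ln = sqrt(ln(1 + (sd/mean)**2))."""
    cv2 = (sd / mean) ** 2            # squared coefficient of variation
    return math.sqrt(math.log1p(cv2))
```

This lets a power calculation on the log scale proceed from the arithmetic mean and SD that a literature search typically yields.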
Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip
2015-01-01
In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777
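The per-instrument correction function described above has the effect of adding a distance-dependent calibration correction to each raw reading. A minimal sketch using linear interpolation between calibration points; the table values are illustrative, not from the paper:

```python
def corrected_distance(raw_m, calib):
    """Apply an instrument-specific additive correction (metres),
    linearly interpolated from a calibration table of the form
    {distance_m: correction_m}; values outside the table are clamped
    to the nearest calibrated correction."""
    pts = sorted(calib.items())
    if raw_m <= pts[0][0]:
        return raw_m + pts[0][1]
    if raw_m >= pts[-1][0]:
        return raw_m + pts[-1][1]
    for (d0, c0), (d1, c1) in zip(pts, pts[1:]):
        if d0 <= raw_m <= d1:
            t = (raw_m - d0) / (d1 - d0)
            return raw_m + c0 + t * (c1 - c0)
```

Because the dominant error is systematic, a table like this, built once per instrument from calibration measurements, can halve the effective standard deviation where repetition alone cannot.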
Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte; Astrup, Thomas Fruergaard
2017-11-01
Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data represent closed data, and their interpretation and analysis require statistical methods other than classical statistics, which are suitable only for unconstrained data such as absolute values. However, the closed characteristics of waste composition data are often ignored when analysed. The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21±3.12% with a confidence interval of (-4.03; 8.45), which highlights the problem of biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing means, standard deviations and correlation coefficients. Copyright © 2017 Elsevier Ltd. All rights reserved.
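The transformation called for above is typically one of Aitchison's log-ratio transforms; the centred log-ratio (clr) is the simplest to sketch. Assumed here: strictly positive parts (zero fractions need separate treatment):

```python
import math

def clr(parts):
    """Centred log-ratio transform of a composition: the log of each
    part divided by the geometric mean of all parts. The result lives
    in unconstrained real space, where means, SDs and correlations
    behave classically; clr coordinates always sum to zero."""
    log_gm = sum(math.log(p) for p in parts) / len(parts)
    return [math.log(p) - log_gm for p in parts]
```

Computing correlations on clr coordinates rather than raw percentages avoids the spurious negative associations induced by the constant-sum constraint.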
Cardiopulmonary fitness and muscle strength in patients with osteogenesis imperfecta type I.
Takken, Tim; Terlingen, Heike C; Helders, Paul J M; Pruijs, Hans; Van der Ent, Cornelis K; Engelbert, Raoul H H
2004-12-01
To evaluate cardiopulmonary function, muscle strength, and cardiopulmonary fitness (VO2peak) in patients with osteogenesis imperfecta (OI). In 17 patients with OI type I (mean age 13.3 ± 3.9 years), cardiopulmonary function was assessed at rest using spirometry, plethysmography, electrocardiography, and echocardiography. Exercise capacity was measured using a maximal exercise test on a bicycle ergometer and an expired gas analysis system. Muscle strength in the shoulder abductors, hip flexors, and ankle dorsal flexors, as well as grip strength, was measured. All results were compared with reference values. Cardiopulmonary function at rest was within normal ranges, but when compared with normal height for age and sex, vital capacities were reduced. Mean absolute and relative VO2peak were -1.17 (± 0.67) and -1.41 (± 1.52) standard deviations lower, respectively, compared with reference values (P < .01). Muscle strength was also significantly reduced in patients with OI, ranging from -1.24 ± 1.40 to -2.88 ± 2.67 standard deviations lower compared with reference values. In patients with OI type I, no pulmonary or cardiac abnormalities at rest were found. Exercise tolerance and muscle strength were significantly reduced in patients with OI, which might account for their increased levels of fatigue during activities of daily living.
Doblas, Sabrina; Almeida, Gilberto S; Blé, François-Xavier; Garteiser, Philippe; Hoff, Benjamin A; McIntyre, Dominick J O; Wachsmuth, Lydia; Chenevert, Thomas L; Faber, Cornelius; Griffiths, John R; Jacobs, Andreas H; Morris, David M; O'Connor, James P B; Robinson, Simon P; Van Beers, Bernard E; Waterton, John C
2015-12-01
To evaluate between-site agreement of apparent diffusion coefficient (ADC) measurements in preclinical magnetic resonance imaging (MRI) systems. A miniaturized thermally stable ice-water phantom was devised. ADC (mean and interquartile range) was measured over several days, on 4.7T, 7T, and 9.4T Bruker, Agilent, and Magnex small-animal MRI systems using a common protocol across seven sites. Day-to-day repeatability was expressed as percent variation of mean ADC between acquisitions. Cross-site reproducibility was expressed as 1.96 × standard deviation of percent deviation of ADC values. ADC measurements were equivalent across all seven sites with a cross-site ADC reproducibility of 6.3%. Mean day-to-day repeatability of ADC measurements was 2.3%, and no site was identified as presenting different measurements than others (analysis of variance [ANOVA] P = 0.02, post-hoc test n.s.). Between-slice ADC variability was negligible and similar between sites (P = 0.15). Mean within-region-of-interest ADC variability was 5.5%, with one site presenting a significantly greater variation than the others (P = 0.0013). Absolute ADC values in preclinical studies are comparable between sites and equipment, provided standardized protocols are employed. © 2015 Wiley Periodicals, Inc.
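A minimal sketch of the reproducibility statistic as defined above (1.96 × the standard deviation of the percent deviations), using hypothetical per-site ADC values and an assumed literature reference value for water at 0 °C:

```python
from statistics import stdev

# Hypothetical per-site mean ADC of the ice-water phantom (x10^-3 mm^2/s);
# 1.099 is taken here as the reference value for water at 0 degrees C.
site_adc = [1.095, 1.104, 1.089, 1.112, 1.101, 1.097, 1.108]
reference = 1.099

pct_dev = [100.0 * (v - reference) / reference for v in site_adc]
reproducibility = 1.96 * stdev(pct_dev)   # cross-site reproducibility, percent
print(round(reproducibility, 2))
```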
7 CFR 400.204 - Notification of deviation from standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 6 2010-01-01 2010-01-01 false Notification of deviation from standards. 400.204... Contract-Standards for Approval § 400.204 Notification of deviation from standards. A Contractor shall advise the Corporation immediately if the Contractor deviates from the requirements of these standards...
NASA Astrophysics Data System (ADS)
van den Heuvel, Thomas L. A.; Petros, Hezkiel; Santini, Stefano; de Korte, Chris L.; van Ginneken, Bram
2017-03-01
Worldwide, 99% of all maternal deaths occur in low-resource countries. Ultrasound imaging can be used to detect maternal risk factors, but requires a well-trained sonographer to obtain the biometric parameters of the fetus. One of the most important biometric parameters is the fetal Head Circumference (HC). The HC can be used to estimate the Gestational Age (GA) and assess the growth of the fetus. In this paper we propose a method to estimate the fetal HC with the use of the Obstetric Sweep Protocol (OSP). With the OSP the abdomen of pregnant women is imaged with the use of sweeps. These sweeps can be taught to somebody without any prior knowledge of ultrasound within a day. Both the OSP and the standard two-dimensional ultrasound image for HC assessment were acquired by an experienced gynecologist from fifty pregnant women in St. Luke's Hospital in Wolisso, Ethiopia. The reference HC from the standard two-dimensional ultrasound image was compared to both the manually measured HC and the automatically measured HC from the OSP data. The median difference between the estimated GA from the manual measured HC using the OSP and the reference standard was -1.1 days (Median Absolute Deviation (MAD) 7.7 days). The median difference between the estimated GA from the automatically measured HC using the OSP and the reference standard was -6.2 days (MAD 8.6 days). Therefore, it can be concluded that it is possible to estimate the fetal GA with simple obstetric sweeps with a deviation of only one week.
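The median difference and Median Absolute Deviation reported above can be computed as follows (the GA differences shown are invented, not the study's data):

```python
from statistics import median

def median_and_mad(diffs):
    # Median difference and Median Absolute Deviation (MAD) of per-case
    # differences between estimated and reference gestational age, in days.
    med = median(diffs)
    mad = median(abs(d - med) for d in diffs)
    return med, mad

# Hypothetical GA differences (days) for five cases
med, mad = median_and_mad([-3, -1, 0, 2, 5])
print(med, mad)   # 0 2
```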
NASA Astrophysics Data System (ADS)
Olesen, M.; Christensen, J. H.; Boberg, F.
2016-12-01
Climate change indices for Greenland applied directly to other arctic regions - enhanced climate information from one high-resolution RCM downscaling for Greenland, evaluated through pattern scaling and CMIP5.
Climate change affects the Greenlandic society both advantageously and disadvantageously. Changes in temperature and precipitation patterns may result in changes in a number of derived society-related climate indices, such as the length of the growing season or the number of annual dry days, or a combination of the two - indices of substantial importance to society in a climate adaptation context. Detailed climate indices require high-resolution downscaling. We have carried out a very high resolution (5 km) simulation with the regional climate model HIRHAM5, forced by the global model EC-Earth. Evaluation of RCM output is usually done with an ensemble of output downscaled with multiple RCMs and GCMs. Here we have introduced and tested a new technique: a translation of the robustness of an ensemble of GCM models from CMIP5 into the specific index from the HIRHAM5 downscaling, through a correlation between absolute temperatures and the corresponding index values from the HIRHAM5 output. The procedure is conducted in two steps. First, the correlation between temperature and a given index in the HIRHAM5 simulation is identified by a best fit to a second-order polynomial. Second, the standard deviation from the CMIP5 simulations is introduced to show the corresponding standard deviation of the index from the HIRHAM5 run. The change of specific climate indices due to global warming can then be evaluated elsewhere from the change in absolute temperature. Results based on selected indices, with focus on the future climate in Greenland calculated for the RCP4.5 and RCP8.5 scenarios, will be presented.
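The two-step translation described above can be sketched as follows; the polynomial coefficients and numbers are hypothetical stand-ins for the HIRHAM5 best fit, not values from the study:

```python
def index_from_temperature(t, coeffs=(120.0, 9.0, 0.4)):
    # Step 1: a climate index (e.g. growing-season length in days) fitted
    # as a second-order polynomial of mean temperature:
    # index = c0 + c1*T + c2*T**2. Coefficients are hypothetical.
    c0, c1, c2 = coeffs
    return c0 + c1 * t + c2 * t * t

def index_band(t_mean, t_sd):
    # Step 2: translate a CMIP5 temperature spread (mean +/- one standard
    # deviation) into the corresponding spread of the index.
    return (index_from_temperature(t_mean - t_sd),
            index_from_temperature(t_mean + t_sd))

print(index_band(0.0, 1.0))
```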
Predicting Accommodative Response Using Paraxial Schematic Eye Models
Ramasubramanian, Viswanathan; Glasser, Adrian
2016-01-01
Purpose Prior ultrasound biomicroscopy (UBM) studies showed that accommodative optical response (AOR) can be predicted from accommodative biometric changes in a young and a pre-presbyopic population from linear relationships between accommodative optical and biometric changes, with a standard deviation of less than 0.55D. Here, paraxial schematic eyes (SE) were constructed from measured accommodative ocular biometry parameters to see if predictions are improved. Methods Measured ocular biometry (OCT, A-scan and UBM) parameters from 24 young and 24 pre-presbyopic subjects were used to construct paraxial SEs for each individual subject (individual SEs) for three different lens equivalent refractive index methods. Refraction and AOR calculated from the individual SEs were compared with Grand Seiko (GS) autorefractor measured refraction and AOR. Refraction and AOR were also calculated from individual SEs constructed using the average population accommodative change in UBM measured parameters (average SEs). Results Schematic eye calculated and GS measured AOR were linearly related (young subjects: slope = 0.77; r2 = 0.86; pre-presbyopic subjects: slope = 0.64; r2 = 0.55). The mean difference in AOR (GS - individual SEs) for the young subjects was −0.27D and for the pre-presbyopic subjects was 0.33D. For individual SEs, the mean ± SD of the absolute differences in AOR between the GS and SEs was 0.50 ± 0.39D for the young subjects and 0.50 ± 0.37D for the pre-presbyopic subjects. For average SEs, the mean ± SD of the absolute differences in AOR between the GS and the SEs was 0.77 ± 0.88D for the young subjects and 0.51 ± 0.49D for the pre-presbyopic subjects. Conclusions Individual paraxial SEs predict AOR, on average, with a standard deviation of 0.50D in young and pre-presbyopic subject populations. Although this prediction is only marginally better than from individual linear regressions, it does consider all the ocular biometric parameters. PMID:27092928
The Standard Deviation of Launch Vehicle Environments
NASA Technical Reports Server (NTRS)
Yunis, Isam
2005-01-01
Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3dB is a conservative and reasonable standard deviation for the source environment and the payload environment.
Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun
2017-01-01
In health services and outcomes research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), the smoothly clipped absolute deviation (SCAD) and the minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties, including standard error formulae, are provided. A simulation study shows that the new algorithm not only gives more accurate, or at least comparable, estimation but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. PMID:26059498
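Of the three penalties, SCAD has a simple closed form (due to Fan and Li) that can be written down directly; a minimal sketch with the conventional default a = 3.7:

```python
def scad_penalty(theta, lam, a=3.7):
    # Smoothly clipped absolute deviation (SCAD) penalty: linear
    # (LASSO-like) near zero, quadratic taper in the middle, constant for
    # large coefficients, so large effects are not shrunk.
    t = abs(theta)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2

print(scad_penalty(0.5, 1.0), scad_penalty(10.0, 1.0))
```

Unlike the LASSO penalty, which grows without bound, SCAD flattens out beyond a·λ, which is why it reduces estimation bias for large coefficients.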
Control of the interaction strength of photonic molecules by nanometer precise 3D fabrication.
Rawlings, Colin D; Zientek, Michal; Spieser, Martin; Urbonas, Darius; Stöferle, Thilo; Mahrt, Rainer F; Lisunova, Yuliya; Brugger, Juergen; Duerig, Urs; Knoll, Armin W
2017-11-28
Applications for high resolution 3D profiles, so-called grayscale lithography, exist in diverse fields such as optics, nanofluidics and tribology. All of them require the fabrication of patterns with reliable absolute patterning depth independent of the substrate location and target materials. Here we present a complete patterning and pattern-transfer solution based on thermal scanning probe lithography (t-SPL) and dry etching. We demonstrate the fabrication of 3D profiles in silicon and silicon oxide with nanometer scale accuracy of absolute depth levels. An accuracy of less than 1 nm standard deviation in t-SPL is achieved by providing an accurate physical model of the writing process to a model-based implementation of a closed-loop lithography process. For transferring the pattern to a target substrate, we optimized the etch process and demonstrate linear amplification of grayscale patterns into silicon and silicon oxide with amplification ratios of ∼6 and ∼1, respectively. The performance of the entire process is demonstrated by manufacturing photonic molecules of desired interaction strength. Excellent agreement of fabricated and simulated structures has been achieved.
Taghizadeh, Somayeh; Yang, Claus Chunli; R. Kanakamedala, Madhava; Morris, Bart; Vijayakumar, Srinivasan
2017-01-01
Purpose Magnetic resonance (MR) images are necessary for accurate contouring of intracranial targets, determination of gross target volume and evaluation of organs at risk during stereotactic radiosurgery (SRS) treatment planning procedures. Many centers use magnetic resonance imaging (MRI) simulators or regular diagnostic MRI machines for SRS treatment planning; while both types of machine require two stages of quality control (QC), both machine- and patient-specific, before use for SRS, no accepted guidelines for such QC currently exist. This article describes appropriate machine-specific QC procedures for SRS applications. Methods and materials We describe the adaptation of American College of Radiology (ACR)-recommended QC tests using an ACR MRI phantom for SRS treatment planning. In addition, commercial Quasar MRID3D and Quasar GRID3D phantoms were used to evaluate the effects of static magnetic field (B0) inhomogeneity, gradient nonlinearity, and a Leksell G frame (SRS frame) and its accessories on geometrical distortion in MR images. Results QC procedures found distortions in the X-direction (Maximum = 3.5 mm, Mean = 0.91 mm, Standard deviation = 0.67 mm, >2.5 mm (%) = 2), in the Y-direction (Maximum = 2.51 mm, Mean = 0.52 mm, Standard deviation = 0.39 mm, >2.5 mm (%) = 0) and in the Z-direction (Maximum = 13.1 mm, Mean = 2.38 mm, Standard deviation = 2.45 mm, >2.5 mm (%) = 34), with < 1 mm distortion at a head-sized region of interest. MR images acquired using a Leksell G frame and localization devices showed a mean absolute deviation of 2.3 mm from isocenter. The results of modified ACR tests were all within recommended limits, and baseline measurements have been defined for regular weekly QC tests. Conclusions With appropriate QC procedures in place, it is possible to routinely obtain clinically useful MR images suitable for SRS treatment planning purposes.
MRI examination for SRS planning can benefit from the improved localization and planning possible with the superior image quality and soft tissue contrast achieved under optimal conditions. PMID:29487771
Fatemi, Ali; Taghizadeh, Somayeh; Yang, Claus Chunli; R Kanakamedala, Madhava; Morris, Bart; Vijayakumar, Srinivasan
2017-12-18
Purpose Magnetic resonance (MR) images are necessary for accurate contouring of intracranial targets, determination of gross target volume and evaluation of organs at risk during stereotactic radiosurgery (SRS) treatment planning procedures. Many centers use magnetic resonance imaging (MRI) simulators or regular diagnostic MRI machines for SRS treatment planning; while both types of machine require two stages of quality control (QC), both machine- and patient-specific, before use for SRS, no accepted guidelines for such QC currently exist. This article describes appropriate machine-specific QC procedures for SRS applications. Methods and materials We describe the adaptation of American College of Radiology (ACR)-recommended QC tests using an ACR MRI phantom for SRS treatment planning. In addition, commercial Quasar MRID3D and Quasar GRID3D phantoms were used to evaluate the effects of static magnetic field (B0) inhomogeneity, gradient nonlinearity, and a Leksell G frame (SRS frame) and its accessories on geometrical distortion in MR images. Results QC procedures found distortions in the X-direction (Maximum = 3.5 mm, Mean = 0.91 mm, Standard deviation = 0.67 mm, >2.5 mm (%) = 2), in the Y-direction (Maximum = 2.51 mm, Mean = 0.52 mm, Standard deviation = 0.39 mm, >2.5 mm (%) = 0) and in the Z-direction (Maximum = 13.1 mm, Mean = 2.38 mm, Standard deviation = 2.45 mm, >2.5 mm (%) = 34), with < 1 mm distortion at a head-sized region of interest. MR images acquired using a Leksell G frame and localization devices showed a mean absolute deviation of 2.3 mm from isocenter. The results of modified ACR tests were all within recommended limits, and baseline measurements have been defined for regular weekly QC tests. Conclusions With appropriate QC procedures in place, it is possible to routinely obtain clinically useful MR images suitable for SRS treatment planning purposes.
MRI examination for SRS planning can benefit from the improved localization and planning possible with the superior image quality and soft tissue contrast achieved under optimal conditions.
Boonen, Bert; Schotanus, Martijn G M; Kerens, Bart; Hulsmans, Frans-Jan; Tuinebreijer, Wim E; Kort, Nanne P
2017-09-01
To assess whether there is a significant difference between the alignment of the individual femoral and tibial components (in the frontal, sagittal and horizontal planes) as calculated pre-operatively (digital plan) and the alignment actually achieved in vivo with the use of patient-specific positioning guides (PSPGs) for TKA. It was hypothesised that there would be no difference between the post-operative implant position and the pre-operative digital plan. Twenty-six patients were included in this non-inferiority trial. Software permitted matching of the pre-operative MRI scan (and therefore the calculated prosthesis position) to a pre-operative CT scan and then to a post-operative full-leg CT scan to determine deviations from the pre-operative planning in all three anatomical planes. For the femoral component, mean absolute deviations from planning were 1.8° (SD 1.3), 2.5° (SD 1.6) and 1.6° (SD 1.4) in the frontal, sagittal and transverse planes, respectively. For the tibial component, mean absolute deviations from planning were 1.7° (SD 1.2), 1.7° (SD 1.5) and 3.2° (SD 3.6) in the frontal, sagittal and transverse planes, respectively. The mean absolute deviation from the planned mechanical axis was 1.9°. The a priori specified null hypothesis for equivalence testing (that the difference from planning is >3° or <-3°) was rejected for all comparisons except the tibial transverse plane. PSPG was able to adequately reproduce the pre-operative plan in all planes except tibial rotation in the transverse plane. Possible explanations for outliers are discussed, highlighting the importance of adequately training surgeons before they start using PSPG in their day-to-day practice. Prospective cohort study, Level II.
Martinon, Alice; Cronin, Ultan P; Wilkinson, Martin G
2012-01-01
In this article, four types of standards were assessed in a SYBR Green-based real-time PCR procedure for the quantification of Staphylococcus aureus (S. aureus) in DNA samples. The standards were purified S. aureus genomic DNA (type A), circular plasmid DNA containing a thermonuclease (nuc) gene fragment (type B), and DNA extracted from defined populations of S. aureus cells generated by Fluorescence Activated Cell Sorting (FACS) technology with (type C) or without (type D) purification of DNA by boiling. The optimal efficiency of 2.016 was obtained with Roche LightCycler® 4.1 software for type C standards, whereas the lowest efficiency (1.682) corresponded to type D standards. Type C standards appeared to be more suitable for quantitative real-time PCR because defined populations were used for the construction of standard curves. Overall, the Fieller confidence interval algorithm may be improved for replicates having a low standard deviation in cycle threshold values, such as found for type B and C standards. The stabilities of diluted PCR standards stored at -20°C were compared after 0, 7, 14 and 30 days and were lower for type A and C standards than for type B standards. However, FACS-generated standards may be useful for bacterial quantification in real-time PCR assays once optimal storage and temperature conditions are defined.
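Amplification efficiencies such as 2.016 are conventionally derived from the slope of the standard curve (Ct versus log10 of the template amount); a minimal sketch with a made-up, perfectly doubling dilution series:

```python
def pcr_efficiency(ct_values, log10_copies):
    # Efficiency from a qPCR standard curve: least-squares slope of
    # Ct vs log10(copies), then efficiency = 10 ** (-1/slope).
    # Perfect doubling per cycle gives 2.0 (slope of about -3.32).
    n = len(ct_values)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
             / sum((x - mx) ** 2 for x in log10_copies))
    return 10 ** (-1.0 / slope)

# Hypothetical 10-fold dilution series with perfect doubling per cycle
logs = [2, 3, 4, 5]
cts = [30 - 3.3219280948873623 * x for x in logs]
print(round(pcr_efficiency(cts, logs), 3))   # 2.0
```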
7 CFR 400.174 - Notification of deviation from financial standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 6 2010-01-01 2010-01-01 false Notification of deviation from financial standards... Agreement-Standards for Approval; Regulations for the 1997 and Subsequent Reinsurance Years § 400.174 Notification of deviation from financial standards. An insurer must immediately advise FCIC if it deviates from...
Fault Identification Based on Nlpca in Complex Electrical Engineering
NASA Astrophysics Data System (ADS)
Zhang, Yagang; Wang, Zengping; Zhang, Jinfang
2012-07-01
Faults are inevitable in any complex engineered system. The electric power system is an essentially nonlinear system and one of the most complex artificial systems in the world. In our research, based on real-time measurements from phasor measurement units and under the influence of white Gaussian noise (standard deviation 0.01, zero mean), we used nonlinear principal component analysis (NLPCA) to address the fault identification problem in complex electrical engineering. The simulation results show that a fault in complex electrical engineering usually corresponds to the variable with the maximum absolute coefficient in the first principal component. This research has significant theoretical value and practical engineering significance.
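Since the NLPCA details are not given here, a linear PCA stand-in illustrates the identification rule (pick the variable with the maximum absolute coefficient in the first principal component); the data, fault location and magnitudes below are all synthetic:

```python
import random

random.seed(42)

# Simulated measurements: 4 monitored variables, 200 samples. Variable 2
# carries a fault (a step change); all variables carry white Gaussian
# noise with standard deviation 0.01 and zero mean, as in the abstract.
n, p, faulted = 200, 4, 2
data = []
for i in range(n):
    row = [random.gauss(0.0, 0.01) for _ in range(p)]
    if i >= n // 2:
        row[faulted] += 0.5        # fault signature
    data.append(row)

# Centre the columns and form the covariance matrix
means = [sum(r[j] for r in data) / n for j in range(p)]
X = [[r[j] - means[j] for j in range(p)] for r in data]
cov = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
        for b in range(p)] for a in range(p)]

# First principal component by power iteration
v = [1.0] * p
for _ in range(200):
    w = [sum(cov[a][b] * v[b] for b in range(p)) for a in range(p)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

suspect = max(range(p), key=lambda j: abs(v[j]))
print("variable with max |PC1 coefficient|:", suspect)
```

The faulted variable dominates the covariance, so the first principal component aligns with it and the maximum-absolute-coefficient rule recovers its index.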
Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks
2016-04-01
Allan deviation. Allan deviation will be represented by σ and standard deviation by δ. In practice, when the Allan deviation of a... the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by... measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard
Big data driven cycle time parallel prediction for production planning in wafer manufacturing
NASA Astrophysics Data System (ADS)
Wang, Junliang; Yang, Jungang; Zhang, Jie; Wang, Xiaoxi; Zhang, Wenjun Chris
2018-07-01
Cycle time forecasting (CTF) is one of the most crucial issues for production planning to keep high delivery reliability in semiconductor wafer fabrication systems (SWFS). This paper proposes a novel data-intensive cycle time (CT) prediction system with parallel computing to rapidly forecast the CT of wafer lots from large datasets. First, a density peak based radial basis function network (DP-RBFN) is designed to forecast the CT from the diverse and agglomerative CT data. Second, a network learning method based on a clustering technique is proposed to determine the density peaks. Third, a parallel computing approach for network training is proposed in order to speed up the training process with large-scale CT data. Finally, an experiment on an SWFS is presented, which demonstrates that the proposed CTF system can not only speed up the training process of the model but also outperform the radial basis function network, the back-propagation network and multivariate-regression-based CTF methods in terms of the mean absolute deviation and standard deviation.
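The two evaluation criteria can be sketched as follows, reading MAD as the mean absolute prediction error (an assumption; the abstract does not spell the definition out), on hypothetical cycle times:

```python
from statistics import mean, stdev

def forecast_scores(actual, predicted):
    # Mean absolute deviation (here: mean absolute prediction error) and
    # standard deviation of the errors -- the two criteria used above to
    # compare cycle-time forecasting methods.
    errors = [a - p for a, p in zip(actual, predicted)]
    return mean(abs(e) for e in errors), stdev(errors)

# Hypothetical wafer-lot cycle times (hours): actual vs predicted
mad, sd = forecast_scores([10, 12, 11, 13], [9, 12, 13, 13])
print(mad, round(sd, 3))
```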
Detection of Epileptic Seizure Event and Onset Using EEG
Ahammad, Nabeel; Fathima, Thasneem; Joseph, Paul
2014-01-01
This study proposes a method of automatic detection of epileptic seizure event and onset using wavelet based features and certain statistical features without wavelet decomposition. Normal and epileptic EEG signals were classified using linear classifier. For seizure event detection, Bonn University EEG database has been used. Three types of EEG signals (EEG signal recorded from healthy volunteer with eye open, epilepsy patients in the epileptogenic zone during a seizure-free interval, and epilepsy patients during epileptic seizures) were classified. Important features such as energy, entropy, standard deviation, maximum, minimum, and mean at different subbands were computed and classification was done using linear classifier. The performance of classifier was determined in terms of specificity, sensitivity, and accuracy. The overall accuracy was 84.2%. In the case of seizure onset detection, the database used is CHB-MIT scalp EEG database. Along with wavelet based features, interquartile range (IQR) and mean absolute deviation (MAD) without wavelet decomposition were extracted. Latency was used to study the performance of seizure onset detection. Classifier gave a sensitivity of 98.5% with an average latency of 1.76 seconds. PMID:24616892
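The two non-wavelet features named above can be computed as in this sketch (quartiles via the median-of-halves convention, one of several in use):

```python
from statistics import median

def iqr_mad_features(window):
    # Interquartile range and mean absolute deviation of one EEG window --
    # the two features extracted without wavelet decomposition.
    xs = sorted(window)
    n = len(xs)
    q1 = median(xs[: n // 2])          # lower half (median excluded if n odd)
    q3 = median(xs[(n + 1) // 2 :])    # upper half
    m = sum(xs) / n
    mad = sum(abs(x - m) for x in xs) / n
    return q3 - q1, mad

iqr, mad = iqr_mad_features([1, 2, 3, 4, 5, 6, 7, 8])
print(iqr, mad)   # 4.0 2.0
```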
Point-based and model-based geolocation analysis of airborne laser scanning data
NASA Astrophysics Data System (ADS)
Sefercik, Umut Gunes; Buyuksalih, Gurcan; Jacobsen, Karsten; Alkan, Mehmet
2017-01-01
Airborne laser scanning (ALS) is one of the most effective remote sensing technologies, providing precise three-dimensional (3-D) dense point clouds. A large ALS digital surface model (DSM) covering the whole Istanbul province was analyzed by comprehensive point-based and model-based statistical approaches. The point-based analysis was performed using checkpoints on flat areas. The model-based approaches were implemented in two steps: strip-to-strip comparison of overlapping ALS DSMs individually in three subareas, and comparison of the merged ALS DSMs with terrestrial laser scanning (TLS) DSMs in four other subareas. In the model-based approach, the standard deviation of height and the normalized median absolute deviation were used as accuracy indicators, combined with their dependency on terrain inclination. The results demonstrate that terrain roughness has a strong impact on the vertical accuracy of ALS DSMs. The relative horizontal shifts between overlapping strips, which were determined and partially reduced by merging, and the discrepancies found by comparing the ALS and TLS data, were found not to be negligible. The analysis of the ALS DSM in relation to the TLS DSM allowed us to determine the characteristics of the DSM in detail.
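The normalized median absolute deviation used here as an accuracy indicator has the standard definition sketched below; the height differences are invented to show its robustness against a gross outlier:

```python
from statistics import median, stdev

def nmad(dh):
    # Normalized median absolute deviation of DSM height differences:
    # NMAD = 1.4826 * median(|dh_i - median(dh)|). The factor 1.4826
    # makes NMAD comparable to the standard deviation for normally
    # distributed errors while staying robust against outliers.
    med = median(dh)
    return 1.4826 * median(abs(e - med) for e in dh)

# Height differences (m) with one gross outlier: the standard deviation
# explodes while NMAD barely moves.
errors = [-0.2, -0.1, 0.0, 0.1, 50.0]
print(round(nmad(errors), 5), round(stdev(errors), 2))
```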
Liu, Jin-Ya; Chen, Li-Da; Cai, Hua-Song; Liang, Jin-Yu; Xu, Ming; Huang, Yang; Li, Wei; Feng, Shi-Ting; Xie, Xiao-Yan; Lu, Ming-De; Wang, Wei
2016-01-01
AIM: To present our initial experience regarding the feasibility of ultrasound virtual endoscopy (USVE) and its measurement reliability for polyp detection in an in vitro study using pig intestine specimens. METHODS: Six porcine intestine specimens containing 30 synthetic polyps underwent USVE, computed tomography colonography (CTC) and optical colonoscopy (OC) for polyp detection. The polyp measurement, defined as the maximum polyp diameter on two-dimensional (2D) multiplanar reformatted (MPR) planes, was obtained by USVE, and the absolute measurement error was analyzed using the direct measurement as the reference standard. RESULTS: USVE detected 29 (96.7%) of 30 polyps, with one 7-mm polyp missed. There was one false-positive finding. Twenty-six (89.7%) of the 29 reconstructed images were clearly depicted, while 29 (96.7%) of 30 polyps were displayed on CTC, with one false-negative finding. In OC, all the polyps were detected. The intraclass correlation coefficient was 0.876 (95%CI: 0.745-0.940) for measurements obtained with USVE. The pooled absolute measurement errors ± standard deviations for depicted polyps with actual sizes ≤ 5 mm, 6-9 mm, and ≥ 10 mm were 1.9 ± 0.8 mm, 0.9 ± 1.2 mm, and 1.0 ± 1.4 mm, respectively. CONCLUSION: USVE is reliable for polyp detection and measurement in this in vitro study. PMID:27022217
Chaste, Pauline; Klei, Lambertus; Sanders, Stephan J; Murtha, Michael T; Hus, Vanessa; Lowe, Jennifer K; Willsey, A Jeremy; Moreno-De-Luca, Daniel; Yu, Timothy W; Fombonne, Eric; Geschwind, Daniel; Grice, Dorothy E; Ledbetter, David H; Lord, Catherine; Mane, Shrikant M; Lese Martin, Christa; Martin, Donna M; Morrow, Eric M; Walsh, Christopher A; Sutcliffe, James S; State, Matthew W; Devlin, Bernie; Cook, Edwin H; Kim, Soo-Jeong
2013-10-15
Brain development follows a different trajectory in children with autism spectrum disorders (ASD) than in typically developing children. A proxy for neurodevelopment could be head circumference (HC), but studies assessing HC and its clinical correlates in ASD have been inconsistent. This study investigates HC and clinical correlates in the Simons Simplex Collection cohort. We used a mixed linear model to estimate effects of covariates and the deviation from the expected HC given parental HC (genetic deviation). After excluding individuals with incomplete data, 7225 individuals in 1891 families remained for analysis. We examined the relationship between HC/genetic deviation of HC and clinical parameters. Gender, age, height, weight, genetic ancestry, and ASD status were significant predictors of HC (estimate of the ASD effect = .2 cm). HC was approximately normally distributed in probands and unaffected relatives, with only a few outliers. Genetic deviation of HC was also normally distributed, consistent with a random sampling of parental genes. Whereas larger HC than expected was associated with ASD symptom severity and regression, IQ decreased with the absolute value of the genetic deviation of HC. Measured against expected values derived from covariates of ASD subjects, statistical outliers for HC were uncommon. HC is a strongly heritable trait, and population norms for HC would be far more accurate if covariates including genetic ancestry, height, and age were taken into account. The association of diminishing IQ with absolute deviation from predicted HC values suggests HC could reflect subtle underlying brain development and warrants further investigation. © 2013 Society of Biological Psychiatry.
Chaste, Pauline; Klei, Lambertus; Sanders, Stephan J.; Murtha, Michael T.; Hus, Vanessa; Lowe, Jennifer K.; Willsey, A. Jeremy; Moreno-De-Luca, Daniel; Yu, Timothy W.; Fombonne, Eric; Geschwind, Daniel; Grice, Dorothy E.; Ledbetter, David H.; Lord, Catherine; Mane, Shrikant M.; Martin, Christa Lese; Martin, Donna M.; Morrow, Eric M.; Walsh, Christopher A.; Sutcliffe, James S.; State, Matthew W.; Devlin, Bernie; Cook, Edwin H.; Kim, Soo-Jeong
2013-01-01
BACKGROUND Brain development follows a different trajectory in children with Autism Spectrum Disorders (ASD) than in typically developing children. A proxy for neurodevelopment could be head circumference (HC), but studies assessing HC and its clinical correlates in ASD have been inconsistent. This study investigates HC and clinical correlates in the Simons Simplex Collection cohort. METHODS We used a mixed linear model to estimate effects of covariates and the deviation from the expected HC given parental HC (genetic deviation). After excluding individuals with incomplete data, 7225 individuals in 1891 families remained for analysis. We examined the relationship between HC/genetic deviation of HC and clinical parameters. RESULTS Gender, age, height, weight, genetic ancestry and ASD status were significant predictors of HC (estimate of the ASD effect=0.2cm). HC was approximately normally distributed in probands and unaffected relatives, with only a few outliers. Genetic deviation of HC was also normally distributed, consistent with a random sampling of parental genes. Whereas larger HC than expected was associated with ASD symptom severity and regression, IQ decreased with the absolute value of the genetic deviation of HC. CONCLUSIONS Measured against expected values derived from covariates of ASD subjects, statistical outliers for HC were uncommon. HC is a strongly heritable trait and population norms for HC would be far more accurate if covariates including genetic ancestry, height and age were taken into account. The association of diminishing IQ with absolute deviation from predicted HC values suggests HC could reflect subtle underlying brain development and warrants further investigation. PMID:23746936
Alghanim, Hussain; Antunes, Joana; Silva, Deborah Soares Bispo Santos; Alho, Clarice Sampaio; Balamurugan, Kuppareddi; McCord, Bruce
2017-11-01
Recent developments in the analysis of epigenetic DNA methylation patterns have demonstrated that certain genetic loci show a linear correlation with chronological age. The goal of this study is to identify a new set of epigenetic methylation markers for the forensic estimation of human age. A total of 27 CpG sites at three genetic loci, SCGN, DLX5 and KLF14, were examined to evaluate the correlation of their methylation status with age. These sites were evaluated using 72 blood samples and 91 saliva samples collected from volunteers with ages ranging from 5 to 73 years. DNA was bisulfite modified, followed by PCR amplification and pyrosequencing to determine the level of DNA methylation at each CpG site. In this study, certain CpG sites in the SCGN and KLF14 loci showed methylation levels that were correlated with chronological age; however, the tested CpG sites in DLX5 did not show a correlation with age. Using a 52-saliva-sample training set, two age-predictor models were developed by means of multivariate linear regression analysis for age prediction. The two models performed similarly, with a single-locus model explaining 85% of the age variance at a mean absolute deviation of 5.8 years and a dual-locus model explaining 84% of the age variance with a mean absolute deviation of 6.2 years. In the validation set, the mean absolute deviation was measured to be 8.0 years and 7.1 years for the single- and dual-locus models, respectively. Another age-predictor model was developed using a 40-blood-sample training set that accounted for 71% of the age variance. This model gave a mean absolute deviation of 6.6 years for the training set and 10.3 years for the validation set. The results indicate that specific CpGs in SCGN and KLF14 can be used as potential epigenetic markers to estimate age using saliva and blood specimens.
These epigenetic markers could provide important information in cases where the determination of a suspect's age is critical in developing investigative leads. Copyright © 2017. Published by Elsevier B.V.
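The accuracy metric used throughout the study above, the mean absolute deviation between predicted and chronological ages, is straightforward to compute. A minimal sketch (the ages below are invented for illustration, not data from the paper):

```python
def mean_absolute_deviation(predicted, actual):
    """Mean absolute deviation (MAD) between predictions and true values."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Hypothetical example: predicted vs. chronological ages (years)
predicted_ages = [24.1, 37.5, 51.0, 18.9, 62.3]
actual_ages    = [22.0, 41.0, 47.0, 20.0, 65.0]
mad_years = mean_absolute_deviation(predicted_ages, actual_ages)  # ≈ 2.68 years
```

A larger MAD on the validation set than on the training set, as reported for both models, is the usual signature of mild overfitting.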
1 CFR 21.14 - Deviations from standard organization of the Code of Federal Regulations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 1 General Provisions 1 2010-01-01 2010-01-01 false Deviations from standard organization of the... CODIFICATION General Numbering § 21.14 Deviations from standard organization of the Code of Federal Regulations. (a) Any deviation from standard Code of Federal Regulations designations must be approved in advance...
A critical assessment of two types of personal UV dosimeters.
Seckmeyer, Gunther; Klingebiel, Marcus; Riechelmann, Stefan; Lohse, Insa; McKenzie, Richard L; Liley, J Ben; Allen, Martin W; Siani, Anna-Maria; Casale, Giuseppe R
2012-01-01
Doses of erythemally weighted irradiances derived from polysulphone (PS) and electronic ultraviolet (EUV) dosimeters have been compared with measurements obtained using a reference spectroradiometer. PS dosimeters showed mean absolute deviations of 26% with a maximum deviation of 44%, while the calibrated EUV dosimeters showed mean absolute deviations of 15% (maximum 33%) around noon during several test days in the northern hemisphere autumn. In the case of EUV dosimeters, measurements with various cut-off filters showed that part of the deviation from the CIE erythema action spectrum was due to a small but significant sensitivity to visible radiation that varies between devices and which may be avoided by careful preselection. Usually the method of calibrating UV sensors by direct comparison to a reference instrument leads to reliable results. However, in some circumstances the quality of measurements made with simple sensors may be over-estimated. In the extreme case, a simple pyranometer can be used as a UV instrument, providing acceptable results for cloudless skies but very poor results under cloudy conditions. It is concluded that while UV dosimeters are useful for their design purpose, namely to estimate personal UV exposures, they should not be regarded as an inexpensive replacement for meteorological grade instruments. © 2011 Wiley Periodicals, Inc. Photochemistry and Photobiology © 2011 The American Society of Photobiology.
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1982-01-01
The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.
Upgraded FAA Airfield Capacity Model. Volume 1. Supplemental User’s Guide
1981-02-01
SIGMAR (F4.0) cc 1-4 - standard deviation, in seconds, of arrival runway occupancy time (R.O.T.). SIGMAA (F4.0) cc 5-8 - standard deviation, in seconds... SIGMAC - The standard deviation of the time from departure clearance to start of roll. SIGMAR - The standard deviation of the arrival runway
Mazumder, Avik; Gupta, Hemendra K; Garg, Prabhat; Jain, Rajeev; Dubey, Devendra K
2009-07-03
This paper details an on-flow liquid chromatography-ultraviolet-nuclear magnetic resonance (LC-UV-NMR) method for the retrospective detection and identification of alkyl alkylphosphonic acids (AAPAs) and alkylphosphonic acids (APAs), the markers of the toxic nerve agents, for verification of the Chemical Weapons Convention (CWC). Initially, the LC-UV-NMR parameters were optimized for benzyl derivatives of the APAs and AAPAs. The optimized parameters include a C(18) stationary phase, a methanol:water 78:22 (v/v) mobile phase, UV detection at 268 nm and (1)H NMR acquisition conditions. The protocol described herein allowed the detection of analytes through acquisition of high-quality NMR spectra from aqueous solutions of the APAs and AAPAs containing high concentrations of interfering background chemicals, which were removed by the preceding sample preparation. The reported standard deviation for the quantification relates to the UV detector, which showed relative standard deviations (RSDs) for quantification within ±1.1%, while the lower limit of detection was up to 16 µg (absolute) for the NMR detector. Finally, the developed LC-UV-NMR method was applied to identify the APAs and AAPAs in real water samples, following solid-phase extraction and derivatization. The method is fast (total experiment time approximately 2 h), sensitive, rugged and efficient.
An image registration based ultrasound probe calibration
NASA Astrophysics Data System (ADS)
Li, Xin; Kumar, Dinesh; Sarkar, Saradwata; Narayanan, Ram
2012-02-01
Reconstructed 3D ultrasound of prostate gland finds application in several medical areas such as image guided biopsy, therapy planning and dose delivery. In our application, we use an end-fire probe rotated about its axis to acquire a sequence of rotational slices to reconstruct 3D TRUS (Transrectal Ultrasound) image. The image acquisition system consists of an ultrasound transducer situated on a cradle directly attached to a rotational sensor. However, due to system tolerances, axis of probe does not align exactly with the designed axis of rotation resulting in artifacts in the 3D reconstructed ultrasound volume. We present a rigid registration based automatic probe calibration approach. The method uses a sequence of phantom images, each pair acquired at angular separation of 180 degrees and registers corresponding image pairs to compute the deviation from designed axis. A modified shadow removal algorithm is applied for preprocessing. An attribute vector is constructed from image intensity and a speckle-insensitive information-theoretic feature. We compare registration between the presented method and expert-corrected images in 16 prostate phantom scans. Images were acquired at multiple resolutions, and different misalignment settings from two ultrasound machines. Screenshots from 3D reconstruction are shown before and after misalignment correction. Registration parameters from automatic and manual correction were found to be in good agreement. Average absolute differences of translation and rotation between automatic and manual methods were 0.27 mm and 0.65 degree, respectively. The registration parameters also showed lower variability for automatic registration (pooled standard deviation σtranslation = 0.50 mm, σrotation = 0.52 degree) compared to the manual approach (pooled standard deviation σtranslation = 0.62 mm, σrotation = 0.78 degree).
Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations
NASA Astrophysics Data System (ADS)
Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik
2009-04-01
Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as biodiesel jatropha methyl esters, at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. The experimental and estimated density values using the modified Rackett equation gave almost identical values, with average absolute percent deviations less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities. This equation performed equally well, with average absolute percent deviations within 0.05%. Two simple linear equations for densities of jatropha oil and its methyl esters are also proposed in this study.
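The goodness-of-fit statistic quoted above, average absolute percent deviation between measured and estimated densities, can be sketched as follows (the density values are illustrative, not the study's data):

```python
def avg_abs_percent_deviation(measured, estimated):
    """Average absolute percent deviation (%) between measured and estimated values."""
    return sum(abs((m - e) / m) for m, e in zip(measured, estimated)) / len(measured) * 100

# Hypothetical densities (g/cm^3) at increasing temperatures
measured  = [0.9180, 0.9121, 0.9063, 0.9004]
estimated = [0.9182, 0.9120, 0.9061, 0.9006]
aapd = avg_abs_percent_deviation(measured, estimated)  # ≈ 0.019%
```

Values of this statistic below 0.05%, as reported for both the Rackett and Janarthanan fits, mean the estimated curve tracks the measurements almost exactly.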
Belief Propagation Algorithm for Portfolio Optimization Problems
Shinzato, Takashi; Yasuda, Muneki
2015-01-01
The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models using replica analysis was pioneeringly estimated by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they have not yet developed an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm. PMID:26305462
Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach
NASA Astrophysics Data System (ADS)
Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar
2010-10-01
To reach an investment goal, one has to select a combination of securities from portfolios containing a large number of securities. The past records of each security alone do not guarantee the future return. As there are many uncertain factors which directly or indirectly influence the stock market, and there are also some newer stock markets which do not have enough historical data, experts' expectations and experience must be combined with the past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors on the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred as it uses the absolute deviation of the rate of return of a portfolio, instead of the variance, as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO) is used for solving the portfolio selection problem. ACO is a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems. Data from the BSE are used for illustration.
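The absolute-deviation risk measure underlying the λ-MSAD and Konno-Yamazaki models can be sketched for a fixed weight vector; minimizing it subject to a target mean return is what reduces to a linear program. (The return scenarios and weights below are invented for illustration.)

```python
def portfolio_mad(scenarios, weights):
    """Mean absolute deviation of portfolio returns over historical scenarios.

    scenarios: list of per-period return vectors, one entry per asset.
    weights:   portfolio weights (summing to 1).
    """
    port_returns = [sum(w * r for w, r in zip(weights, period)) for period in scenarios]
    mean_return = sum(port_returns) / len(port_returns)
    return sum(abs(r - mean_return) for r in port_returns) / len(port_returns)

# Hypothetical monthly returns for three assets over four periods
scenarios = [
    [0.02, 0.01, -0.01],
    [-0.01, 0.03, 0.02],
    [0.01, -0.02, 0.01],
    [0.00, 0.02, 0.00],
]
weights = [0.5, 0.3, 0.2]
risk = portfolio_mad(scenarios, weights)  # ≈ 0.003
```

Because the objective is a sum of absolute values of linear expressions in the weights, each |·| term can be replaced by an auxiliary variable with two linear constraints, which is why the model solves as an LPP rather than a quadratic program.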
A Visual Model for the Variance and Standard Deviation
ERIC Educational Resources Information Center
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
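The paper's visual idea maps directly onto the computation: each squared deviation is the area of a square, the variance is the area of the average square, and the standard deviation is that square's side length. A minimal numeric sketch:

```python
def variance_and_sd(data):
    """Population variance as the mean of the squared deviations,
    and standard deviation as the side length of the 'average square'."""
    mean = sum(data) / len(data)
    squares = [(x - mean) ** 2 for x in data]   # area of each square
    variance = sum(squares) / len(squares)      # area of the average square
    return variance, variance ** 0.5            # side length of that square

data = [2, 4, 4, 4, 5, 5, 7, 9]
var, sd = variance_and_sd(data)
print(var, sd)  # → 4.0 2.0
```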
An isotope-dilution standard GC/MS/MS method for steroid hormones in water
Foreman, William T.; Gray, James L.; ReVello, Rhiannon C.; Lindley, Chris E.; Losche, Scott A.
2013-01-01
An isotope-dilution quantification method was developed for 20 natural and synthetic steroid hormones and additional compounds in filtered and unfiltered water. Deuterium- or carbon-13-labeled isotope-dilution standards (IDSs) are added to the water sample, which is passed through an octadecylsilyl solid-phase extraction (SPE) disk. Following extract cleanup using Florisil SPE, method compounds are converted to trimethylsilyl derivatives and analyzed by gas chromatography with tandem mass spectrometry. Validation matrices included reagent water, wastewater-affected surface water, and primary (no biological treatment) and secondary wastewater effluent. Overall method recovery for all analytes in these matrices averaged 100%, with an overall relative standard deviation of 28%. Mean recoveries of the 20 individual analytes for spiked reagent-water samples prepared along with field samples analyzed in 2009–2010 ranged from 84 to 104%, with relative standard deviations of 6–36%. Detection levels estimated using ASTM International's D6091–07 procedure range from 0.4 to 4 ng/L for 17 analytes. Higher censoring levels of 100 ng/L for bisphenol A and 200 ng/L for cholesterol and 3-beta-coprostanol are used to prevent bias and false positives associated with the presence of these analytes in blanks. Absolute method recoveries of the IDSs provide sample-specific performance information and guide data reporting. Careful selection of labeled compounds for use as IDSs is important because both inexact IDS-analyte matches and deuterium label loss affect an IDS's ability to emulate analyte performance. Six IDS compounds initially tested and applied in this method exhibited deuterium loss and are not used in the final method.
Feller, David; Peterson, Kirk A
2013-08-28
The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies <0.5 E(h)) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
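The three error metrics compared above, root mean square deviation, mean absolute deviation, and mean signed deviation, can be computed side by side; a nonzero mean signed deviation is what reveals the kind of systematic bias reported for the F12b basis-set combination. (The energies below are illustrative, not the paper's values.)

```python
import math

def error_metrics(computed, reference):
    """RMS, mean absolute, and mean signed deviations of computed vs. reference values."""
    diffs = [c - r for c, r in zip(computed, reference)]
    n = len(diffs)
    rms = math.sqrt(sum(d * d for d in diffs) / n)
    mad = sum(abs(d) for d in diffs) / n
    msd = sum(diffs) / n  # sign is preserved, so systematic bias does not cancel
    return rms, mad, msd

# Hypothetical atomization energies (kcal/mol): method vs. benchmark
computed  = [219.2, 388.9, 256.7, 291.5]
reference = [219.0, 389.4, 256.2, 291.0]
rms, mad, msd = error_metrics(computed, reference)
```

When |msd| is an order of magnitude smaller than mad, errors are mostly random; when the two are comparable, the method is systematically over- or underestimating.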
Martin, Jeffrey D.
2002-01-01
Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability for pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
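One common way to pool relative standard deviations from duplicate field samples can be sketched as follows. This is a sketch under the assumption of duplicate pairs (each pair contributing one degree of freedom); the report itself may pool across larger replicate sets differently, and the concentrations below are invented:

```python
import math

def pooled_rsd_from_duplicates(pairs):
    """Pooled relative standard deviation (%) from duplicate measurements.

    For k duplicate pairs:  RSD_pooled = sqrt( sum_i (d_i / mean_i)^2 / (2k) ) * 100,
    where d_i is the difference and mean_i the mean of pair i.
    """
    k = len(pairs)
    total = 0.0
    for a, b in pairs:
        mean = (a + b) / 2.0
        total += ((a - b) / mean) ** 2
    return math.sqrt(total / (2.0 * k)) * 100.0

# Hypothetical replicate pesticide concentrations (µg/L)
replicates = [(0.011, 0.009), (0.105, 0.118), (0.97, 1.02), (5.3, 5.1)]
rsd = pooled_rsd_from_duplicates(replicates)  # ≈ 8.5%
```

Working with relative differences, as here, is what makes the pooled estimate robust to heteroscedasticity: the low-concentration pairs do not need the same absolute spread as the high-concentration pairs to be weighted fairly.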
Basic life support: evaluation of learning using simulation and immediate feedback devices.
Tobase, Lucia; Peres, Heloisa Helena Ciqueto; Tomazini, Edenir Aparecida Sartorelli; Teodoro, Simone Valentim; Ramos, Meire Bruna; Polastri, Thatiane Facholi
2017-10-30
To evaluate students' learning in an online course on basic life support with immediate feedback devices, during a simulation of care during cardiorespiratory arrest. A quasi-experimental study, using a before-and-after design. An online course on basic life support was developed and administered to participants as an educational intervention. Theoretical learning was evaluated by means of a pre- and post-test and, to verify the practice, simulation with immediate feedback devices was used. There were 62 participants, 87% female, 90% in the first and second year of college, with a mean age of 21.47 years (standard deviation 2.39). With a 95% confidence level, the mean score was 6.4 in the pre-test (standard deviation 1.61) and 9.3 in the post-test (standard deviation 0.82, p < 0.001); in practice, 9.1 (standard deviation 0.95), with performance equivalent to basic cardiopulmonary resuscitation according to the feedback device; 43.7 (standard deviation 26.86) mean duration of the compression cycle by second of 20.5 (standard deviation 9.47); number of compressions 167.2 (standard deviation 57.06); depth of compressions 48.1 millimeters (standard deviation 10.49); volume of ventilation 742.7 (standard deviation 301.12); flow fraction percentage 40.3 (standard deviation 10.03). The online course contributed to learning of basic life support. In view of the need for technological innovations in teaching and systematization of cardiopulmonary resuscitation, simulation and feedback devices are resources that favor learning and performance awareness in performing the maneuvers.
Joseph, Leena; Das, A P; Ravindra, Anuradha; Kulkarni, D B; Kulkarni, M S
2018-07-01
The 4πβ-γ coincidence method is a powerful and widely used method to determine the absolute activity concentration of radioactive solutions. A new automated liquid-scintillator-based coincidence system has been designed, developed, tested and established as an absolute standard for radioactivity measurements. The automation is achieved using a PLC (programmable logic controller) and SCADA (supervisory control and data acquisition). A radioactive solution of 60Co was standardized to compare the performance of the automated system with the proportional-counter-based absolute standard maintained in the laboratory. The activity concentrations determined using these two systems were in very good agreement; the new automated system can be used for absolute measurement of the activity concentration of radioactive solutions. Copyright © 2018. Published by Elsevier Ltd.
Electronic Absolute Cartesian Autocollimator
NASA Technical Reports Server (NTRS)
Leviton, Douglas B.
2006-01-01
An electronic absolute Cartesian autocollimator performs the same basic optical function as does a conventional all-optical or a conventional electronic autocollimator but differs in the nature of its optical target and the manner in which the position of the image of the target is measured. The term absolute in the name of this apparatus reflects the nature of the position measurement, which, unlike in a conventional electronic autocollimator, is based absolutely on the position of the image rather than on an assumed proportionality between the position and the levels of processed analog electronic signals. The term Cartesian in the name of this apparatus reflects the nature of its optical target. Figure 1 depicts the electronic functional blocks of an electronic absolute Cartesian autocollimator along with its basic optical layout, which is the same as that of a conventional autocollimator. Referring first to the optical layout and functions only, this or any autocollimator is used to measure the compound angular deviation of a flat datum mirror with respect to the optical axis of the autocollimator itself. The optical components include an illuminated target, a beam splitter, an objective or collimating lens, and a viewer or detector (described in more detail below) at a viewing plane. The target and the viewing planes are focal planes of the lens. Target light reflected by the datum mirror is imaged on the viewing plane at unit magnification by the collimating lens. If the normal to the datum mirror is parallel to the optical axis of the autocollimator, then the target image is centered on the viewing plane. Any angular deviation of the normal from the optical axis manifests itself as a lateral displacement of the target image from the center. The magnitude of the displacement is proportional to the focal length and to the magnitude (assumed to be small) of the angular deviation. 
The direction of the displacement is perpendicular to the axis about which the mirror is slightly tilted. Hence, one can determine the amount and direction of tilt from the coordinates of the target image on the viewing plane.
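The small-angle relationship described above can be sketched numerically. A mirror tilt of θ deviates the reflected beam by 2θ, so the image displacement is approximately the focal length times 2θ; inverting this recovers the tilt per axis. (The focal length and displacement values below are hypothetical.)

```python
import math

def tilt_from_displacement(dx_mm, dy_mm, focal_length_mm):
    """Recover mirror tilt (radians, per axis) from target-image displacement.

    Small-angle model: a mirror tilt of theta deviates the reflected beam
    by 2*theta, so displacement ≈ focal_length * 2 * theta.
    """
    return dx_mm / (2 * focal_length_mm), dy_mm / (2 * focal_length_mm)

# Hypothetical: 500 mm focal length, image displaced 0.1 mm along x
tx, ty = tilt_from_displacement(0.1, 0.0, 500.0)
arcsec_x = math.degrees(tx) * 3600  # ≈ 20.6 arcseconds of tilt about the y-axis
```

Measuring the (x, y) image coordinates directly, rather than inferring them from analog signal levels, is what makes the Cartesian measurement "absolute" in the sense the passage describes.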
Guang, Hui; Ji, Linhong; Shi, Yingying; Misgeld, Berno J. E.
2018-01-01
The robot-assisted therapy has been demonstrated to be effective in the improvements of limb function and even activities of daily living for patients after stroke. This paper presents an interactive upper-limb rehabilitation robot with a parallel mechanism and an isometric screen embedded in the platform to display trajectories. In the dynamic modeling for impedance control, the effects of friction and inertia are reduced by introducing the principle of virtual work and derivative of Jacobian matrix. To achieve the assist-as-needed impedance control for arbitrary trajectories, the strategy based on orthogonal deviations is proposed. Simulations and experiments were performed to validate the dynamic modeling and impedance control. Besides, to investigate the influence of the impedance in practice, a subject participated in experiments and performed two types of movements with the robot, that is, rectilinear and circular movements, under four conditions, that is, with/without resistance or impedance, respectively. The results showed that the impedance and resistance affected both mean absolute error and standard deviation of movements and also demonstrated the significant differences between movements with/without impedance and resistance (p < 0.001). Furthermore, the error patterns were discussed, which suggested that the impedance environment was capable of alleviating movement deviations by compensating the synergetic inadequacy between the shoulder and elbow joints. PMID:29850004
Resting-State Oscillatory Activity in Children Born Small for Gestational Age: An MEG Study
Boersma, Maria; de Bie, Henrica M. A.; Oostrom, Kim J.; van Dijk, Bob W.; Hillebrand, Arjan; van Wijk, Bernadette C. M.; Delemarre-van de Waal, Henriëtte A.; Stam, Cornelis J.
2013-01-01
Growth restriction in utero during a period that is critical for normal growth of the brain has previously been associated with deviations in cognitive abilities and brain anatomical and functional changes. We measured magnetoencephalography (MEG) in 4- to 7-year-old children to test if children born small for gestational age (SGA) show deviations in resting-state brain oscillatory activity. Children born SGA with postnatally spontaneous catch-up growth [SGA+; six boys, seven girls; mean age 6.3 years (SD = 0.9)] and children born appropriate for gestational age [AGA; seven boys, three girls; mean age 6.0 years (SD = 1.2)] participated in a resting-state MEG study. We calculated absolute and relative power spectra and used non-parametric statistics to test for group differences. SGA+ and AGA born children showed no significant differences in absolute and relative power except for reduced absolute gamma band power in SGA children. At the time of MEG investigation, SGA+ children showed significantly lower head circumference (HC) and a trend toward lower IQ; however, there was no association of HC or IQ with absolute or relative power. Except for reduced absolute gamma band power, our findings suggest normal brain activity patterns at school age in a group of children born SGA in which spontaneous catch-up growth of bodily length after birth occurred. Although previous findings suggest that being born SGA alters brain oscillatory activity early in neonatal life, we show that these neonatal alterations do not persist at early school age when spontaneous postnatal catch-up growth occurs after birth. PMID:24068993
NASA Astrophysics Data System (ADS)
Kärhä, Petri; Vaskuri, Anna; Mäntynen, Henrik; Mikkonen, Nikke; Ikonen, Erkki
2017-08-01
Spectral irradiance data are often used to calculate colorimetric properties, such as color coordinates and color temperatures of light sources by integration. The spectral data may contain unknown correlations that should be accounted for in the uncertainty estimation. We propose a new method for estimating uncertainties in such cases. The method goes through all possible scenarios of deviations using Monte Carlo analysis. Varying spectral error functions are produced by combining spectral base functions, and the distorted spectra are used to calculate the colorimetric quantities. Standard deviations of the colorimetric quantities at different scenarios give uncertainties assuming no correlations, uncertainties assuming full correlation, and uncertainties for an unfavorable case of unknown correlations, which turn out to be a significant source of uncertainty. With 1% standard uncertainty in spectral irradiance, the expanded uncertainty of the correlated color temperature of a source corresponding to the CIE Standard Illuminant A may reach as high as 37.2 K in unfavorable conditions, when calculations assuming full correlation give zero uncertainty, and calculations assuming no correlations yield the expanded uncertainties of 5.6 K and 12.1 K, with wavelength steps of 1 nm and 5 nm used in spectral integrations, respectively. We also show that there is an absolute limit of 60.2 K in the error of the correlated color temperature for Standard Illuminant A when assuming 1% standard uncertainty in the spectral irradiance. A comparison of our uncorrelated uncertainties with those obtained using analytical methods by other research groups shows good agreement. We re-estimated the uncertainties for the colorimetric properties of our 1 kW photometric standard lamps using the new method. The revised uncertainty of color temperature is a factor of 2.5 higher than the uncertainty assuming no correlations.
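The contrast between fully correlated and uncorrelated spectral errors can be illustrated with a toy Monte Carlo on an integrated quantity. The spectrum and the integral below are illustrative stand-ins, not the paper's colorimetric model; note also that a fully correlated multiplicative error dominates a scale-dependent integral like this one, whereas it cancels in scale-invariant quantities such as correlated color temperature, which is why full correlation gives zero CCT uncertainty in the paper.

```python
import random

random.seed(42)

wavelengths = list(range(400, 701, 5))                      # nm grid
spectrum = [1.0 + 0.001 * (w - 550) for w in wavelengths]   # toy spectral irradiance

def integral(spec):
    return sum(spec)  # stand-in for a colorimetric integral over wavelength

def mc_std(correlated, trials=2000, u=0.01):
    """Monte Carlo standard deviation of the integral under 1% spectral errors."""
    results = []
    for _ in range(trials):
        if correlated:
            e = random.gauss(0, u)                           # one shared error
            perturbed = [s * (1 + e) for s in spectrum]
        else:
            perturbed = [s * (1 + random.gauss(0, u)) for s in spectrum]
        results.append(integral(perturbed))
    mean = sum(results) / trials
    return (sum((r - mean) ** 2 for r in results) / trials) ** 0.5

std_corr = mc_std(correlated=True)
std_uncorr = mc_std(correlated=False)
# Independent errors partially average out across wavelengths,
# so std_uncorr is roughly std_corr / sqrt(number of points) here.
```

Sweeping over intermediate correlation structures, as the paper does with spectral base functions, fills in the range between these two extremes.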
López-Valcárcel, Beatriz G; González-Martel, Christian; Peiro, Salvador
2018-01-01
Objective Newcomb-Benford’s Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many different naturally occurring sources of data. Testing deviations from NBL is used in many datasets as a screening tool for identifying data trustworthiness problems. This study aims to compare public available waiting lists (WL) data from Finland and Spain for testing NBL as an instrument to flag up potential manipulation in WLs. Design Analysis of the frequency of Finnish and Spanish WLs first digits to determine if their distribution is similar to the pattern documented by NBL. Deviations from the expected first digit frequency were analysed using Pearson’s χ2, mean absolute deviation and Kuiper tests. Setting/participants Publicly available WL data from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards. Main outcome measures Adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL. Results WL data reported by the Finnish health system fits first digit NBL according to all statistical tests used (p=0.6519 in χ2 test). For Spanish data, this hypothesis was rejected in all tests (p<0.0001 in χ2 test). Conclusions Testing deviations from NBL distribution can be a useful tool to identify problems with WL data trustworthiness and signalling the need for further testing. PMID:29743333
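A first-digit Benford screen like the one described can be sketched with a Pearson chi-squared statistic against the NBL frequencies log10(1 + 1/d). The waiting-list counts below are invented, and the 15.51 threshold is the standard chi-squared critical value for 8 degrees of freedom at alpha = 0.05:

```python
import math
from collections import Counter

BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(n):
    n = abs(int(n))
    while n >= 10:
        n //= 10
    return n

def benford_chi2(values):
    """Pearson chi-squared statistic of observed first digits against Benford's law."""
    counts = Counter(first_digit(v) for v in values)
    n = len(values)
    return sum((counts.get(d, 0) - n * p) ** 2 / (n * p) for d, p in BENFORD.items())

# Hypothetical waiting-list counts
sample = [123, 187, 210, 345, 1021, 1456, 2210, 98, 110, 134,
          156, 178, 2890, 312, 467, 523, 678, 745, 812, 934]
stat = benford_chi2(sample)
suspicious = stat > 15.51  # df = 8, alpha = 0.05
```

As the paper stresses, a rejection flags data for further scrutiny rather than proving manipulation; small samples in particular can deviate by chance.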
Vilaró, Francisca; Canela-Xandri, Anna; Canela, Ramon
2006-09-01
A specific, sensitive, precise, and accurate method for the determination of abscisic acid (ABA) in grapevine leaf tissues is described. The method employs high-performance liquid chromatography and electrospray ionization-mass spectrometry (LC-ESI-MS) in selected ion monitoring mode (SIM) to analyze ABA using a stable isotope-labeled ABA as an internal standard. Absolute recoveries ranged from 72% to 79% using methanol/water pH 5.5 (50:50 v/v) as an extraction solvent. The best efficiency was obtained when the chromatographic separation was carried out by using a porous graphitic carbon (PGC) column. The statistical evaluation of the method was satisfactory in the work range. A relative standard deviation (RSD) of < 5.5% and < 6.0% was obtained for intra-batch and inter-batch comparisons, respectively. As for accuracy, the relative error (%Er) was between -2.7 and 4.3%, and the relative recovery ranged from 95% to 107%.
Computer program documentation: ISOCLS iterative self-organizing clustering program, program C094
NASA Technical Reports Server (NTRS)
Minter, R. T. (Principal Investigator)
1972-01-01
The author has identified the following significant results. This program implements an algorithm which, ideally, sorts a given set of multivariate data points into similar groups or clusters. The program is intended for use in the evaluation of multispectral scanner data; however, the algorithm could be used for other data types as well. The user may specify a set of initial estimated cluster means to begin the procedure, or he may begin with the assumption that all the data belongs to one cluster. The procedure is initialized by assigning each data point to the nearest (in absolute distance) cluster mean. If no initial cluster means were input, all of the data is assigned to cluster 1. The means and standard deviations are calculated for each cluster.
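The assignment/update cycle described above can be sketched as follows (an illustrative one-pass sketch of the general iterative scheme for one-dimensional data, not the C094 program itself):

```python
import statistics

def cluster_step(points, means):
    """One assignment/update pass: each point joins the cluster whose mean is
    nearest in absolute distance, then means and standard deviations are
    recomputed for every cluster."""
    clusters = [[] for _ in means]
    for p in points:
        nearest = min(range(len(means)), key=lambda i: abs(p - means[i]))
        clusters[nearest].append(p)
    new_means = [statistics.mean(c) if c else m for c, m in zip(clusters, means)]
    stdevs = [statistics.pstdev(c) if len(c) > 1 else 0.0 for c in clusters]
    return clusters, new_means, stdevs
```

The full ISOCLS procedure repeats such passes (with splitting and merging rules) until the cluster assignments stabilize.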
Improved Strategies and Optimization of Calibration Models for Real-time PCR Absolute Quantification
Real-time PCR absolute quantification applications rely on the use of standard curves to make estimates of DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, t...
Eberhart, Leopold; Geldner, Götz; Huljic, Susanne; Marggraf, Kerstin; Keller, Thomas; Koch, Tilo; Kranke, Peter
2018-06-01
To compare the effectiveness of 20:1 cafedrine/theodrenaline approved for use in Germany to ephedrine in the restoration of arterial blood pressure and on post-operative outcomes in patients with intra-operative arterial hypotension of any origin under standard clinical practice conditions. 'HYPOTENS' is a national, multi-center, prospective, open-label, two-armed, non-interventional study. Effectiveness and post-operative outcome following cafedrine/theodrenaline or ephedrine therapy will be evaluated in two cohorts of hypotensive patients. Cohort A includes patients aged ≥50 years with ASA-classification 2-4 undergoing non-emergency surgical procedures under general anesthesia. Cohort B comprises patients undergoing Cesarean section under spinal anesthesia. Participating surgical departments will be assigned to a treatment arm by routinely used anti-hypotensive agent. To minimize bias, matched department pairs will be compared in a stratified selection process. The composite primary end-point is the lower absolute deviation from individually determined target blood pressure (IDTBP) and the incidence of heart rate ≥100 beats/min in the first 15 min. Secondary end-points include incidence and degree of early post-operative delirium (cohort A), severity of fetal acidosis in the newborn (cohort B), upper absolute deviation from IDTBP, percentage increase in systolic blood pressure, and time to IDTBP. This open-label, non-interventional study design mirrors daily practice in the treatment of patients with intra-operative hypotension and ensures full treatment decision autonomy with respect to each patient's individual condition. Selection of participating sites by a randomization process addresses bias without interfering with the non-interventional nature of the study. First results are expected in 2018. ClinicalTrials.gov identifier: NCT02893241; DRKS identifier: DRKS00010740.
Multiple regression technique for Pth degree polynomials with and without linear cross products
NASA Technical Reports Server (NTRS)
Davis, J. W.
1973-01-01
A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products, which evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique, which show the output formats and typical plots comparing computer results to each set of input data.
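The fitting step can be sketched with an ordinary least-squares solve (a minimal illustration assuming NumPy; the original programs are not reproduced, and the function name is my own):

```python
import numpy as np

def poly_surface_fit(X, y, degree=2, cross_products=False):
    """Least-squares fit of a degree-P polynomial in several variables,
    optionally adding pairwise linear cross-product terms.  Returns the
    coefficients, the standard deviation of residuals, and the maximum
    absolute percent error, mirroring the statistics named in the abstract."""
    n, m = X.shape
    cols = [np.ones(n)]                         # constant term
    for j in range(m):
        for p in range(1, degree + 1):
            cols.append(X[:, j] ** p)           # pure powers of each variable
    if cross_products:
        for j in range(m):
            for k in range(j + 1, m):
                cols.append(X[:, j] * X[:, k])  # linear cross products
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    max_abs_pct_err = 100.0 * np.max(np.abs(resid / y))
    return coef, resid.std(ddof=1), max_abs_pct_err
```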
Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun
2015-09-01
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), the smoothly clipped absolute deviation (SCAD), and the minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but also is more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
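The three penalty functions named above have standard closed forms, sketched here for a single scalar coefficient (definitions only; the EM and coordinate descent machinery is omitted):

```python
def lasso_penalty(t, lam):
    """LASSO: lam * |t|."""
    return lam * abs(t)

def scad_penalty(t, lam, a=3.7):
    """Smoothly clipped absolute deviation (a = 3.7 is the customary choice)."""
    t = abs(t)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return -(t * t - 2.0 * a * lam * t + lam * lam) / (2.0 * (a - 1.0))
    return (a + 1.0) * lam * lam / 2.0   # constant beyond a*lam

def mcp_penalty(t, lam, gamma=3.0):
    """Minimax concave penalty with concavity parameter gamma."""
    t = abs(t)
    if t <= gamma * lam:
        return lam * t - t * t / (2.0 * gamma)
    return gamma * lam * lam / 2.0       # constant beyond gamma*lam
```

Unlike LASSO, both SCAD and MCP flatten out for large coefficients, which is what reduces the bias on strong effects while still shrinking small ones to zero.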
A suggestion for computing objective function in model calibration
Wu, Yiping; Liu, Shuguang
2014-01-01
A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of square errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, the ‘square error’ calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies—a hydrological model calibration and a biogeochemical model calibration—to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that the ‘absolute error’ functions (SAR and SARD) are superior to the ‘square error’ functions (SSR and SSRD) as objective functions for model calibration, and SAR behaved the best (with the least error and highest efficiency). This study suggests that SSR may be overused in real applications, and SAR may be a reasonable choice in common optimization implementations without emphasizing either high or low values (e.g., modeling for supporting resources management).
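The four candidate objective functions are straightforward to state (variable names are my own):

```python
def objective_functions(obs, sim):
    """The four candidate objective functions compared in the study,
    evaluated for paired observed and simulated series."""
    ssr = sum((o - s) ** 2 for o, s in zip(obs, sim))         # sum of squared errors
    sar = sum(abs(o - s) for o, s in zip(obs, sim))           # sum of absolute errors
    ssrd = sum(((o - s) / o) ** 2 for o, s in zip(obs, sim))  # sum of squared relative deviations
    sard = sum(abs((o - s) / o) for o, s in zip(obs, sim))    # sum of absolute relative deviations
    return ssr, sar, ssrd, sard
```

Squaring weights a single large error far more heavily than several small ones, which is why the squared forms are more sensitive to extreme values than the absolute forms.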
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thakkar, Ajit J., E-mail: ajit@unb.ca; Wu, Taozhe
2015-10-14
Static electronic dipole polarizabilities for 135 molecules are calculated using second-order Møller-Plesset perturbation theory and six density functionals recently recommended for polarizabilities. Comparison is made with the best gas-phase experimental data. The lowest mean absolute percent deviations from the best experimental values for all 135 molecules are 3.03% and 3.08% for the LC-τHCTH and M11 functionals, respectively. Excluding the eight extreme outliers for which the experimental values are almost certainly in error, the mean absolute percent deviation for the remaining 127 molecules drops to 2.42% and 2.48% for the LC-τHCTH and M11 functionals, respectively. Detailed comparison enables us to identify 32 molecules for which the discrepancy between the calculated and experimental values warrants further investigation.
A comparison of portfolio selection models via application on ISE 100 index data
NASA Astrophysics Data System (ADS)
Altun, Emrah; Tatlidil, Hüseyin
2013-10-01
The Markowitz model, a classical approach to the portfolio optimization problem, relies on two important assumptions: the expected returns are multivariate normally distributed and the investor is risk averse. But this model has not been extensively used in finance. Empirical results show that it is very hard to solve large-scale portfolio optimization problems with the Mean-Variance (M-V) model. An alternative, the Mean Absolute Deviation (MAD) model proposed by Konno and Yamazaki [7], has been used to remove most of the difficulties of the Markowitz Mean-Variance model. The MAD model does not need to assume that the rates of return are normally distributed, and it is based on linear programming. Another alternative portfolio model is the Mean-Lower Semi-Absolute Deviation (M-LSAD) model, proposed by Speranza [3]. We compare these models to determine which gives the most appropriate solution to investors.
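The MAD risk measure at the heart of the Konno-Yamazaki model can be evaluated as follows (this only computes the objective for fixed weights; the linear-programming formulation that minimizes it is omitted, and the function name is my own):

```python
def portfolio_mad(returns, weights):
    """Mean absolute deviation of a portfolio's period returns.

    `returns` is a list of periods, each a list of per-asset returns;
    `weights` are the portfolio weights for those assets."""
    series = [sum(w * r for w, r in zip(weights, period)) for period in returns]
    mean = sum(series) / len(series)
    return sum(abs(x - mean) for x in series) / len(series)
```

Because the absolute value can be linearized with auxiliary variables, minimizing this quantity over the weights is a linear program, unlike the quadratic program required by the Mean-Variance model.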
NASA Astrophysics Data System (ADS)
Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.
2018-05-01
A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.000 38 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, an absolute thickness in borosilicate glass is presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows for many new applications not possible with previous ultrasonic pulsed phase-locked loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.
SU-F-T-472: Validation of Absolute Dose Measurements for MR-IGRT With and Without Magnetic Field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, O; Li, H; Goddu, S
Purpose: To validate absolute dose measurements for a MR-IGRT system without presence of the magnetic field. Methods: The standard method (AAPM’s TG-51) of absolute dose measurement with ionization chambers was tested with and without the presence of the magnetic field for a clinical 0.32-T Co-60 MR-IGRT system. Two ionization chambers were used - the Standard Imaging (Madison, WI) A18 (0.123 cc) and the PTW (Freiburg, Germany). A previously reported Monte Carlo simulation suggested a difference on the order of 0.5% for dose measured with and without the presence of the magnetic field, but testing this was not possible until an engineering solution to allow the radiation system to be used without the nominal magnetic field was found. A previously identified effect of orientation in the magnetic field was also tested by placing the chamber either parallel or perpendicular to the field and irradiating from two opposing angles (90 and 270). Finally, the Imaging and Radiation Oncology Core provided OSLD detectors for five irradiations each with and without the field - with two heads at both 0 and 90 degrees, and one head at 90 degrees only as it doesn’t reach 0 (IEC convention). Results: For the TG-51 comparison, expected dose was obtained by decaying values measured at the time of source installation. The average measured difference was 0.4%±0.12% for A18 and 0.06%±0.15% for Farmer chamber. There was minimal (0.3%) orientation dependence without the magnetic field for the A18 chamber, while previous measurements with the magnetic field had a deviation of 3.2% with chamber perpendicular to magnetic field. Results reported by IROC for the OSLDs with and without the field had a maximum difference of 2%. Conclusion: Accurate absolute dosimetry was verified by measurement under the same conditions with and without the magnetic field for both ionization chambers and independently-verifiable OSLDs.
Oates, R P; Mcmanus, Michelle; Subbiah, Seenivasan; Klein, David M; Kobelski, Robert
2017-07-14
Internal standards are essential in electrospray ionization liquid chromatography-mass spectrometry (ESI-LC-MS) to correct for systematic error associated with ionization suppression and/or enhancement. A wide array of instrument setups and interfaces has created difficulty in comparing the quantitation of absolute analyte response across laboratories. This communication demonstrates the use of primary standards as operational qualification standards for LC-MS instruments and their comparison with commonly accepted internal standards. In monitoring the performance of internal standards for perfluorinated compounds, potassium hydrogen phthalate (KHP) presented lower inter-day variability in instrument response than a commonly accepted deuterated perfluorinated internal standard (d3-PFOS), with percent relative standard deviations less than or equal to 6%. The inter-day precision of KHP was greater than d3-PFOS over a 28-day monitoring of perfluorooctanesulfonic acid (PFOS), across concentrations ranging from 0 to 100μg/L. The primary standard trometamol (Trizma) performed as well as known internal standards simeton and tris (2-chloroisopropyl) phosphate (TCPP), with intra-day precision of Trizma response as low as 7% RSD on day 28. The inter-day precision of Trizma response was found to be greater than simeton and TCPP, across concentrations of neonicotinoids ranging from 1 to 100μg/L. This study explores the potential of primary standards to be incorporated into LC-MS/MS methodology to improve the quantitative accuracy in environmental contaminant analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
Ye, Hongping; Hill, John; Kauffman, John; Han, Xianlin
2010-05-01
The capability of iTRAQ (isotope tags for relative and absolute quantification) reagents coupled with matrix-assisted laser desorption/ionization tandem time-of-flight mass spectrometry (MALDI-TOF/TOF-MS) as a qualitative and quantitative technique for the analysis of complicated protein pharmaceutical mixtures was evaluated. Mixtures of Somavert and Miacalcin with a small amount of bovine serum albumin (BSA) as an impurity were analyzed. Both Somavert and Miacalcin were qualitatively identified, and BSA was detected at levels as low as 0.8mol%. Genotropin and Somavert were compared in a single experiment, and all of the distinct amino acid residues from the two proteins were readily identified. Four somatropin drug products (Genotropin, Norditropin, Jintropin, and Omnitrope) were compared using the iTRAQ/MALDI-MS method to determine the similarity between their primary structures and quantify the amount of protein in each product. All four product samples were well labeled and successfully compared when a filtration cleanup step preceded iTRAQ labeling. The quantitative accuracy of the iTRAQ method was evaluated. In all cases, the accuracy of experimentally determined protein ratios was higher than 90%, and the relative standard deviation (RSD) was less than 10%. The iTRAQ and global internal standard technology (GIST) methods were compared, and the iTRAQ method provided both higher sequence coverage and enhanced signal intensity. Published by Elsevier Inc.
Hintelmann, Holger; Lu, ShengYong
2003-06-01
Variations in Hg isotope ratios in cinnabar ores obtained from different countries were detected by high precision isotope ratio measurements using multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS). Values of delta198/202Hg varied from 0.0-1.3 percent per thousand relative to a NIST SRM 1641d Hg solution. The typical external uncertainty of the delta values was 0.06 to 0.26 percent per thousand. Hg was introduced into the plasma as elemental Hg after reduction by sodium borohydride. A significant fractionation of lead isotopes was observed during the simultaneous generation of lead hydride, preventing normalization of the Hg isotope ratios using the measured 208/206Pb ratio. Hg ratios were instead corrected employing the simultaneously measured 205/203Tl ratio. Using a 10 ng ml(-1) Hg solution and 10 min of sampling, introducing 60 ng of Hg, the internal precision of the isotope ratio measurements was as low as 14 ppm. Absolute Hg ratios deviated from the representative IUPAC values by approximately 0.2% per u. This observation is explained by the inadequacy of the exponential law to correct for mass bias in MC-ICP-MS measurements. In the absence of a precisely characterized Hg isotope ratio standard, we were not able to determine unambiguously the absolute Hg ratios of the ore samples, highlighting the urgent need for certified standard materials.
Santana, Juan A.; Krogel, Jaron T.; Kent, Paul R. C.; ...
2016-05-03
We have applied the diffusion quantum Monte Carlo (DMC) method to calculate the cohesive energy and the structural parameters of the binary oxides CaO, SrO, BaO, Sc2O3, Y2O3 and La2O3. The aim of our calculations is to systematically quantify the accuracy of the DMC method for studying this type of metal oxide. The DMC results were compared with local and semi-local Density Functional Theory (DFT) approximations as well as with experimental measurements. The DMC method yields cohesive energies for these oxides with a mean absolute deviation from experimental measurements of 0.18(2) eV, while with local and semi-local DFT approximations the deviation is 3.06 and 0.94 eV, respectively. For lattice constants, the mean absolute deviations in DMC, local and semi-local DFT approximations are 0.017(1), 0.07 and 0.05 Å, respectively. In conclusion, DMC is a highly accurate method, outperforming the local and semi-local DFT approximations in describing the cohesive energies and structural parameters of these binary oxides.
A determination of the absolute radiant energy of a Robertson-Berger meter sunburn unit
NASA Astrophysics Data System (ADS)
DeLuisi, John J.; Harris, Joyce M.
Data from a Robertson-Berger (RB) sunburn meter were compared with concurrent measurements obtained with an ultraviolet double monochromator (DM), and the absolute energy of one sunburn unit measured by the RB-meter was determined. It was found that at a solar zenith angle of 30° one sunburn unit (SU) is equivalent to 35 ± 4 mJ cm⁻², and at a solar zenith angle of 69°, one SU is equivalent to 20 ± 2 mJ cm⁻² (relative to a wavelength of 297 nm), where the rate of change is non-linear. The deviation is due to the different response functions of the RB-meter and the DM system used to simulate the response of human skin to the incident u.v. solar spectrum. The average growth rate of the deviation with increasing solar zenith angle was found to be 1.2% per degree between solar zenith angles 30 and 50° and 2.3% per degree between solar zenith angles 50 and 70°. The deviations of response with solar zenith angle were found to be consistent with reported RB-meter characteristics.
Pullman, Rebecca E; Roepke, Stephanie E; Duffy, Jeanne F
2012-06-01
To determine whether an accurate circadian phase assessment could be obtained from saliva samples collected by patients in their home. Twenty-four individuals with a complaint of sleep initiation or sleep maintenance difficulty were studied for two evenings. Each participant received instructions for collecting eight hourly saliva samples in dim light at home. On the following evening they spent 9 h in a laboratory room with controlled dim (<20 lux) light where hourly saliva samples were collected. Circadian phase of dim light melatonin onset (DLMO) was determined using both an absolute threshold (3 pg ml⁻¹) and a relative threshold (two standard deviations above the mean of three baseline values). Neither threshold method worked well for one participant who was a "low-secretor". In four cases the participants' in-lab melatonin levels rose much earlier or were much higher than their at-home levels, and one participant appeared to take the at-home samples out of order. Overall, the at-home and in-lab DLMO values were significantly correlated using both methods, and differed on average by 37 (±19) min using the absolute threshold and by 54 (±36) min using the relative threshold. The at-home assessment procedure was able to determine an accurate DLMO using an absolute threshold in 62.5% of the participants. Thus, an at-home procedure for assessing circadian phase could be practical for evaluating patients for circadian rhythm sleep disorders. Copyright © 2012 Elsevier B.V. All rights reserved.
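The two threshold rules can be sketched as follows (an illustrative simplification; the study's exact interpolation between samples is not reproduced, and the function name is my own):

```python
import statistics

def dlmo_times(times, melatonin, absolute=3.0):
    """First sample time at which melatonin crosses each DLMO threshold.

    Absolute rule: melatonin >= 3 pg/ml.  Relative rule: melatonin exceeds
    two standard deviations above the mean of the first three (baseline)
    samples.  Returns (t_absolute, t_relative), or None if never crossed."""
    baseline = melatonin[:3]
    relative = statistics.mean(baseline) + 2.0 * statistics.stdev(baseline)
    t_abs = next((t for t, m in zip(times, melatonin) if m >= absolute), None)
    t_rel = next((t for t, m in zip(times, melatonin) if m > relative), None)
    return t_abs, t_rel
```

The relative rule adapts to each participant's baseline secretion, which is why it can disagree with the fixed 3 pg/ml rule for low- or high-secretors.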
A vibration-insensitive optical cavity and absolute determination of its ultrahigh stability.
Zhao, Y N; Zhang, J; Stejskal, A; Liu, T; Elman, V; Lu, Z H; Wang, L J
2009-05-25
We use the three-cornered-hat method to evaluate the absolute frequency stabilities of three different ultrastable reference cavities, one of which has a vibration-insensitive design that does not even require vibration isolation. An Nd:YAG laser and a diode laser are implemented as light sources. We observe approximately 1 Hz beat note linewidths between all three cavities. The measurement demonstrates that the vibration-insensitive cavity has a good frequency stability over the entire measurement time from 100 ms to 200 s. An absolute, correlation-removed Allan deviation of 1.4 × 10⁻¹⁵ at s of this cavity is obtained, giving a frequency uncertainty of only 0.44 Hz.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 1: January
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice daily gridded analyses produced by the European Centre for Medium Range Weather Forecasts. This volume is for the month of January. Included are global analyses of: (1) Mean temperature standard deviation; (2) Mean geopotential height standard deviation; (3) Mean density standard deviation; (4) Mean density standard deviation (all for 13 levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point standard deviation for the 13 levels; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Comparing Standard Deviation Effects across Contexts
ERIC Educational Resources Information Center
Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.
2017-01-01
Studies using tests scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…
The Primordial Inflation Explorer (PIXIE)
NASA Technical Reports Server (NTRS)
Kogut, Alan; Chluba, Jens; Fixsen, Dale J.; Meyer, Stephan; Spergel, David
2016-01-01
The Primordial Inflation Explorer is an Explorer-class mission to open new windows on the early universe through measurements of the polarization and absolute frequency spectrum of the cosmic microwave background. PIXIE will measure the gravitational-wave signature of primordial inflation through its distinctive imprint in linear polarization, and characterize the thermal history of the universe through precision measurements of distortions in the blackbody spectrum. PIXIE uses an innovative optical design to achieve background-limited sensitivity in 400 spectral channels spanning over 7 octaves in frequency from 30 GHz to 6 THz (1 cm to 50 micron wavelength). Multi-moded non-imaging optics feed a polarizing Fourier Transform Spectrometer to produce a set of interference fringes, proportional to the difference spectrum between orthogonal linear polarizations from the two input beams. Multiple levels of symmetry and signal modulation combine to reduce systematic errors to negligible levels. PIXIE will map the full sky in Stokes I, Q, and U parameters with angular resolution 2.6 degrees and sensitivity 70 nK per 1 degree square pixel. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r < 10⁻³ at 5 standard deviations. The PIXIE mission complements anticipated ground-based polarization measurements such as CMBS4, providing a cosmic-variance-limited determination of the large-scale E-mode signal to measure the optical depth, constrain models of reionization, and provide a firm detection of the neutrino mass (the last unknown parameter in the Standard Model of particle physics). In addition, PIXIE will measure the absolute frequency spectrum to characterize deviations from a blackbody with sensitivity 3 orders of magnitude beyond the seminal COBE/FIRAS limits.
The sky cannot be black at this level; the expected results will constrain physical processes ranging from inflation to the nature of the first stars and the physical conditions within the interstellar medium of the Galaxy. We describe the PIXIE instrument and mission architecture required to measure the CMB to the limits imposed by astrophysical foregrounds.
NASA Astrophysics Data System (ADS)
Yamamoto, Y.; Hoshi, H.
2005-12-01
Correct determination of absolute paleointensities is essential to investigate the past geomagnetic field. There are two types of methods to obtain the paleointensities: the Thellier-type and Shaw-type methods. Many paleomagnetists have so far regarded the former method as reliable. However, there is increasing evidence that it is sometimes not robust for basaltic lavas, resulting in systematically high paleointensities (e.g. Calvo et al., 2002; Yamamoto et al., 2003). Alternatively, the double heating technique of the Shaw method combined with low temperature demagnetization (LTD-DHT Shaw method; Tsunakawa et al., 1997; Yamamoto et al., 2003), a recently developed paleointensity technique in Japan, can yield reliable answers even from such basaltic samples (e.g. Yamamoto et al., 2003; Mochizuki et al., 2004; Oishi et al., 2005). In the Japanese archipelago, there are not only basaltic lavas but also andesitic lavas. They are important candidates for absolute paleointensity determination in Japan. For a case study, we sampled oriented paleomagnetic cores from three sites of the Sakurajima 1914 (TS01 and TS02) and 1946 (SW01) lavas in Japan. Several rock magnetic experiments revealed that the main magnetic carriers of the present samples are titanomagnetites with Curie temperatures of about 300-550 °C, and that high temperature oxidation progresses in the order of SW01, TS01 and TS02. The LTD-DHT Shaw and Coe-Thellier experiments were conducted on 72 and 63 specimens, respectively. They gave 64 and 60 successful determinations. If the results are normalized by expected field intensities calculated from IGRF-9 (Macmillan et al., 2003) and grouped into LTD-DHT Shaw and Thellier datasets, their averages and standard deviations (1 sigma) are 0.98 ± 0.11 (LTD-DHT Shaw) and 1.13 ± 0.13 (Thellier). Considering the standard deviations, we can say that both paleointensity methods recovered the correct geomagnetic field.
However, it is apparent that the LTD-DHT Shaw method has higher reliability than the Thellier method.
Extracting accurate and precise topography from LROC narrow angle camera stereo observations
NASA Astrophysics Data System (ADS)
Henriksen, M. R.; Manheim, M. R.; Burns, K. N.; Seymour, P.; Speyerer, E. J.; Deran, A.; Boyd, A. K.; Howington-Kraus, E.; Rosiek, M. R.; Archinal, B. A.; Robinson, M. S.
2017-02-01
The Lunar Reconnaissance Orbiter Camera (LROC) includes two identical Narrow Angle Cameras (NAC) that each provide 0.5 to 2.0 m scale images of the lunar surface. Although not designed as a stereo system, LROC can acquire NAC stereo observations over two or more orbits using at least one off-nadir slew. Digital terrain models (DTMs) are generated from sets of stereo images and registered to profiles from the Lunar Orbiter Laser Altimeter (LOLA) to improve absolute accuracy. With current processing methods, DTMs have absolute accuracies better than the uncertainties of the LOLA profiles and relative vertical and horizontal precisions less than the pixel scale of the DTMs (2-5 m). We computed slope statistics from 81 highland and 31 mare DTMs across a range of baselines. For a baseline of 15 m the highland mean slope parameters are: median = 9.1°, mean = 11.0°, standard deviation = 7.0°. For the mare the mean slope parameters are: median = 3.5°, mean = 4.9°, standard deviation = 4.5°. The slope values for the highland terrain are steeper than previously reported, likely due to a bias in targeting of the NAC DTMs toward higher relief features in the highland terrain. Overlapping DTMs of single stereo sets were also combined to form larger area DTM mosaics that enable detailed characterization of large geomorphic features. From one DTM mosaic we mapped a large viscous flow related to the Orientale basin ejecta and estimated its thickness and volume to exceed 300 m and 500 km³, respectively. Despite its ∼3.8 billion year age the flow still exhibits unconfined margin slopes above 30°, in some cases exceeding the angle of repose, consistent with deposition of material rich in impact melt. We show that the NAC stereo pairs and derived DTMs represent an invaluable tool for science and exploration purposes.
To date, about 2% of the lunar surface has been imaged in high-resolution stereo, and continued acquisition of stereo observations will serve to strengthen our knowledge of the Moon and of geologic processes that occur across all of the terrestrial planets.
Multicentre dose audit for clinical trials of radiation therapy in Asia
Fukuda, Shigekazu; Fukumura, Akifumi; Nakamura, Yuzuru-Kutsutani; Jianping, Cao; Cho, Chul-Koo; Supriana, Nana; Dung, To Anh; Calaguas, Miriam Joy; Devi, C.R. Beena; Chansilpa, Yaowalak; Banu, Parvin Akhter; Riaz, Masooma; Esentayeva, Surya; Kato, Shingo; Karasawa, Kumiko; Tsujii, Hirohiko
2017-01-01
A dose audit of 16 facilities in 11 countries has been performed within the framework of the Forum for Nuclear Cooperation in Asia (FNCA) quality assurance program. The quality of radiation dosimetry varies because of the large variation in radiation therapy among the participating countries. One of the most important aspects of international multicentre clinical trials is uniformity of absolute dose between centres. The National Institute of Radiological Sciences (NIRS) in Japan has conducted a dose audit of participating countries since 2006 by using radiophotoluminescent glass dosimeters (RGDs). RGDs have been successfully applied to a domestic postal dose audit in Japan. The authors used the same audit system to perform a dose audit of the FNCA countries. The average and standard deviation of the relative deviation between the measured and intended dose among 46 beams was 0.4% and 1.5% (k = 1), respectively. This is an excellent level of uniformity for the multicountry data. However, of the 46 beams measured, a single beam exceeded the permitted tolerance level of ±5%. We investigated the cause for this and solved the problem. This event highlights the importance of external audits in radiation therapy. PMID:27864507
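The audit statistic used above, the relative deviation of measured from intended dose with a ±5% tolerance, can be sketched as follows; the beam values in the test are invented, not the FNCA data:

```python
import numpy as np

def dose_audit(measured, intended, tolerance_pct=5.0):
    """Per-beam relative deviation (%), its mean and standard deviation
    (k = 1, i.e. one sample standard deviation), and the indices of
    beams outside the permitted tolerance (illustrative sketch)."""
    measured = np.asarray(measured, dtype=float)
    intended = np.asarray(intended, dtype=float)
    dev_pct = 100.0 * (measured - intended) / intended
    flagged = np.flatnonzero(np.abs(dev_pct) > tolerance_pct)
    return dev_pct.mean(), dev_pct.std(ddof=1), flagged
```

A beam 6% above its intended dose would be flagged, matching the single out-of-tolerance beam reported above.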
NASA Astrophysics Data System (ADS)
Li, Qimeng; Li, Shichun; Hu, Xianglong; Zhao, Jing; Xin, Wenhui; Song, Yuehui; Hua, Dengxin
2018-01-01
Absolute measurement of atmospheric temperature avoids the calibration process and improves measurement accuracy. To realize a rotational Raman temperature lidar with absolute measurement capability, a two-stage parallel multi-channel spectroscopic filter, combining a first-order blazed grating with a fiber Bragg grating, was designed and its performance tested. The parameters and optical path structure of the core cascaded device (a micron-level fiber array) were optimized, and the optical path of the primary spectroscope was simulated: the maximum centrifugal distortion of the rotational Raman spectrum is approximately 0.0031 nm, with a centrifugal ratio of 0.69%. The experimental results show that the channel coefficients of the primary spectroscope are 0.67, 0.91, 0.67, 0.75, 0.82, 0.63, 0.87, 0.97, 0.89, 0.87 and 1, using the twelfth channel as a reference, and the average FWHM is about 0.44 nm. The maximum deviation between the experimental wavelength and the theoretical value is approximately 0.0398 nm, a deviation of 8.86%. The suppression of the elastic scattering signal is 30.6, 35.2, 37.1, 38.4, 36.8, 38.2, 41.0, 44.3, 44.0 and 46.7 dB in the respective channels; combined with the second spectroscope, the total suppression is at least 65 dB. Therefore, single rotational Raman lines can be finely extracted to achieve absolute temperature measurement.
Mancia, G; Ferrari, A; Gregorini, L; Parati, G; Pomidossi, G; Bertinieri, G; Grassi, G; Zanchetti, A
1980-12-01
1. Intra-arterial blood pressure and heart rate were recorded for 24 h in ambulant hospitalized patients of variable age who had normal blood pressure or essential hypertension. Mean 24 h values, standard deviations and variation coefficients were obtained as the averages of values separately analysed for 48 consecutive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for heart rate were smaller. 3. In hypertensive subjects standard deviation for mean arterial pressure was greater than in normotensive subjects of similar ages, but this was not the case for variation coefficient, which was slightly smaller in the former than in the latter group. Normotensive and hypertensive subjects showed no difference in standard deviation and variation coefficient for heart rate. 4. In both normotensive and hypertensive subjects standard deviation and even more so variation coefficient were slightly or not related to arterial baroreflex sensitivity as measured by various methods (phenylephrine, neck suction etc.). 5. It is concluded that blood pressure variability increases and heart rate variability decreases with age, but that changes in variability are not so obvious in hypertension. Also, differences in variability among subjects are only marginally explained by differences in baroreflex function.
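The study's contrast between the standard deviation and the variation coefficient (coefficient of variation) can be illustrated with invented numbers: scaling all pressure values up, as in a hypertensive subject, raises the standard deviation but leaves the coefficient of variation unchanged.

```python
import statistics

def variability(half_hour_means):
    """Mean, sample standard deviation, and coefficient of variation (%)
    of a series of half-hour mean values (illustrative sketch)."""
    mu = statistics.fmean(half_hour_means)
    sd = statistics.stdev(half_hour_means)
    cv = 100.0 * sd / mu   # variation coefficient, percent of the mean
    return mu, sd, cv
```

This is why the abstract can report a larger standard deviation but a slightly smaller variation coefficient in the hypertensive group.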
Confronting Passive and Active Sensors with Non-Gaussian Statistics
Rodríguez-Gonzálvez, Pablo; Garcia-Gago, Jesús; Gomez-Lahoz, Javier; González-Aguilera, Diego
2014-01-01
This paper has two motivations: firstly, to compare the Digital Surface Models (DSM) derived by passive (digital camera) and by active (terrestrial laser scanner) remote sensing systems when applied to specific architectural objects, and secondly, to test how well classic Gaussian statistics, with its Least Squares principle, adapts to data sets where asymmetrical gross errors may appear, and whether this approach should be exchanged for a non-parametric one. The field of geomatic technology automation is immersed in a highly demanding competition in which any innovation by one of the contenders immediately challenges the opponents to propose a better improvement. Nowadays, we seem to be witnessing an improvement of terrestrial photogrammetry and its integration with computer vision to overcome the performance limitations of laser scanning methods. Through this contribution, some of the issues of this “technological race” are examined from the point of view of photogrammetry. New software is introduced and an experimental test is designed, performed and assessed to try to cast some light on this thrilling match. For the case considered in this study, the results show good agreement between both sensors, despite considerable asymmetry. This asymmetry suggests that the standard Normal parameters are not adequate to assess this type of data, especially when accuracy is of importance. In this case, the standard deviation fails to provide a good estimate of the results, whereas the Median Absolute Deviation and the Biweight Midvariance are more appropriate measures. PMID:25196104
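The robust spread measures favoured above can be sketched as follows. The formulas are the standard textbook forms (e.g. Wilcox's robust statistics texts), and the outlier data in the test are invented:

```python
import statistics

def mad(x, scale=1.4826):
    """Median Absolute Deviation, scaled to be consistent with the
    standard deviation under normality (scale factor ~1.4826)."""
    m = statistics.median(x)
    return scale * statistics.median(abs(v - m) for v in x)

def biweight_midvariance(x, c=9.0):
    """Biweight midvariance: a robust variance estimate that
    down-weights points far from the median (sketch)."""
    m = statistics.median(x)
    s = statistics.median(abs(v - m) for v in x)  # unscaled MAD
    num = den = 0.0
    for v in x:
        u = (v - m) / (c * s)
        if abs(u) < 1.0:                       # points beyond 9*MAD get zero weight
            num += (v - m) ** 2 * (1 - u * u) ** 4
            den += (1 - u * u) * (1 - 5 * u * u)
    return len(x) * num / den ** 2
```

On data with a single asymmetric gross error, both robust measures stay near the spread of the clean points while the standard deviation explodes, which is exactly the behaviour the paper exploits.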
A meta-analysis of the validity of FFQ targeted to adolescents.
Tabacchi, Garden; Filippi, Anna Rita; Amodio, Emanuele; Jemni, Monèm; Bianco, Antonino; Firenze, Alberto; Mammina, Caterina
2016-05-01
The present work is aimed at meta-analysing validity studies of FFQ for adolescents, to investigate their overall accuracy and variables that can affect it negatively. A meta-analysis of sixteen original articles was performed within the ASSO Project (Adolescents and Surveillance System in the Obesity prevention). The articles assessed the validity of FFQ for adolescents, compared with food records or 24 h recalls, with regard to energy and nutrient intakes. Pearson's or Spearman's correlation coefficients, means/standard deviations, kappa agreement, percentiles and mean differences/limits of agreement (Bland-Altman method) were extracted. Pooled estimates were calculated and heterogeneity tested for correlation coefficients and means/standard deviations. A subgroup analysis assessed variables influencing FFQ accuracy. An overall fair/high correlation between FFQ and reference method was found; a good agreement, measured through the intake mean comparison for all nutrients except sugar, carotene and K, was observed. Kappa values showed fair/moderate agreement; an overall good ability to rank adolescents according to energy and nutrient intakes was evidenced by data of percentiles; absolute validity was not confirmed by mean differences/limits of agreement. Interviewer administration mode, consumption interval of the previous year/6 months and high number of food items are major contributors to heterogeneity and thus can reduce FFQ accuracy. The meta-analysis shows that FFQ are accurate tools for collecting data and could be used for ranking adolescents in terms of energy and nutrient intakes. It suggests how the design and the validation of a new FFQ should be addressed.
ESTIMATION OF EFFECTIVE SHEAR STRESS WORKING ON FLAT SHEET MEMBRANE USING FLUIDIZED MEDIA IN MBRs
NASA Astrophysics Data System (ADS)
Zaw, Hlwan Moe; Li, Tairi; Nagaoka, Hiroshi; Mishima, Iori
This study aimed to estimate the effective shear stress acting on a flat sheet membrane due to the addition of fluidized media in MBRs. In laboratory-scale aeration tanks with and without fluidized media, shear stress variations on the membrane surface and water-phase velocity variations were measured, and MBR operation was conducted. To evaluate the effective shear stress acting on the membrane surface to mitigate membrane fouling, the trans-membrane pressure increase was simulated. The time-averaged absolute value of shear stress was smaller in the reactor with fluidized media than in the reactor without. However, due to the strong turbulence caused by the interaction between the water phase and the media, and due to the direct interaction between the membrane surface and the fluidized media, the standard deviation of shear stress on the membrane surface was larger in the reactor with fluidized media. Histograms of the shear stress variation data fitted well to normal distribution curves, and the mean plus three times the standard deviation was defined as the maximum shear stress value. By applying this defined maximum shear stress to a membrane fouling model, the trans-membrane pressure curve in the MBR experiment was simulated well, indicating that the maximum shear stress, not the time-averaged shear stress, can be regarded as the effective shear stress for preventing membrane fouling in submerged flat-sheet MBRs.
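The paper's definition of the effective (maximum) shear stress as the mean plus three standard deviations of the fluctuating signal can be sketched as follows; the sample values in the test are invented:

```python
import statistics

def effective_max_shear(stress_samples):
    """Maximum shear stress defined as mean + 3*sigma of a shear-stress
    time series assumed to be normally distributed (sketch)."""
    mu = statistics.fmean(stress_samples)
    sigma = statistics.stdev(stress_samples)
    return mu + 3.0 * sigma
```

For a normal distribution, mean + 3σ bounds about 99.87% of the instantaneous values, which is why it serves as a practical "maximum" of the fluctuating stress.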
Singh, R P; Sabarinath, S; Gautam, N; Gupta, R C; Singh, S K
2009-07-15
The present manuscript describes the development and validation of an LC-MS/MS assay for the simultaneous quantitation of 97/78 and its active in vivo metabolite 97/63 in monkey plasma, using alpha-arteether as the internal standard (IS). The method involves single-step protein precipitation with acetonitrile as the extraction method. The analytes were separated on a Columbus C(18) (50 mm x 2 mm i.d., 5 microm particle size) column by isocratic elution with acetonitrile:ammonium acetate buffer (pH 4, 10 mM) (80:20 v/v) at a flow rate of 0.45 mL/min, and analyzed by mass spectrometry in multiple reaction monitoring (MRM) positive ion mode. The chromatographic run time was 4.0 min and the weighted (1/x(2)) calibration curves were linear over the range 1.56-200 ng/mL. The method was linear for both analytes, with correlation coefficients >0.995. The intra-day and inter-day accuracy (% bias) and precision (% RSD) of the assay were less than 6.27%. Both analytes were stable after three freeze-thaw cycles (% deviation <8.2) and for 30 days in plasma (% deviation <6.7). The absolute recoveries of 97/78, 97/63 and the IS from spiked plasma samples were >90%. The validated assay was successfully applied to a pharmacokinetic study of 97/78 and its active in vivo metabolite 97/63 in Rhesus monkeys.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 7: July
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice-daily gridded analyses produced by the European Centre for Medium-Range Weather Forecasts. This volume is for the month of July. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 10: October
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice-daily gridded analyses produced by the European Centre for Medium-Range Weather Forecasts. This volume is for the month of October. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all at 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation at levels 1000 through 30 mb; and (6) Jet stream at levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 3: March
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-11-01
The upper atmosphere was studied based on 1980 to 1985 twice-daily gridded analyses produced by the European Centre for Medium-Range Weather Forecasts. This volume is for the month of March. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation for levels 1000 through 30 mb; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 2: February
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-09-01
The upper atmosphere was studied based on 1980 to 1985 twice-daily gridded analyses produced by the European Centre for Medium-Range Weather Forecasts. This volume is for the month of February. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
Joint US Navy/US Air Force climatic study of the upper atmosphere. Volume 4: April
NASA Astrophysics Data System (ADS)
Changery, Michael J.; Williams, Claude N.; Dickenson, Michael L.; Wallace, Brian L.
1989-07-01
The upper atmosphere was studied based on 1980 to 1985 twice-daily gridded analyses produced by the European Centre for Medium-Range Weather Forecasts. This volume is for the month of April. Included are global analyses of: (1) Mean temperature/standard deviation; (2) Mean geopotential height/standard deviation; (3) Mean density/standard deviation; (4) Height and vector standard deviation (all for 13 pressure levels - 1000, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30 mb); (5) Mean dew point/standard deviation for the 13 levels; and (6) Jet stream for levels 500 through 30 mb. Also included are global 5 degree grid point wind roses for the 13 pressure levels.
DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M
2010-03-29
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image, which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
Preconditioning of Interplanetary Space Due to Transient CME Disturbances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Temmer, M.; Reiss, M. A.; Hofmeister, S. J.
Interplanetary space is characteristically structured mainly by high-speed solar wind streams emanating from coronal holes and by transient disturbances such as coronal mass ejections (CMEs). While high-speed solar wind streams pose a continuous outflow, CMEs abruptly disrupt the rather steady structure, causing large deviations from quiet solar wind conditions. For the first time, we quantify the duration of disturbed conditions (preconditioning) of interplanetary space caused by CMEs. To this aim, we investigate the plasma speed component of the solar wind and the impact of in situ detected interplanetary CMEs (ICMEs), compared to different background solar wind models (ESWF, WSA, persistence model) for the time range 2011–2015. We quantify, in terms of standard error measures, the deviations between modeled background solar wind speed and observed solar wind speed. Using the mean absolute error, we obtain an average deviation for quiet solar activity within a range of 75.1–83.1 km s⁻¹. Compared to this baseline level, periods within the ICME interval showed an increase of 18%–32% above the expected background, and the period of two days after the ICME displayed an increase of 9%–24%. We obtain a total duration of enhanced deviations of about three and up to six days after the ICME start, which is much longer than the average duration of an ICME disturbance itself (∼1.3 days), concluding that interplanetary space needs ∼2–5 days to recover from the impact of ICMEs. The obtained results have strong implications for studying CME propagation behavior and for space weather forecasting.
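The mean-absolute-error baseline used above can be sketched as follows; the wind speeds in the test are invented, not the 2011–2015 observations:

```python
def mean_absolute_error(modeled, observed):
    """Mean absolute deviation between modeled and observed
    solar wind speeds (same units, e.g. km/s)."""
    return sum(abs(m - o) for m, o in zip(modeled, observed)) / len(observed)
```

Comparing the MAE inside an ICME interval against the quiet-time baseline MAE gives the percentage enhancements quoted in the abstract.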
Pinilla, Jaime; López-Valcárcel, Beatriz G; González-Martel, Christian; Peiro, Salvador
2018-05-09
Newcomb-Benford's Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many different naturally occurring sources of data. Testing deviations from NBL is used in many datasets as a screening tool for identifying data trustworthiness problems. This study compares publicly available waiting list (WL) data from Finland and Spain to test NBL as an instrument for flagging up potential manipulation in WLs. We analysed the frequency of first digits in the Finnish and Spanish WLs to determine whether their distribution follows the pattern documented by NBL. Deviations from the expected first-digit frequency were analysed using Pearson's χ², mean absolute deviation and Kuiper tests. The publicly available WL data come from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards. The outcome was the adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL. WL data reported by the Finnish health system fit first-digit NBL according to all statistical tests used (p=0.6519 in the χ² test). For the Spanish data, this hypothesis was rejected in all tests (p<0.0001 in the χ² test). Testing deviations from the NBL distribution can be a useful tool to identify problems with WL data trustworthiness, signalling the need for further testing.
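A first-digit Benford test of the kind applied to the waiting-list data can be sketched as follows; the χ² statistic is the standard Pearson form with 8 degrees of freedom, and the values in the test are invented:

```python
import math

# Expected Newcomb-Benford first-digit probabilities: P(d) = log10(1 + 1/d)
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    """First significant digit of a positive number."""
    x = abs(x)
    while x < 1:
        x *= 10
    while x >= 10:
        x /= 10
    return int(x)

def benford_chi2(values):
    """Pearson chi-squared statistic of the observed first digits
    against the Newcomb-Benford expected frequencies."""
    n = len(values)
    counts = {d: 0 for d in range(1, 10)}
    for v in values:
        counts[first_digit(v)] += 1
    return sum((counts[d] - n * p) ** 2 / (n * p) for d, p in BENFORD.items())
```

Values of the statistic above roughly 15.5 (the 5% critical value of χ² with 8 degrees of freedom) suggest a deviation worth investigating further, as in the Spanish WL data.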
Photodisintegration cross section of the reaction 4He(γ,n)3He at the giant dipole resonance peak
NASA Astrophysics Data System (ADS)
Tornow, W.; Kelley, J. H.; Raut, R.; Rusev, G.; Tonchev, A. P.; Ahmed, M. W.; Crowell, A. S.; Stave, S. C.
2012-06-01
The photodisintegration cross section of 4He into a neutron and helion was measured at incident photon energies of 27.0, 27.5, and 28.0 MeV. A high-pressure 4He-Xe gas scintillator served as target and detector, while a pure Xe gas scintillator was used for background measurements. A NaI detector in combination with the standard HIγS scintillator paddle system was employed for absolute photon-flux determination. Our data are in good agreement with the theoretical prediction of the Trento group and the recent data of Nilsson et al. [Phys. Rev. C 75, 014007 (2007)] but deviate considerably from the high-precision data of Shima et al. [Phys. Rev. C 72, 044004 (2005)].
The integration of FPGA TDC inside White Rabbit node
NASA Astrophysics Data System (ADS)
Li, H.; Xue, T.; Gong, G.; Li, J.
2017-04-01
White Rabbit technology is capable of delivering sub-nanosecond accuracy and picosecond precision of synchronization, along with normal data packets, over a fiber network. The carry chain structure in FPGAs is a popular way to build a TDC, and RMS resolutions of tens of picoseconds have been achieved. Integrating WR technology with an FPGA TDC can enhance and simplify the TDC in many aspects: it provides a low-jitter clock for the TDC, a synchronized absolute UTC/TAI timestamp for the coarse counter, a convenient way to calibrate the carry chain DNL, and an easy-to-use Ethernet link for data and control information transmission. This paper presents an FPGA TDC implemented inside a normal White Rabbit node with sub-nanosecond measurement precision. The measured standard deviation reaches 50 ps between two distributed TDCs. Possible applications of this distributed TDC are also discussed.
Reproducibility of Fluorescent Expression from Engineered Biological Constructs in E. coli
Beal, Jacob; Haddock-Angelli, Traci; Gershater, Markus; de Mora, Kim; Lizarazo, Meagan; Hollenhorst, Jim; Rettberg, Randy
2016-01-01
We present results of the first large-scale interlaboratory study carried out in synthetic biology, as part of the 2014 and 2015 International Genetically Engineered Machine (iGEM) competitions. Participants at 88 institutions around the world measured fluorescence from three engineered constitutive constructs in E. coli. Few participants were able to measure absolute fluorescence, so data was analyzed in terms of ratios. Precision was strongly related to fluorescent strength, ranging from 1.54-fold standard deviation for the ratio between strong promoters to 5.75-fold for the ratio between the strongest and weakest promoter, and while host strain did not affect expression ratios, choice of instrument did. This result shows that high quantitative precision and reproducibility of results is possible, while at the same time indicating areas needing improved laboratory practices. PMID:26937966
Exploring Students' Conceptions of the Standard Deviation
ERIC Educational Resources Information Center
delMas, Robert; Liu, Yan
2005-01-01
This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2012 CFR
2012-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2014 CFR
2014-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2011 CFR
2011-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2013 CFR
2013-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation
ERIC Educational Resources Information Center
Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann
2017-01-01
This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2010 CFR
2010-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.6 - Tolerances for moisture meters.
Code of Federal Regulations, 2010 CFR
2010-01-01
... moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat Mid ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat High ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat...
Wang, Guochao; Tan, Lilong; Yan, Shuhua
2018-02-07
We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He-Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10 -8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.
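The non-ambiguous-range extension described above rests on the standard synthetic-wavelength relation Λ = λ₁λ₂/|λ₁ − λ₂|; a minimal sketch follows, with wavelength values in the test chosen for illustration rather than taken from the instrument:

```python
def synthetic_wavelength(lam1, lam2):
    """Synthetic (beat) wavelength of two optical wavelengths, in the
    same units as the inputs; the non-ambiguous range of the
    corresponding interferometric phase measurement is Lambda/2."""
    return lam1 * lam2 / abs(lam1 - lam2)
```

The closer the two optical wavelengths, the longer the synthetic wavelength, which is how a fifth wavelength can stretch the non-ambiguous range to the meter scale.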
Zhou, Jinhui; Xue, Xiaofeng; Li, Yi; Zhang, Jinzhen; Zhao, Jing
2007-01-01
An optimized reversed-phase high-performance liquid chromatography method was developed to detect the trans-10-hydroxy-2-decenoic acid (10-HDA) content in royal jelly cream and lyophilized powder. The sample was extracted using absolute ethanol. Chromatographic separation of 10-HDA and methyl 4-hydroxybenzoate as the internal standard was performed on a Nova-Pak C18 column. The average recoveries were 95.0-99.2% (n = 5) with relative standard deviation (RSD) values of 1.3-2.1% for royal jelly cream and 98.0-100.0% (n = 5) with RSD values of 1.6-3.0% for lyophilized powder, respectively. The limits of detection and quantitation were 0.5 and 1.5 mg/kg, respectively, for both royal jelly cream and lyophilized powder. The method was validated for the determination of 10-HDA in practical royal jelly products. The concentration of 10-HDA ranged from 1.26 to 2.21% for pure royal jelly cream samples and 3.01 to 6.19% for royal jelly lyophilized powder samples. For 30 royal jelly products, the 10-HDA content varied from not detectable to 0.98%.
NASA Astrophysics Data System (ADS)
Qie, L.; Li, Z.; Li, L.; Li, K.; Li, D.; Xu, H.
2018-04-01
The Devaux-Vermeulen-Li (DVL) method is a simple approach to retrieving aerosol optical parameters from Sun-sky radiance measurements. Building on previous work retrieving the aerosol single scattering albedo (SSA) and scattering phase function, this study modified the DVL method to derive the aerosol asymmetry factor (g). To assess the algorithm's performance under various atmospheric aerosol conditions, retrievals were performed on AERONET observations and the results compared with the official AERONET products. The comparison shows that both the DVL SSA and g correlate well with those of AERONET. The RMSD and the absolute value of the MBD between the SSAs are 0.025 and 0.015 respectively, well below the AERONET-declared SSA uncertainty of 0.03 for all wavelengths. For the asymmetry factor g, the RMSDs are smaller than 0.02 and the absolute values of the MBDs smaller than 0.01 at the 675, 870 and 1020 nm bands. Considering several factors that may affect retrieval quality (the aerosol optical depth (AOD), the solar zenith angle, the sky residual error, the sphericity proportion and the Ångström exponent), the deviations in SSA and g for the two algorithms were calculated over varying value intervals. Both the SSA and g deviations were found to decrease with AOD and solar zenith angle, and to increase with sky residual error. However, the deviations show no clear sensitivity to the sphericity proportion or the Ångström exponent, indicating that the DVL algorithm is applicable to both large non-spherical particles and spherical particles. The DVL results are suitable for evaluating the aerosol direct radiative effects of different aerosol types.
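The RMSD and MBD comparison metrics used above are standard; a minimal sketch follows, with test values invented rather than taken from AERONET retrievals:

```python
import math

def rmsd(retrieved, reference):
    """Root-mean-square deviation between paired retrievals."""
    n = len(reference)
    return math.sqrt(sum((r - t) ** 2 for r, t in zip(retrieved, reference)) / n)

def mbd(retrieved, reference):
    """Mean bias deviation: signed average of (retrieved - reference)."""
    return sum(r - t for r, t in zip(retrieved, reference)) / len(reference)
```

RMSD measures total scatter, while the sign of MBD reveals whether one algorithm systematically over- or under-estimates relative to the other.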
NASA Astrophysics Data System (ADS)
Dhakal, Y. P.; Kunugi, T.; Suzuki, W.; Aoi, S.
2013-12-01
The Mw 9.1 Tohoku-oki earthquake caused strong shaking of high-rise and super-high-rise buildings constructed on deep sedimentary basins in Japan. Many people had difficulty moving inside high-rise buildings even on the Osaka basin, located as far as 800 km from the epicentral area. Several empirical equations have been proposed to estimate peak ground motions and absolute acceleration response spectra, applicable mainly within 300 to 500 km of the source area. The Japan Meteorological Agency, meanwhile, has recently proposed four classes of absolute velocity response spectra as indices to qualitatively describe the intensity of long-period ground motions, based on observed earthquake records, human experience, and actual damage to high-rise and super-high-rise buildings. Such empirical prediction equations are used in disaster mitigation planning as well as earthquake early warning. In this study, we discuss the results of a preliminary analysis of the attenuation relation of absolute velocity response spectra calculated from observed strong-motion records, including those from the Mw 9.1 Tohoku-oki earthquake, using simple regression models with various model parameters. We used earthquakes of Mw 6.5 or greater with focal depths shallower than 50 km that occurred in and around the Japanese archipelago, selecting events for which good-quality records were available at more than 50 observation sites combined from K-NET and KiK-net. After visual inspection of approximately 21,000 three-component records from 36 earthquakes, we retained about 15,000 good-quality records in the period range of 1 to 10 s within a hypocentral distance (R) of 800 km. We performed regression analyses assuming the following five models.
(1) log10 Y(T) = c + a·Mw - log10 R - b·R
(2) log10 Y(T) = c + a·Mw - log10 R - b·R + g·S
(3) log10 Y(T) = c + a·Mw - log10 R - b·R + h·D
(4) log10 Y(T) = c + a·Mw - log10 R - b·R + g·S + h·D
(5) log10 Y(T) = c + a·Mw - log10 R - b·R + Σ gi·Si + h·D
Here Y(T) is the 5% damped peak vector response in cm/s derived from the two horizontal-component records for a natural period T in seconds. In (2), S is a dummy variable equal to one if a site is located inside a sedimentary basin and zero otherwise. In (3), D is the depth to the top of the layer having a particular S-wave velocity; we used the deep underground S-wave velocity model available from the Japan Seismic Hazard Information Station (J-SHIS). In (5), sites are classified into individual sedimentary basins, each with its own coefficient gi. The analyses show that the standard deviations decrease in the order of the models listed and that all coefficients are significant. Interestingly, the coefficients g differ from basin to basin at most periods, and the depth to the top of the layer with an S-wave velocity of 1.7 km/s gives the smallest standard deviation, 0.31, at T = 4.4 s in model (5). This study shows the possibility of describing observed peak absolute velocity response values using simple model parameters such as site location and sedimentary depth soon after the location and magnitude of an earthquake are known.
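Model (1) can be fit by ordinary least squares once the fixed geometrical-spreading term log10 R is moved to the left-hand side; a minimal sketch (the function name and synthetic data are illustrative, not the authors' regression code):

```python
import numpy as np

def fit_model1(mw, r_km, y):
    """Least-squares fit of model (1), log10 Y = c + a*Mw - log10 R - b*R,
    with the geometrical-spreading term log10 R fixed at coefficient -1.
    Returns (c, a, b)."""
    mw, r_km, y = map(np.asarray, (mw, r_km, y))
    lhs = np.log10(y) + np.log10(r_km)          # move the fixed term left
    A = np.column_stack([np.ones_like(mw), mw, -r_km])
    (c, a, b), *_ = np.linalg.lstsq(A, lhs, rcond=None)
    return c, a, b
```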
Fredriksson, Ingemar; Burdakov, Oleg; Larsson, Marcus; Strömberg, Tomas
2013-12-01
The tissue fraction of red blood cells (RBCs) and their oxygenation and speed-resolved perfusion are estimated in absolute units by combining diffuse reflectance spectroscopy (DRS) and laser Doppler flowmetry (LDF). The DRS spectra (450 to 850 nm) are assessed at two source-detector separations (0.4 and 1.2 mm), allowing for a relative calibration routine, whereas LDF spectra are assessed at 1.2 mm in the same fiber-optic probe. Data are analyzed using nonlinear optimization in an inverse Monte Carlo technique by applying an adaptive multilayered tissue model based on geometrical, scattering, and absorbing properties, as well as RBC flow-speed information. Simulations of 250 tissue-like models including up to 2000 individual blood vessels were used to evaluate the method. The absolute root mean square (RMS) deviation between estimated and true oxygenation was 4.1 percentage units, whereas the relative RMS deviations for the RBC tissue fraction and perfusion were 19% and 23%, respectively. Examples of in vivo measurements on forearm and foot during common provocations are presented. The method offers several advantages such as simultaneous quantification of RBC tissue fraction and oxygenation and perfusion from the same, predictable, sampling volume. The perfusion estimate is speed resolved, absolute (% RBC×mm/s), and more accurate due to the combination with DRS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stow, Sarah M.; Causon, Tim J.; Zheng, Xueyun
Collision cross section (CCS) measurements from ion mobility-mass spectrometry (IM-MS) experiments provide a promising orthogonal dimension of structural information in MS-based analytical separations. As with any molecular identifier, interlaboratory standardization must precede broad integration into analytical workflows. In this study, we present a reference drift tube ion mobility mass spectrometer (DTIM-MS) in which improvements in the measurement accuracy of the experimental parameters influencing IM separations provide standardized drift tube, nitrogen CCS values (DTCCSN2) for over 120 unique ion species with the lowest measurement uncertainty to date. The reproducibility of these DTCCSN2 values was evaluated across three additional laboratories on a commercially available DTIM-MS instrument. The traditional stepped-field CCS method performs with a relative standard deviation (RSD) of 0.29% for all ion species across the three additional laboratories. The calibrated single-field CCS method, which is compatible with a wide range of chromatographic inlet systems, performs with an average absolute bias of 0.54% relative to the standardized stepped-field DTCCSN2 values on the reference system. The low RSDs and biases observed in this interlaboratory study illustrate the potential of DTIM-MS for providing a molecular identifier for a broad range of discovery-based analyses.
Single-breath diffusing capacity for carbon monoxide instrument accuracy across 3 health systems.
Hegewald, Matthew J; Markewitz, Boaz A; Wilson, Emily L; Gallo, Heather M; Jensen, Robert L
2015-03-01
Measuring the diffusing capacity of the lung for carbon monoxide (DLCO) is complex and associated with wide intra- and inter-laboratory variability. Increased DLCO variability may have important clinical consequences. The objective of this study was to assess instrument performance across hospital pulmonary function testing laboratories using a DLCO simulator that produces precise and repeatable DLCO values. DLCO instruments were tested with CO gas concentrations representing medium- and high-range DLCO values. The absolute difference between the observed and target DLCO value was used to determine measurement accuracy; accuracy was defined as an average deviation from the target value of < 2.0 mL/min/mm Hg. The accuracy of inspired volume measurement and of the gas sensors was also determined. Twenty-three instruments were tested across 3 healthcare systems. The mean absolute deviation from the target value was 1.80 mL/min/mm Hg (range 0.24-4.23), with 10 of 23 instruments (43%) being inaccurate. High-volume laboratories performed better than low-volume laboratories, although the difference was not significant. There was no significant difference among instruments by manufacturer. Inspired volume was not accurate in 48% of devices; the mean absolute deviation from the target value was 3.7%. The gas analyzers performed adequately in all instruments. DLCO instrument accuracy was unacceptable in 43% of devices. Instrument inaccuracy can be attributed primarily to errors in inspired volume measurement rather than gas analyzer performance. DLCO instrument performance may be improved by regular testing with a simulator. Caution should be used when comparing DLCO results reported from different laboratories. Copyright © 2015 by Daedalus Enterprises.
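The accuracy criterion described above reduces to comparing a mean absolute deviation against the 2.0 mL/min/mm Hg threshold; a hypothetical sketch (function name and readings are illustrative):

```python
def instrument_accuracy(observed, target, tol=2.0):
    """Mean absolute deviation of observed DLCO readings from the simulator
    target value, and a pass/fail check against the < 2.0 mL/min/mm Hg
    accuracy criterion used in the study."""
    deviations = [abs(o - target) for o in observed]
    mad = sum(deviations) / len(deviations)
    return mad, mad < tol
```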
Xu, Zhoubing; Gertz, Adam L.; Burke, Ryan P.; Bansal, Neil; Kang, Hakmook; Landman, Bennett A.; Abramson, Richard G.
2016-01-01
OBJECTIVES Multi-atlas fusion is a promising approach for computer-assisted segmentation of anatomical structures. The purpose of this study was to evaluate the accuracy and time efficiency of multi-atlas segmentation for estimating spleen volumes on clinically-acquired CT scans. MATERIALS AND METHODS Under IRB approval, we obtained 294 deidentified (HIPAA-compliant) abdominal CT scans on 78 subjects from a recent clinical trial. We compared five pipelines for obtaining splenic volumes: Pipeline 1–manual segmentation of all scans, Pipeline 2–automated segmentation of all scans, Pipeline 3–automated segmentation of all scans with manual segmentation for outliers on a rudimentary visual quality check, Pipelines 4 and 5–volumes derived from a unidimensional measurement of craniocaudal spleen length and three-dimensional splenic index measurements, respectively. Using Pipeline 1 results as ground truth, the accuracy of Pipelines 2–5 (Dice similarity coefficient [DSC], Pearson correlation, R-squared, and percent and absolute deviation of volume from ground truth) were compared for point estimates of splenic volume and for change in splenic volume over time. Time cost was also compared for Pipelines 1–5. RESULTS Pipeline 3 was dominant in terms of both accuracy and time cost. With a Pearson correlation coefficient of 0.99, average absolute volume deviation 23.7 cm3, and 1 minute per scan, Pipeline 3 yielded the best results. The second-best approach was Pipeline 5, with a Pearson correlation coefficient 0.98, absolute deviation 46.92 cm3, and 1 minute 30 seconds per scan. Manual segmentation (Pipeline 1) required 11 minutes per scan. CONCLUSION A computer-automated segmentation approach with manual correction of outliers generated accurate splenic volumes with reasonable time efficiency. PMID:27519156
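The Dice similarity coefficient (DSC) used as an accuracy metric in this comparison has a standard definition, 2|A ∩ B| / (|A| + |B|); a minimal sketch on voxel-index sets (function name is illustrative, not the study's code):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks,
    represented as sets of voxel indices."""
    a, b = set(mask_a), set(mask_b)
    return 2.0 * len(a & b) / (len(a) + len(b))
```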
Are Study and Journal Characteristics Reliable Indicators of "Truth" in Imaging Research?
Frank, Robert A; McInnes, Matthew D F; Levine, Deborah; Kressel, Herbert Y; Jesurum, Julia S; Petrcich, William; McGrath, Trevor A; Bossuyt, Patrick M
2018-04-01
Purpose To evaluate whether journal-level variables (impact factor, cited half-life, and Standards for Reporting of Diagnostic Accuracy Studies [STARD] endorsement) and study-level variables (citation rate, timing of publication, and order of publication) are associated with the distance between primary study results and summary estimates from meta-analyses. Materials and Methods MEDLINE was searched for meta-analyses of imaging diagnostic accuracy studies, published from January 2005 to April 2016. Data on journal-level and primary-study variables were extracted for each meta-analysis. Primary studies were dichotomized by variable as first versus subsequent publication, publication before versus after STARD introduction, STARD endorsement, or by median split. The mean absolute deviation of primary study estimates from the corresponding summary estimates for sensitivity and specificity was compared between groups. Means and confidence intervals were obtained by using bootstrap resampling; P values were calculated by using a t test. Results Ninety-eight meta-analyses summarizing 1458 primary studies met the inclusion criteria. There was substantial variability, but no significant differences, in deviations from the summary estimate between paired groups (P > .0041 in all comparisons). The largest difference found was in mean deviation for sensitivity, which was observed for publication timing, where studies published first on a topic demonstrated a mean deviation that was 2.5 percentage points smaller than subsequently published studies (P = .005). For journal-level factors, the greatest difference found (1.8 percentage points; P = .088) was in mean deviation for sensitivity in journals with impact factors above the median compared with those below the median. 
Conclusion Journal- and study-level variables considered important when evaluating diagnostic accuracy information to guide clinical decisions are not systematically associated with distance from the truth; critical appraisal of individual articles is recommended. © RSNA, 2017 Online supplemental material is available for this article.
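The bootstrap resampling used in the Methods to obtain means and confidence intervals of the absolute deviations can be sketched as follows; the function name, resample count, and data are illustrative, not the authors' code:

```python
import random
import statistics

def bootstrap_mad_ci(deviations, n_boot=2000, alpha=0.05, seed=0):
    """Point estimate and percentile bootstrap confidence interval for the
    mean absolute deviation of primary-study estimates from a summary estimate."""
    rng = random.Random(seed)
    n = len(deviations)
    means = sorted(
        statistics.mean(rng.choices(deviations, k=n)) for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return statistics.mean(deviations), (lo, hi)
```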
Schwarz, T; Weber, M; Wörner, M; Renkawitz, T; Grifka, J; Craiovan, B
2017-05-01
Accurate assessment of cup orientation on postoperative radiographs is essential for evaluating outcome after THA. However, accuracy is impeded by the deviation of the central X-ray beam in relation to the cup and the impossibility of measuring retroversion on standard pelvic radiographs. In an experimental trial, we built an artificial cup holder enabling the setting of different angles of anatomical anteversion and inclination. Twelve different cup orientations were investigated by three examiners. After comparing the two methods for radiographic measurement of the cup position developed by Lewinnek and Widmer, we showed how to differentiate between anteversion and retroversion in each cup position by using a second plane. To show the effect of the central beam offset on the cup, we X-rayed a defined cup position using a multidirectional central beam offset. According to Murray's definition of anteversion and inclination, we created a novel corrective procedure to balance measurement errors caused by deviation of the central beam. Measurement of the 12 different cup positions with Lewinnek's method yielded a mean deviation of [Formula: see text] (95% CI 1.3-2.3) from the original cup anteversion. The respective deviation with the Widmer/Liaw method was [Formula: see text] (95% CI 2.4-4.0). In each case, retroversion could be differentiated from anteversion with a second radiograph. Because of the multidirectional central beam offset ([Formula: see text] cm) from the acetabular cup in the cup holder ([Formula: see text] anteversion and [Formula: see text] inclination), the mean absolute difference was [Formula: see text] (range [Formula: see text] to [Formula: see text]) for anteversion and [Formula: see text] (range [Formula: see text] to [Formula: see text]) for inclination.
The application of our novel mathematical correction of the central beam offset reduced deviation to a mean difference of [Formula: see text] for anteversion and [Formula: see text] for inclination. This novel calculation for central beam offset correction enables highly accurate measurement of the cup position.
Genheden, Samuel
2017-10-01
We present the estimation of solvation free energies of small solutes in water, n-octanol and hexane using molecular dynamics simulations with two MARTINI models at different resolutions, viz. the coarse-grained (CG) and the hybrid all-atom/coarse-grained (AA/CG) models. From these estimates, we also calculate the water/hexane and water/octanol partition coefficients. More than 150 small, organic molecules were selected from the Minnesota solvation database and parameterized in a semi-automatic fashion. Using either the CG or hybrid AA/CG models, we find considerable deviations between the estimated and experimental solvation free energies in all solvents with mean absolute deviations larger than 10 kJ/mol, although the correlation coefficient is between 0.55 and 0.75 and significant. There is also no difference between the results when using the non-polarizable and polarizable water model, although we identify some improvements when using the polarizable model with the AA/CG solutes. In contrast to the estimated solvation energies, the estimated partition coefficients are generally excellent with both the CG and hybrid AA/CG models, giving mean absolute deviations between 0.67 and 0.90 log units and correlation coefficients larger than 0.85. We analyze the error distribution further and suggest avenues for improvements.
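The partition coefficients reported here follow from the difference of solvation free energies between the two solvents via the standard relation log P = ΔΔG / (RT ln 10); a minimal sketch under the sign convention noted in the comments (function name is illustrative, not from the paper):

```python
import math

R_KJ = 8.314462618e-3  # gas constant, kJ/(mol·K)

def log_partition_coefficient(dg_water, dg_organic, temperature=298.15):
    """log P from solvation free energies (kJ/mol), where more negative
    means better solvated: log P = (dG_water - dG_organic) / (RT ln 10).
    A positive log P indicates preference for the organic phase."""
    return (dg_water - dg_organic) / (R_KJ * temperature * math.log(10))
```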
Schlüns, Danny; Franchini, Mirko; Götz, Andreas W; Neugebauer, Johannes; Jacob, Christoph R; Visscher, Lucas
2017-02-05
We present a new implementation of analytical gradients for subsystem density-functional theory (sDFT) and frozen-density embedding (FDE) in the Amsterdam Density Functional program (ADF). The underlying theory and the expressions needed for the implementation are derived and discussed in detail for various FDE and sDFT setups. The parallel implementation is numerically verified, and geometry optimizations with different functional combinations (LDA/TF and PW91/PW91k) are conducted and compared to reference data. Our results confirm that sDFT-LDA/TF yields good equilibrium distances for the systems studied here (mean absolute deviation: 0.09 Å) compared to reference wave-function theory results, whereas sDFT-PW91/PW91k quite consistently yields smaller equilibrium distances (mean absolute deviation: 0.23 Å). The flexibility of the new implementation is demonstrated for an HCN-trimer test system, for which several different setups are applied. © 2016 Wiley Periodicals, Inc.
Visualizing the Sample Standard Deviation
ERIC Educational Resources Information Center
Sarkar, Jyotirmoy; Rashid, Mamunur
2017-01-01
The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
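The identity described in this abstract, that the sample SD equals the square root of twice the mean square of all pairwise half deviations, is easy to verify numerically; a minimal sketch (function name is illustrative):

```python
import math
import statistics
from itertools import combinations

def sd_from_pairwise_half_deviations(xs):
    """Sample SD computed as the square root of twice the mean square of
    all pairwise half deviations (x_i - x_j)/2 over unordered pairs,
    matching the usual sqrt of the (n-1)-denominator sample variance."""
    pairs = list(combinations(xs, 2))
    mean_sq = sum(((a - b) / 2) ** 2 for a, b in pairs) / len(pairs)
    return math.sqrt(2 * mean_sq)
```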
Development and evaluation of a prototype tracking system using the treatment couch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, Stephanie, E-mail: stephanie.lang@usz.ch; Riesterer, Oliver; Klöck, Stephan
2014-02-15
Purpose: Tumor motion increases safety margins around the clinical target volume and leads to an increased dose to the surrounding healthy tissue. The authors have developed and evaluated a one-dimensional treatment couch tracking system to counteract respiratory tumor motion. Three different motion detection sensors with different lag times were evaluated. Methods: The couch tracking system consists of a motion detection sensor, which can be the topometrical system Topos (Cyber Technologies, Germany), the respiratory gating system RPM (Varian Medical Systems), or a laser triangulation system (Micro-Epsilon), and the Protura treatment couch (Civco Medical Systems). The control of the treatment couch was implemented in the block-diagram environment Simulink (MathWorks). To achieve real-time performance, the Simulink models were executed on a real-time engine provided by Real-Time Windows Target (MathWorks). A proportional-integral control system was implemented. The lag time of the couch tracking system was measured with each of the three motion detection sensors. The geometrical accuracy of the system was evaluated by measuring the mean absolute deviation from the reference (static) position during motion tracking. This deviation was compared to the mean absolute deviation without tracking, and a reduction factor was defined. A hexapod system was moved according to seven respiration patterns previously acquired with the RPM system, as well as according to a sin^6 function at two different frequencies (0.33 and 0.17 Hz), while the treatment table compensated the motion. Results: A prototype system for treatment couch tracking of respiratory motion was developed. The laser-based tracking system, with a small lag time of 57 ms, reduced the residual motion by a factor of 11.9 ± 5.5 (mean ± standard deviation). An increase in lag time from 57 to 130 ms (RPM-based system) resulted in a reduction by a factor of 4.7 ± 2.6.
The Topos-based tracking system, with the largest lag time of 300 ms, achieved a mean reduction by a factor of 3.4 ± 2.3. The increase in the penumbra of a profile (1 × 1 cm²) for a motion of 6 mm was 1.4 mm; with tracking applied, there was no increase in the penumbra. Conclusions: Couch tracking with the Protura treatment couch is achievable. To reliably track all possible respiration patterns without prediction filters, a short lag time below 100 ms is needed. Further work is necessary to extend the prototype to tracking of internal motion.
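The proportional-integral control law named in the Methods has a simple discrete form; a minimal, illustrative sketch (gains, time step, and function name are assumptions, not the system's actual tuning):

```python
def pi_step(error, integral, kp, ki, dt):
    """One update of a discrete proportional-integral controller: the
    commanded couch correction is kp * error plus ki times the accumulated
    error integral. Returns (command, updated_integral)."""
    integral += error * dt
    return kp * error + ki * integral, integral
```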
In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignore...
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could accommodate the results only by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure standard deviations is often unspecified by context, so the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
Pressures generated in vitro during Stabident intraosseous injections.
Whitworth, J M; Ramlee, R A M; Meechan, J G
2005-05-01
To test the hypothesis that the Stabident intraosseous injection is a potentially high-pressure technique that carries serious risks of anaesthetic cartridge failure. A standard Astra dental syringe was modified to measure the internal pressure of local anaesthetic cartridges during injection. Intra-cartridge pressures were measured at 1 s intervals during slow (approximately 15 s) and rapid (<10 s) injections of 2% Xylocaine with 1:80,000 adrenaline (0.25 cartridge volumes) into air (no tissue resistance) or into freshly prepared Stabident perforation sites in the anterior mandible of freshly culled young and old sheep (against tissue resistance). Each injection was repeated 10 times over 3 days. Absolute maximum pressures generated by each category of injection, mean pressures at 1 s intervals in each series of injections, and standard deviations were calculated. Curves of mean maximum intra-cartridge pressure development with time were plotted for slow and rapid injections, and one-way ANOVA (P < 0.05) was conducted to determine significant differences between categories of injection. Pressures created when injecting into air were lower than those needed to inject into tissue (P < 0.001). Fast injection produced greater intra-cartridge pressures than slow delivery (P < 0.05). Injection pressures rose more quickly and to higher levels in small, young sheep mandibles than in larger, old sheep mandibles. The absolute maximum intra-cartridge pressure developed during the study was 3.31 MPa, which is less than that needed to fracture glass cartridges. Stabident intraosseous injection conducted in accordance with the manufacturer's instructions does not present a serious risk of dangerous pressure build-up in local anaesthetic cartridges.
NASA Technical Reports Server (NTRS)
Kogut, Alan J.; Fixsen, D. J.; Chuss, D. T.; Dotson, J.; Dwek, E.; Halpern, M.; Hinshaw, G. F.; Meyer, S. M.; Moseley, S. H.; Seiffert, M. D.;
2011-01-01
The Primordial Inflation Explorer (PIXIE) is a concept for an Explorer-class mission to measure the gravity-wave signature of primordial inflation through its distinctive imprint on the linear polarization of the cosmic microwave background. The instrument consists of a polarizing Michelson interferometer configured as a nulling polarimeter to measure the difference spectrum between orthogonal linear polarizations from two co-aligned beams. Either input can view the sky or a temperature-controlled absolute reference blackbody calibrator. The proposed instrument can map the absolute intensity and linear polarization (Stokes I, Q, and U parameters) over the full sky in 400 spectral channels spanning 2.5 decades in frequency, from 30 GHz to 6 THz (1 cm to 50 micron wavelength). Multi-moded optics provide background-limited sensitivity using only 4 detectors, while the highly symmetric design and multiple signal modulations provide robust rejection of potential systematic errors. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r < 10^-3 at 5 standard deviations. The rich PIXIE data set can also constrain physical processes ranging from Big Bang cosmology to the nature of the first stars to physical conditions within the interstellar medium of the Galaxy.
Elhennawy, Mai Gamal; Lin, Hai-Shu
2017-12-29
Tangeretin (TAN) is a dietary polymethoxylated flavone that possesses a broad scope of pharmacological activities. A simple high-performance liquid chromatography (HPLC) method was developed and validated in this study to quantify TAN in plasma of Sprague-Dawley rats. The lower limit of quantification (LLOQ) was 15 ng/mL; the intra- and inter-day assay variations, expressed as relative standard deviation (RSD), were all less than 10%; and the assay accuracy was within 100 ± 15%. Subsequently, pharmacokinetic profiles of TAN were explored and established. Upon single intravenous administration (10 mg/kg), TAN had rapid clearance (Cl = 94.1 ± 20.2 mL/min/kg) and a moderate terminal elimination half-life (t1/2,λz = 166 ± 42 min). When TAN was given as a suspension (50 mg/kg), poor and erratic absolute oral bioavailability (mean value < 3.05%) was observed; however, when TAN was given in a solution prepared with randomly methylated β-cyclodextrin (50 mg/kg), its plasma exposure was at least doubled (mean bioavailability: 6.02%). It is evident that aqueous solubility hindered the oral absorption of TAN and acted as a barrier to its oral bioavailability. This study will facilitate further investigations on the medicinal potential of TAN.
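The absolute oral bioavailability figures above come from dose-normalized AUC ratios between oral and intravenous administration; a minimal sketch (function name and numbers are illustrative, not the study's data):

```python
def absolute_bioavailability(auc_oral, dose_oral, auc_iv, dose_iv):
    """Absolute oral bioavailability F (%) from dose-normalized AUCs:
    F = 100 * (AUC_oral / dose_oral) / (AUC_iv / dose_iv)."""
    return 100.0 * (auc_oral / dose_oral) / (auc_iv / dose_iv)
```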
Compensating for magnetic field inhomogeneity in multigradient-echo-based MR thermometry.
Simonis, Frank F J; Petersen, Esben T; Bartels, Lambertus W; Lagendijk, Jan J W; van den Berg, Cornelis A T
2015-03-01
MR thermometry (MRT) is a noninvasive method for measuring temperature that can potentially be used for radio frequency (RF) safety monitoring. This application requires measuring absolute temperature. In this study, a multigradient-echo (mGE) MRT sequence was used for that purpose. A drawback of this sequence, however, is that its accuracy is affected by background gradients. In this article, we present a method to minimize this effect and to improve absolute temperature measurements using MRI. By determining background gradients using a B0 map or by combining data acquired with two opposing readout directions, the error can be removed in a homogenous phantom, thus improving temperature maps. All scans were performed on a 3T system using ethylene glycol-filled phantoms. Background gradients were varied, and one phantom was uniformly heated to validate both compensation approaches. Independent temperature recordings were made with optical probes. Errors correlated closely to the background gradients in all experiments. Temperature distributions showed a much smaller standard deviation when the corrections were applied (0.21°C vs. 0.45°C) and correlated well with thermo-optical probes. The corrections offer the possibility to measure RF heating in phantoms more precisely. This allows mGE MRT to become a valuable tool in RF safety assessment. © 2014 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Nava, D. F.; Mitchell, M. B.; Stief, L. J.
1986-01-01
The absolute rate constant for the reaction H + C4H2 has been measured over the temperature (T) interval 210-423 K using the technique of flash photolysis-resonance fluorescence. At each of the five temperatures employed, the results were independent of variations in C4H2 concentration, total pressure of Ar or N2, and flash intensity (i.e., the initial H concentration). The rate constant was found to be k = 1.39 × 10^-10 exp(-1184/T) cm³/s, with quoted errors of one standard deviation. The Arrhenius parameters at the high-pressure limit determined here for the H + C4H2 reaction are consistent with those for the corresponding reactions of H with C2H2 and C3H4. Implications of the kinetic carbon chemistry results, particularly those at low temperature, are considered for models of the atmospheric carbon chemistry of Titan. The rate of this reaction, relative to that of the analogous but slower reaction of H + C2H2, makes H + C4H2 a very feasible pathway for the effective conversion of H atoms to molecular hydrogen in the stratosphere of Titan.
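The reported Arrhenius expression can be evaluated directly at any temperature in the measured range; a minimal sketch using the published parameters (function name is illustrative):

```python
import math

def arrhenius_k(temperature_k, a=1.39e-10, ea_over_r=1184.0):
    """Rate constant k(T) = A * exp(-(Ea/R)/T) using the parameters
    reported for H + C4H2 (A in cm^3/s, Ea/R in K)."""
    return a * math.exp(-ea_over_r / temperature_k)
```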
Heberle, S A; Aga, D S; Hany, R; Müller, S R
2000-02-15
This paper describes a procedure for simultaneous enrichment, separation, and quantification of acetanilide herbicides and their major ionic oxanilic acid (OXA) and ethanesulfonic acid (ESA) metabolites in groundwater and surface water using Carbopack B as a solid-phase extraction (SPE) material. The analytes adsorbed on Carbopack B were eluted selectively from the solid phase in three fractions containing the parent compounds (PCs), their OXA metabolites, and their ESA metabolites, respectively. The complete separation of the three compound classes allowed the analysis of the neutral PCs (acetochlor, alachlor, and metolachlor) and their methylated OXA metabolites by gas chromatography/mass spectrometry. The ESA compounds were analyzed by high-performance liquid chromatography with UV detection. The use of Carbopack B resulted in good recoveries of the polar metabolites even from large sample volumes (1 L). Absolute recoveries from spiked surface and groundwater samples ranged between 76 and 100% for the PCs, between 41 and 91% for the OXAs, and between 47 and 96% for the ESAs. The maximum standard deviation of the absolute recoveries was 12%. The method detection limits are between 1 and 8 ng/L for the PCs, between 1 and 7 ng/L for the OXAs, and between 10 and 90 ng/L for the ESAs.
An affordable cuff-less blood pressure estimation solution.
Jain, Monika; Kumar, Niranjan; Deb, Sujay
2016-08-01
This paper presents a cuff-less hypertension pre-screening device that non-invasively and continuously monitors Blood Pressure (BP) and Heart Rate (HR). The proposed device simultaneously records two clinically significant and highly correlated biomedical signals, viz., the Electrocardiogram (ECG) and Photoplethysmogram (PPG). The device provides a common data acquisition platform that can interface with a PC/laptop, smartphone/tablet, Raspberry Pi, etc. The hardware stores and processes the recorded ECG and PPG in order to extract real-time BP and HR using a kernel regression approach. The BP and HR estimation error is measured in terms of normalized mean square error, Error Standard Deviation (ESD), and Mean Absolute Error (MAE), with respect to a clinically proven digital BP monitor (OMRON HBP1300). The computed error falls under the maximum allowable error specified by the Association for the Advancement of Medical Instrumentation: MAE ≤ 5 mmHg and ESD ≤ 8 mmHg. The results are also validated using a two-tailed dependent-sample t-test. The proposed device is a portable, low-cost, home- and clinic-based solution for continuous health monitoring.
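The two error metrics quoted above (MAE and ESD) can be sketched in a few lines; the readings below are hypothetical, not the study's data:

```python
from statistics import mean, stdev

def bp_errors(estimated, reference):
    """Per-reading errors of a cuff-less estimate against a reference monitor."""
    errors = [e - r for e, r in zip(estimated, reference)]
    mae = mean(abs(d) for d in errors)   # Mean Absolute Error
    esd = stdev(errors)                  # Error Standard Deviation
    return mae, esd

# Hypothetical systolic readings (mmHg); AAMI limits per the abstract:
# MAE <= 5 mmHg, ESD <= 8 mmHg
est = [118, 122, 131, 125, 140]
ref = [120, 120, 128, 127, 138]
mae, esd = bp_errors(est, ref)
print(f"MAE = {mae:.1f} mmHg, ESD = {esd:.1f} mmHg")
```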
Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions
NASA Astrophysics Data System (ADS)
Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.
2017-01-01
Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.
Cooperstein, Robert; Young, Morgan; Lew, Makani
2015-01-01
Objectives: Primary goal: to determine the validity of C1 transverse process (TVP) palpation compared to an imaging reference standard. Methods: Radiopaque markers were affixed to the skin at the putative location of the C1 TVPs in 21 participants receiving APOM radiographs. The radiographic vertical distances from the marker to the C1 TVP, mastoid process, and C2 TVP were evaluated to determine palpatory accuracy. Results: Interexaminer agreement for radiometric analysis was “excellent.” Stringent accuracy (marker placed ±4mm from the most lateral projection of the C1 TVP) = 57.1%; expansive accuracy (marker placed closer to contiguous structures) = 90.5%. Mean Absolute Deviation (MAD) = 4.34 (3.65, 5.03) mm; root-mean-squared error = 5.40mm. Conclusions: Manual palpation of the C1 TVP can be very accurate and likely to direct a manual therapist or other health professional to the intended diagnostic or therapeutic target. This work is relevant to manual therapists, anesthetists, surgeons, and other health professionals. PMID:26136601
Down-Looking Interferometer Study II, Volume I,
1980-03-01
[Equation excerpt garbled in extraction; only fragments survive.] The passage defines a quantity Z as the standard deviation of the observed contrast spectral radiance ΔN divided by the effective rms system noise, evaluated against a "reference spectrum" that serves as an estimate of the actual spectrum.
40 CFR 61.207 - Radium-226 sampling and measurement procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... B, Method 114. (3) Calculate the mean, x 1, and the standard deviation, s 1, of the n 1 radium-226... owner or operator of a phosphogypsum stack shall report the mean, standard deviation, 95th percentile..., Method 114. (4) Recalculate the mean and standard deviation of the entire set of n 2 radium-226...
Briehl, Margaret M; Nelson, Mark A; Krupinski, Elizabeth A; Erps, Kristine A; Holcomb, Michael J; Weinstein, John B; Weinstein, Ronald S
2016-01-01
Faculty members from the Department of Pathology at The University of Arizona College of Medicine-Tucson have offered a 4-credit course on enhanced general pathology for graduate students since 1996. The course is titled, "Mechanisms of Human Disease." Between 1997 and 2016, 270 graduate students completed Mechanisms of Human Disease. The students came from 21 programs of study. Analysis of Variance, using course grade as the dependent and degree, program, gender, and year (1997-2016) as independent variables, indicated that there was no significant difference in final grade (F = 0.112; P = .8856) as a function of degree (doctorate: mean = 89.60, standard deviation = 5.75; master's: mean = 89.34, standard deviation = 6.00; certificate program: mean = 88.64, standard deviation = 8.25), specific type of degree program (F = 2.066, P = .1316; life sciences: mean = 89.95, standard deviation = 6.40; pharmaceutical sciences: mean = 90.71, standard deviation = 4.57; physical sciences: mean = 87.79, standard deviation = 5.17), or as a function of gender (F = 2.96, P = .0865; males: mean = 88.09, standard deviation = 8.36; females: mean = 89.58, standard deviation = 5.82). Students in the physical and life sciences performed equally well. Mechanisms of Human Disease is a popular course that provides students enrolled in a variety of graduate programs with a medical school-based course on mechanisms of diseases. The addition of 2 new medically oriented Master of Science degree programs has nearly tripled enrollment. This graduate level course also potentially expands the interdisciplinary diversity of participants in our interprofessional education and collaborative practice exercises.
Standardization approaches in absolute quantitative proteomics with mass spectrometry.
Calderón-Celis, Francisco; Encinar, Jorge Ruiz; Sanz-Medel, Alfredo
2017-07-31
Mass spectrometry-based approaches have enabled important breakthroughs in quantitative proteomics in recent decades. This development is reflected in better quantitative assessment of protein levels as well as in improved understanding of post-translational modifications and of protein complexes and networks. Nowadays, the focus of quantitative proteomics has shifted from relative determination of proteins (ie, differential expression between two or more cellular states) to absolute quantity determination, required for a more thorough characterization of biological models and comprehension of proteome dynamics, as well as for the search and validation of novel protein biomarkers. However, the physico-chemical environment of the analyte species strongly affects the ionization efficiency in most mass spectrometry (MS) types, which therefore requires specially designed standardization approaches to provide absolute quantification. The most common such approaches nowadays include (i) the use of stable isotope-labeled peptide standards, isotopologues of the target proteotypic peptides expected after tryptic digestion of the target protein; (ii) the use of stable isotope-labeled protein standards to compensate for sample preparation, sample loss, and proteolysis steps; (iii) isobaric reagents, which after fragmentation in the MS/MS analysis provide a final detectable mass shift and can be used to tag both analyte and standard samples; (iv) label-free approaches in which the absolute quantitative data are obtained not through any kind of labeling, but from computational normalization of the raw data and adequate standards; (v) elemental mass spectrometry-based workflows able to provide direct absolute quantification of peptides/proteins that contain an ICP-detectable element.
A critical insight from the Analytical Chemistry perspective of the different standardization approaches and their combinations used so far for absolute quantitative MS-based (molecular and elemental) proteomics is provided in this review. © 2017 Wiley Periodicals, Inc.
Zhang, Yunyun; Yan, Jing; Fu, Yi; Chen, Shengdi
2013-01-01
Objective To compare the accuracy of the formulas 1/2ABC and 2/3SH for volume estimation of hypertensive infratentorial hematoma. Methods One hundred and forty-seven CT scans diagnosed as hypertensive infratentorial hemorrhage were reviewed. Based on shape, hematomas were categorized as regular or irregular; multilobular was defined as a special case of irregular. Hematoma volume was calculated employing computer-assisted volumetric analysis (CAVA), 1/2ABC, and 2/3SH, respectively. Results The correlation coefficients between 1/2ABC (or 2/3SH) and CAVA were greater than 0.900 in all subgroups. There were no significant differences in the absolute values of volume deviation or percentage deviation between 1/2ABC and 2/3SH for regular hemorrhage (P>0.05). For cerebellar, brainstem, and irregular hemorrhages, however, the absolute values of volume deviation and percentage deviation by formula 1/2ABC were greater than those by 2/3SH (P<0.05). 1/2ABC and 2/3SH underestimated hematoma volume by 10% and 5%, respectively, for cerebellar hemorrhage; 14% and 9% for brainstem hemorrhage; 19% and 16% for regular hemorrhage; and 9% and 3% for irregular hemorrhage. In addition, for multilobular hemorrhage, 1/2ABC underestimated the volume by 9% while 2/3SH overestimated it by 2%. Conclusions For regular hemorrhage volume calculation, the accuracy of 2/3SH is similar to that of 1/2ABC, while for cerebellar, brainstem, or irregular hemorrhages (including multilobular), 2/3SH is more accurate than 1/2ABC. PMID:23638025
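The two bedside formulas compared above are one-liners; a sketch with hypothetical measurements, using the usual conventions (A, B, C = largest hematoma diameters on three orthogonal axes; S = largest cross-sectional area; H = height):

```python
def vol_half_abc(a, b, c):
    """ABC/2 estimate: a, b, c = largest hematoma diameters (cm)."""
    return a * b * c / 2.0

def vol_two_thirds_sh(s, h):
    """2/3SH estimate: s = largest cross-sectional area (cm^2), h = height (cm)."""
    return 2.0 * s * h / 3.0

# Hypothetical hematoma: 4 x 3 x 3 cm, largest slice area 9.4 cm^2, height 3 cm
print(vol_half_abc(4, 3, 3))        # 18.0 mL
print(vol_two_thirds_sh(9.4, 3))    # ~18.8 mL
```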
2014-01-01
Background It is important to predict the quality of a protein structural model before its native structure is known. Methods that can predict the absolute local quality of individual residues in a single protein model are rare, yet particularly needed for using, ranking and refining protein models. Results We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e. basic feature set), including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts to make predictions. We also trained a SVM model with two new additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them can only improve the performance when real deviations between the native structure and the model are higher than 5Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implemented a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637Å. The global quality prediction accuracy of the tool is comparable to other good tools on the same benchmark. Conclusion SMOQ is a useful tool for protein single model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:24776231
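The abstract mentions converting per-residue local quality scores into a single global score. One common conversion for this kind of data (an illustration only, not necessarily SMOQ's exact formula) is a Levitt-Gerstein-style S-score averaged over residues, with an assumed scaling constant d0 = 3.8 Å:

```python
def global_quality(deviations, d0=3.8):
    """Average S-score: each residue contributes 1/(1 + (d/d0)^2),
    so small deviations score near 1 and large ones near 0."""
    scores = [1.0 / (1.0 + (d / d0) ** 2) for d in deviations]
    return sum(scores) / len(scores)

# Hypothetical per-residue distance deviations (Angstroms)
devs = [0.5, 1.0, 2.6, 8.0, 1.2]
print(f"global quality = {global_quality(devs):.3f}")
```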
Cao, Renzhi; Wang, Zheng; Wang, Yiheng; Cheng, Jianlin
2014-04-28
Portfolio optimization by using linear programing models based on genetic algorithm
NASA Astrophysics Data System (ADS)
Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.
2018-01-01
In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that portfolio risk is measured by the absolute standard deviation, and that each investor has a risk tolerance for the investment portfolio. The investment portfolio optimization problem is formulated as a linear programming model, and the optimum solution of the linear program is then determined using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. Based on the analysis, it is shown that portfolio optimization performed by the genetic algorithm approach produces a more efficient portfolio than that performed by a linear programming algorithm approach. Therefore, genetic algorithms can be considered as an alternative for determining the optimal investment portfolio, particularly using linear programming models.
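The absolute-deviation risk measure used above can be sketched in plain Python. The paper solves a linear program with a genetic algorithm; this illustration instead scans a coarse weight grid over three hypothetical assets, which shows the objective and constraints without committing to any solver:

```python
from itertools import product

def mad_risk(weights, returns):
    """Mean absolute deviation of portfolio returns around their mean."""
    port = [sum(w * r for w, r in zip(weights, row)) for row in returns]
    mu = sum(port) / len(port)
    return sum(abs(p - mu) for p in port) / len(port)

# Hypothetical monthly returns for three stocks (rows = periods)
R = [[0.02, 0.01, 0.03],
     [-0.01, 0.02, 0.00],
     [0.03, 0.00, 0.02],
     [0.00, 0.01, -0.01]]
target = 0.0095  # required mean portfolio return

best = None
step = 0.05
n = int(1 / step)
for i, j in product(range(n + 1), repeat=2):
    w = (i * step, j * step, 1 - (i + j) * step)
    if w[2] < -1e-9:          # long-only: skip infeasible grid points
        continue
    mean_ret = sum(sum(wk * rk for wk, rk in zip(w, row)) for row in R) / len(R)
    if mean_ret < target:     # return-target constraint
        continue
    risk = mad_risk(w, R)
    if best is None or risk < best[0]:
        best = (risk, w)

print("min MAD risk:", round(best[0], 4), "weights:", best[1])
```

In the actual model the same objective and constraints are written as a linear program (auxiliary variables bound each period's absolute deviation from above), which is what makes a genetic-algorithm or LP solver applicable.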
Gaonkar, Narayan; Vaidya, R G
2016-05-01
A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and investigated theoretically the density of different vegetable oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values of the blend components at any two different temperatures. We observe that the density of the blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard estimate of error (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15%) obtained using the proposed model indicate its predictive capability. The predicted values are in good agreement with recently available experimental data.
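Kay's mixing rule as described above reduces to a volume-fraction-weighted sum of component densities, each fitted linearly from densities at just two temperatures. A sketch with hypothetical component data (not the paper's values):

```python
def linear_density(t1, rho1, t2, rho2):
    """Return rho(T) fitted through two (temperature, density) points."""
    slope = (rho2 - rho1) / (t2 - t1)
    return lambda T: rho1 + slope * (T - t1)

# Hypothetical component data: (K, g/cm^3) at two temperatures each
rho_biodiesel = linear_density(288.15, 0.884, 318.15, 0.862)
rho_diesel = linear_density(288.15, 0.835, 318.15, 0.814)

def blend_density(T, vol_frac_bio):
    """Kay's rule: volume-fraction-weighted sum of component densities."""
    return vol_frac_bio * rho_biodiesel(T) + (1 - vol_frac_bio) * rho_diesel(T)

print(f"B20 at 300 K: {blend_density(300.0, 0.20):.4f} g/cm^3")
```

Consistent with the abstract, the blend density falls with temperature and rises with biodiesel volume fraction.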
Generalized Majority Logic Criterion to Analyze the Statistical Strength of S-Boxes
NASA Astrophysics Data System (ADS)
Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan
2012-05-01
The majority logic criterion is applicable in the evaluation process of substitution boxes used in the advanced encryption standard (AES). The performance of modified or advanced substitution boxes is predicted by processing the results of statistical analysis by the majority logic criteria. In this paper, we use the majority logic criteria to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, the majority logic criterion is applied to AES, affine power affine (APA), Gray, Lui J, residue prime, S8 AES, Skipjack, and Xyi substitution boxes. The majority logic criterion is further extended into a generalized majority logic criterion which has a broader spectrum of analyzing the effectiveness of substitution boxes in image encryption applications. The integral components of the statistical analyses used for the generalized majority logic criterion are derived from results of entropy analysis, contrast analysis, correlation analysis, homogeneity analysis, energy analysis, and mean of absolute deviation (MAD) analysis.
NASA Astrophysics Data System (ADS)
Zhu, Yu; Liu, Zhigang; Deng, Wen; Deng, Zhongwen
2018-05-01
Frequency-scanning interferometry (FSI) using an external cavity diode laser (ECDL) is essential for many applications of the absolute distance measurement. However, owing to the hysteresis and creep of the piezoelectric actuator inherent in the ECDL, the optical frequency scanning exhibits a nonlinearity that seriously affects the phase extraction accuracy of the interference signal and results in the reduction of the measurement accuracy. To suppress the optical frequency nonlinearity, a harmonic frequency synthesis method for shaping the desired input signal instead of the original triangular wave is presented. The effectiveness of the presented shaping method is demonstrated through the comparison of the experimental results. Compared with an incremental Renishaw interferometer, the standard deviation of the displacement measurement of the FSI system is less than 2.4 μm when driven by the shaped signal.
Lauback, R G; Balitz, D F; Mays, D L
1976-05-01
An improved gas chromatographic method is described for the simultaneous determination of carboxylic acid chlorides and related carboxylic acids used in the production of some commercial semisynthetic penicillins. The acid chloride reacts with diethylamine to form the corresponding diethylamide. Carboxylic acid impurities are converted to trimethylsilyl esters. The two derivatives are separated and quantitated in the same chromatographic run. This method, an extension of the earlier procedure of Hishta and Bomstein (1), has been applied to the acid chlorides used to make oxacillin, cloxacillin, dicloxacillin, and methicillin (Figure 1); it shows promise of application to other acid chlorides. The determination is more selective than the usual titration methods, which do not differentiate among acids with similar pK's. Relative standard deviations of the acid chloride determination are 1.0-2.5%. Residual carboxylic acid can be repetitively determined within a range of 0.6% absolute.
Rao, R R; Chatt, A
1991-07-01
A simple preconcentration neutron activation analysis (PNAA) method has been developed for the determination of low levels of iodine in biological and nutritional materials. The method involves dissolution of the samples by microwave digestion in the presence of acids in closed Teflon bombs and preconcentration of total iodine, after reduction to iodide with hydrazine sulfate, by coprecipitation with bismuth sulfide. The effects of different factors such as acidity, time for complete precipitation, and concentrations of bismuth, sulfide, and diverse ions on the quantitative recovery of iodide have been studied. The absolute detection limit of the PNAA method is 5 ng of iodine. Precision of measurement, expressed in terms of relative standard deviation, is about 5% at 100 ppb and 10% at 20 ppb levels of iodine. The PNAA method has been applied to several biological reference materials and total diet samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gundlach-Graham, Alexander W.; Dennis, Elise; Ray, Steven J.
An inductively coupled plasma distance-of-flight mass spectrometer (ICP-DOFMS) has been coupled with laser-ablation (LA) sample introduction for the elemental analysis of solids. ICP-DOFMS is well suited for the analysis of laser-generated aerosols because it offers both high-speed mass analysis and simultaneous multi-elemental detection. Here, we evaluate the analytical performance of the LA-ICP-DOFMS instrument, equipped with a microchannel plate-based imaging detector, for the measurement of steady-state LA signals, as well as transient signals produced from single LA events. Steady-state detection limits are 1 mg g⁻¹, and absolute single-pulse LA detection limits are 200 fg for uranium; the system is shown capable of performing time-resolved single-pulse LA analysis. By leveraging the benefits of simultaneous multi-elemental detection, we also attain a good shot-to-shot reproducibility of 6% relative standard deviation (RSD) and isotope-ratio precision of 0.3% RSD with a 10 s integration time.
High-precision method of binocular camera calibration with a distortion model.
Li, Weimin; Shan, Siyu; Liu, Hui
2017-03-10
A high-precision camera calibration method for a binocular stereo vision system, based on a multi-view template and alternative bundle adjustment, is presented in this paper. The proposed method is carried out by taking several photos of a specially designed calibration template that has diverse encoded points in different orientations. The method utilizes an existing monocular camera calibration algorithm to obtain the initialization, based on a camera model that includes radial and tangential lens distortion. We create a reference coordinate system based on the left camera coordinates and optimize the intrinsic parameters of the left camera through alternative bundle adjustment; the optimal intrinsic parameters of the right camera are then obtained in the same way with a reference coordinate system based on the right camera coordinates. All acquired intrinsic parameters are then used to optimize the extrinsic parameters, so that the optimal lens distortion, intrinsic, and extrinsic parameters are obtained. Synthetic and real data were used to test the method. The simulation results demonstrate that the maximum mean absolute relative calibration errors are about 3.5e-6 and 1.2e-6 for the focal length and the principal point, respectively, under zero-mean Gaussian noise with 0.05 pixels standard deviation. The real-data result shows that the reprojection error of our model is about 0.045 pixels, with a relative standard deviation of 1.0e-6 over the intrinsic parameters. The proposed method is convenient, cost-efficient, highly precise, and simple to carry out.
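The radial-plus-tangential lens model mentioned above is commonly written in Brown-Conrady form. A sketch in normalized image coordinates, with hypothetical distortion coefficients:

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to a
    normalized image point (x, y) using the Brown-Conrady model."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# Hypothetical coefficients: mild barrel distortion plus slight tangential terms
xd, yd = distort(0.30, 0.20, k1=-0.25, k2=0.05, p1=0.001, p2=-0.0005)
print(f"distorted point: ({xd:.5f}, {yd:.5f})")
```

Calibration estimates k1, k2, p1, p2 (together with the intrinsics) by minimizing the reprojection error of points mapped through this model.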
Creating Situational Awareness in Spacecraft Operations with the Machine Learning Approach
NASA Astrophysics Data System (ADS)
Li, Z.
2016-09-01
This paper presents a machine learning approach for situational awareness in spacecraft operations. There are two types of time dependent data patterns for spacecraft datasets: the absolute time pattern (ATP) and the relative time pattern (RTP). The machine learning captures the data patterns of the satellite datasets through data training during normal operations, represented by their time dependent trend. The data monitoring compares the values of the incoming data with the predictions of the machine learning algorithm, which can detect any meaningful changes to a dataset above the noise level. If the difference between the value of the incoming telemetry and the machine learning prediction is larger than the threshold defined by the standard deviation of the dataset, it could indicate a potential anomaly that needs special attention. The application of the machine learning approach to the Advanced Himawari Imager (AHI) on the Japanese Himawari spacecraft series is presented, which has the same configuration as the Advanced Baseline Imager (ABI) on the Geostationary Operational Environmental Satellite (GOES) R series. The time dependent trends generated by the data-training algorithm are in excellent agreement with the datasets. The standard deviation in the time dependent trend provides a metric for measuring data quality, which is particularly useful in evaluating detector quality for both AHI and ABI, which have multiple detectors in each channel. The machine learning approach creates the situational awareness capability and enables engineers to handle a data volume that would have been impossible with the existing approach, leading to significant advances toward more dynamic, proactive, and autonomous spacecraft operations.
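The thresholding rule described above can be sketched minimally. Here the "prediction" is reduced to the training mean (the paper fits a time dependent trend instead), and all telemetry values are hypothetical:

```python
from statistics import mean, stdev

def flag_anomalies(history, incoming, k=3.0):
    """Flag incoming telemetry values whose deviation from the trained
    baseline exceeds k standard deviations of the training data."""
    mu, sigma = mean(history), stdev(history)
    return [(v, abs(v - mu) > k * sigma) for v in incoming]

# Hypothetical detector counts recorded during normal operations
history = [100.2, 99.8, 100.5, 100.1, 99.6, 100.0, 100.3, 99.9]
incoming = [100.4, 97.2, 100.0]
for value, is_anomaly in flag_anomalies(history, incoming):
    print(value, "ANOMALY" if is_anomaly else "ok")
```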
Multicentre dose audit for clinical trials of radiation therapy in Asia.
Mizuno, Hideyuki; Fukuda, Shigekazu; Fukumura, Akifumi; Nakamura, Yuzuru-Kutsutani; Jianping, Cao; Cho, Chul-Koo; Supriana, Nana; Dung, To Anh; Calaguas, Miriam Joy; Devi, C R Beena; Chansilpa, Yaowalak; Banu, Parvin Akhter; Riaz, Masooma; Esentayeva, Surya; Kato, Shingo; Karasawa, Kumiko; Tsujii, Hirohiko
2017-05-01
A dose audit of 16 facilities in 11 countries has been performed within the framework of the Forum for Nuclear Cooperation in Asia (FNCA) quality assurance program. The quality of radiation dosimetry varies because of the large variation in radiation therapy among the participating countries. One of the most important aspects of international multicentre clinical trials is uniformity of absolute dose between centres. The National Institute of Radiological Sciences (NIRS) in Japan has conducted a dose audit of participating countries since 2006 by using radiophotoluminescent glass dosimeters (RGDs). RGDs have been successfully applied to a domestic postal dose audit in Japan. The authors used the same audit system to perform a dose audit of the FNCA countries. The average and standard deviation of the relative deviation between the measured and intended dose among 46 beams were 0.4% and 1.5% (k = 1), respectively. This is an excellent level of uniformity for the multicountry data. However, of the 46 beams measured, a single beam exceeded the permitted tolerance level of ±5%. We investigated the cause of this and solved the problem. This event highlights the importance of external audits in radiation therapy. © The Author 2016. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
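The audit's pass/fail rule reduces to a relative-deviation check against the ±5% tolerance; a sketch with hypothetical beam doses:

```python
def relative_deviation(measured, intended):
    """Percent deviation of the measured dose from the intended dose."""
    return 100.0 * (measured - intended) / intended

# Hypothetical (measured, intended) beam doses in Gy; tolerance is +/-5%
beams = [(2.01, 2.00), (1.96, 2.00), (2.12, 2.00)]
for measured, intended in beams:
    d = relative_deviation(measured, intended)
    status = "within tolerance" if abs(d) <= 5.0 else "OUT OF TOLERANCE"
    print(f"{d:+.1f}%  {status}")
```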
A Note on Standard Deviation and Standard Error
ERIC Educational Resources Information Center
Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth
2010-01-01
Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.
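The distinction discussed in the article is easy to make concrete: the standard deviation describes the spread of individual observations, while the standard error of the mean describes the uncertainty of the sample mean and shrinks with sample size. A sketch with hypothetical data:

```python
from math import sqrt
from statistics import mean, stdev

def sd_and_sem(sample):
    """Sample standard deviation, and standard error of the mean."""
    sd = stdev(sample)             # spread of individual observations
    sem = sd / sqrt(len(sample))   # uncertainty of the sample mean
    return sd, sem

data = [4.8, 5.1, 5.0, 4.7, 5.4, 5.0, 4.9, 5.1]
sd, sem = sd_and_sem(data)
print(f"mean = {mean(data):.2f}, SD = {sd:.3f}, SEM = {sem:.3f}")
```

Because SEM = SD/√n, quadrupling the sample size halves the standard error but leaves the standard deviation's expected value unchanged.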
Hong, Cheng William; Mamidipalli, Adrija; Hooker, Jonathan C.; Hamilton, Gavin; Wolfson, Tanya; Chen, Dennis H.; Dehkordy, Soudabeh Fazeli; Middleton, Michael S.; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.
2017-01-01
Background Proton density fat fraction (PDFF) estimation requires spectral modeling of the hepatic triglyceride (TG) signal. Deviations in the TG spectrum may occur, leading to bias in PDFF quantification. Purpose To investigate the effects of varying six-peak TG spectral models on PDFF estimation bias. Study Type Retrospective secondary analysis of prospectively acquired clinical research data. Population Forty-four adults with biopsy-confirmed nonalcoholic steatohepatitis. Field Strength/Sequence Confounder-corrected chemical-shift-encoded 3T MRI (using a 2D multiecho gradient-recalled echo technique with magnitude reconstruction) and MR spectroscopy. Assessment In each patient, 61 pairs of colocalized MRI-PDFF and MRS-PDFF values were estimated: one pair used the standard six-peak spectral model, the other 60 were six-peak variants calculated by adjusting spectral model parameters over their biologically plausible ranges. MRI-PDFF values calculated using each variant model and the standard model were compared, and the agreement between MRI-PDFF and MRS-PDFF was assessed. Statistical Tests MRS-PDFF and MRI-PDFF were summarized descriptively. Bland–Altman (BA) analyses were performed between PDFF values calculated using each variant model and the standard model. Linear regressions were performed between BA biases and mean PDFF values for each variant model, and between MRI-PDFF and MRS-PDFF. Results Using the standard model, mean MRS-PDFF of the study population was 17.9±8.0% (range: 4.1–34.3%). The difference between the highest and lowest mean variant MRI-PDFF values was 1.5%. Relative to the standard model, the model with the greatest absolute BA bias overestimated PDFF by 1.2%. Bias increased with increasing PDFF (P < 0.0001 for 59 of the 60 variant models). MRI-PDFF and MRS-PDFF agreed closely for all variant models (R2=0.980, P < 0.0001). 
Data Conclusion Over a wide range of hepatic fat content, PDFF estimation is robust across the biologically plausible range of TG spectra. Although absolute estimation bias increased with higher PDFF, its magnitude was small and unlikely to be clinically meaningful. Level of Evidence 3 Technical Efficacy Stage 2 PMID:28851124
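The Bland-Altman analyses described above reduce to simple arithmetic; a minimal sketch with made-up PDFF values (not the study's data): bias is the mean of paired differences, with limits of agreement at bias ± 1.96 SD.

```python
import statistics

# Minimal Bland-Altman sketch with hypothetical PDFF values (not the
# study's data): bias is the mean of paired differences (variant minus
# standard model); limits of agreement sit at bias +/- 1.96 SD.
standard = [5.0, 12.0, 18.0, 25.0, 33.0]   # hypothetical MRI-PDFF, %
variant  = [5.2, 12.3, 18.5, 25.6, 33.9]   # same voxels, variant spectral model

diffs = [v - s for v, s in zip(variant, standard)]
bias = statistics.fmean(diffs)
sd = statistics.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
print(f"bias {bias:.2f}%, limits of agreement {loa[0]:.2f}% to {loa[1]:.2f}%")
```

Regressing `diffs` against the pairwise means would reproduce the abstract's check of whether bias grows with PDFF (it does in this toy data, by construction).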
Code of Federal Regulations, 2010 CFR
2010-01-01
... defined in section 1 of this appendix is as follows: (a) The standard deviation of lateral track errors shall be less than 6.3 NM (11.7 km). Standard deviation is a statistical measure of data about a mean... standard deviation about the mean encompasses approximately 68 percent of the data and plus or minus 2...
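The coverage figures quoted in this excerpt can be checked numerically; a quick sketch with synthetic, normally distributed "track errors" (the 6.3 NM figure is reused only as an illustrative spread):

```python
import random, statistics

# Empirical check of the rule quoted above (synthetic data, for
# illustration only): roughly 68% of normally distributed values lie
# within one standard deviation of the mean, roughly 95% within two.
random.seed(0)
errors = [random.gauss(0.0, 6.3) for _ in range(100_000)]  # synthetic lateral track errors, NM
mu = statistics.fmean(errors)
sd = statistics.pstdev(errors)

within_1sd = sum(abs(x - mu) <= sd for x in errors) / len(errors)
within_2sd = sum(abs(x - mu) <= 2 * sd for x in errors) / len(errors)
print(f"within 1 SD: {within_1sd:.3f}, within 2 SD: {within_2sd:.3f}")
```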
Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.
2011-01-01
In this study, we quantify the reduction in the standard deviation of empirical ground-motion prediction models achieved by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
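As a back-of-envelope sketch (not the authors' five-component procedure), removing one repeatable variance component from the total aleatory variance works like this; both input numbers are hypothetical:

```python
import math

# Illustrative sketch only: if the total aleatory variance decomposes
# into a repeatable site-to-site component plus a remainder, removing
# the repeatable site term yields the smaller "single-site" sigma.
sigma_total = 0.70   # hypothetical total standard deviation (ln units)
var_site = 0.12      # hypothetical repeatable site-specific variance

sigma_single_site = math.sqrt(sigma_total**2 - var_site)
reduction = 1 - sigma_single_site / sigma_total
print(f"single-site sigma = {sigma_single_site:.3f} ({reduction:.0%} smaller)")
```

With these invented inputs the reduction lands at about 13%, inside the 9%-14% range the abstract reports for single-site standard deviations.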
Staerk, L; Gerds, T A; Lip, G Y H; Ozenne, B; Bonde, A N; Lamberts, M; Fosbøl, E L; Torp-Pedersen, C; Gislason, G H; Olesen, J B
2018-01-01
Comparative data of non-vitamin K antagonist oral anticoagulants (NOAC) are lacking in patients with atrial fibrillation (AF). We compared the effectiveness and safety of standard and reduced dose NOAC in AF patients. Using Danish nationwide registries, we included all oral anticoagulant-naïve AF patients who initiated NOAC treatment (2012-2016). Outcome-specific and mortality-specific multiple Cox regressions were combined to compute average treatment effects as 1-year standardized differences in stroke and bleeding risks (g-formula). Amongst 31 522 AF patients, the distribution of NOAC/dose was as follows: dabigatran standard dose (22.4%), dabigatran reduced dose (14.0%), rivaroxaban standard dose (21.8%), rivaroxaban reduced dose (6.7%), apixaban standard dose (22.9%), and apixaban reduced dose (12.2%). The 1-year standardized absolute risks of stroke/thromboembolism were 1.73-1.98% and 2.51-2.78% with standard and reduced NOAC dose, respectively, without statistically significant differences between NOACs at a given dose level. Comparing standard doses, the 1-year standardized absolute risk (95% CI) of major bleeding was 2.78% (2.42-3.17%) for rivaroxaban; the corresponding absolute risk differences (95% CI) were -0.93% (-1.45% to -0.38%) for dabigatran and -0.54% (-0.99% to -0.05%) for apixaban. The results for major bleeding were similar for reduced NOAC dose. The 1-year standardized absolute risk (95% CI) of intracranial bleeding was 0.19% (0.22-0.50%) for standard dose dabigatran; the corresponding absolute risk differences (95% CI) were 0.23% (0.06-0.41%) for rivaroxaban and 0.18% (0.01-0.34%) for apixaban. At both standard and reduced doses, the NOACs showed no significant differences in associated stroke/thromboembolism risk. Rivaroxaban was associated with a higher bleeding risk than dabigatran and apixaban, and dabigatran was associated with a lower intracranial bleeding risk than rivaroxaban and apixaban.
© 2017 The Association for the Publication of the Journal of Internal Medicine.
Carvalho, Nathalia F; Pliego, Josefredo R
2015-10-28
Absolute single-ion solvation free energy is a very useful property for understanding solution-phase chemistry. The real solvation free energy of an ion depends on its interaction with the solvent molecules and on the net potential inside the solute cavity. The tetraphenylarsonium-tetraphenylborate (TATB) assumption, as well as the cluster-continuum quasichemical theory (CC-QCT) approach for Li(+) solvation, allows access to a solvation scale excluding the net potential. We have determined this free energy scale by investigating the solvation of the lithium ion in water (H2O), acetonitrile (CH3CN) and dimethyl sulfoxide (DMSO) solvents via the CC-QCT approach. Our calculations at the MP2 and MP4 levels with basis sets up to QZVPP+diff quality, and including solvation of the clusters and solvent molecules by the dielectric continuum SMD method, predict the solvation free energy of Li(+) as -116.1, -120.6 and -123.6 kcal mol(-1) in H2O, CH3CN and DMSO solvents, respectively (1 mol L(-1) standard state). These values are compatible with solvation free energies of the proton of -253.4, -253.2 and -261.1 kcal mol(-1) in H2O, CH3CN and DMSO solvents, respectively. Deviations from the experimental TATB scale are only 1.3 kcal mol(-1) in H2O and 1.8 kcal mol(-1) in DMSO. However, in the case of CH3CN, the deviation reaches 9.2 kcal mol(-1). The present study suggests that the experimental TATB scale is inconsistent for CH3CN. A total of 125 values of the solvation free energy of ions in these three solvents were obtained. These new data should be useful for the development of theoretical solvation models.
Sharifi, Amin; Varsavsky, Andrea; Ulloa, Johanna; Horsburgh, Jodie C.; McAuley, Sybil A.; Krishnamurthy, Balasubramanian; Jenkins, Alicia J.; Colman, Peter G.; Ward, Glenn M.; MacIsaac, Richard J.; Shah, Rajiv; O’Neal, David N.
2015-01-01
Background: Current electrochemical glucose sensors use a single electrode. Multiple electrodes (redundancy) may enhance sensor performance. We evaluated an electrochemical redundant sensor (ERS) incorporating two working electrodes (WE1 and WE2) onto a single subcutaneous insertion platform with a processing algorithm providing a single real-time continuous glucose measure. Methods: Twenty-three adults with type 1 diabetes each wore two ERSs concurrently for 168 hours. Post-insertion a frequent sampling test (FST) was performed with ERS benchmarked against a glucose meter (Bayer Contour Link). Day 4 and 7 FSTs were performed with a standard meal and venous blood collected for reference glucose measurements (YSI and meter). Between visits, ERS was worn with capillary blood glucose testing ≥8 times/day. Sensor glucose data were processed prospectively. Results: Mean absolute relative deviation (MARD) for ERS day 1-7 (3,297 paired points with glucose meter) was (mean [SD]) 10.1 [11.5]% versus 11.4 [11.9]% for WE1 and 12.0 [11.9]% for WE2; P < .0001. ERS Clarke A and A+B were 90.2% and 99.8%, respectively. ERS day 4 plus day 7 MARD (1,237 pairs with YSI) was 9.4 [9.5]% versus 9.6 [9.7]% for WE1 and 9.9 [9.7]% for WE2; P = ns. ERS day 1-7 precision absolute relative deviation (PARD) was 9.9 [3.6]% versus 11.5 [6.2]% for WE1 and 10.1 [4.4]% for WE2; P = ns. ERS sensor display time was 97.8 [6.0]% versus 91.0 [22.3]% for WE1 and 94.1 [14.3]% for WE2; P < .05. Conclusions: Electrochemical redundancy enhances glucose sensor accuracy and display time compared with each individual sensing element alone. ERS performance compares favorably with ‘best-in-class’ of non-redundant sensors. PMID:26499476
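The two accuracy metrics reported here, MARD and PARD, can be computed as follows. The formulas reflect common usage in the CGM literature and the glucose values are invented, so treat this as a sketch rather than the study's pipeline:

```python
# Hedged sketch of the sensor-accuracy metrics named above. MARD compares
# sensor readings to a reference; PARD compares two paired sensors to
# each other. All glucose values below are made up.
def mard(sensor, reference):
    """Mean absolute relative deviation of sensor vs. reference, in %."""
    return 100 * sum(abs(s - r) / r for s, r in zip(sensor, reference)) / len(sensor)

def pard(sensor_a, sensor_b):
    """Precision absolute relative deviation between paired sensors, in %.
    A common convention divides each difference by the mean of the pair."""
    return 100 * sum(abs(a - b) / ((a + b) / 2)
                     for a, b in zip(sensor_a, sensor_b)) / len(sensor_a)

ref = [90, 120, 180, 240]   # reference glucose, mg/dL (hypothetical)
we1 = [85, 126, 171, 252]   # hypothetical working electrode 1 readings
we2 = [95, 114, 189, 236]   # hypothetical working electrode 2 readings

print(f"MARD WE1: {mard(we1, ref):.1f}%")   # per-electrode accuracy
print(f"PARD:     {pard(we1, we2):.1f}%")   # between-electrode precision
```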
Extensive TD-DFT Benchmark: Singlet-Excited States of Organic Molecules.
Jacquemin, Denis; Wathelet, Valérie; Perpète, Eric A; Adamo, Carlo
2009-09-08
Extensive Time-Dependent Density Functional Theory (TD-DFT) calculations have been carried out in order to obtain a statistically meaningful analysis of the merits of a large number of functionals. To reach this goal, a very extended set of molecules (∼500 compounds, >700 excited states) covering a broad range of (bio)organic molecules and dyes has been investigated. In total, 29 functionals including LDA, GGA, meta-GGA, global hybrids, and long-range-corrected hybrids have been considered. Comparisons with both theoretical references and experimental measurements have been carried out. On average, the functionals providing the best match with reference data are, on the one hand, global hybrids containing between 22% and 25% of exact exchange (X3LYP, B98, PBE0, and mPW1PW91) and, on the other hand, a long-range-corrected hybrid with a less rapidly increasing HF ratio, namely LC-ωPBE(20). Pure functionals tend to be less consistent, whereas functionals incorporating a larger fraction of exact exchange tend to significantly underestimate the transition energies. For most treated cases, the M05 and CAM-B3LYP schemes deliver fairly small deviations but do not outperform standard hybrids such as X3LYP or PBE0, at least within the vertical approximation. With the optimal functionals, one obtains mean absolute deviations smaller than 0.25 eV, though the errors depend significantly on the subset of molecules or states considered. As an illustration, PBE0 and LC-ωPBE(20) provide a mean absolute error of only 0.14 eV for the 228 states related to neutral organic dyes but are completely off target for cyanine-like derivatives. On the basis of comparisons with theoretical estimates, it also turned out that CC2 and TD-DFT errors are of the same order of magnitude, once the above-mentioned hybrids are selected.
Sharifi, Amin; Varsavsky, Andrea; Ulloa, Johanna; Horsburgh, Jodie C; McAuley, Sybil A; Krishnamurthy, Balasubramanian; Jenkins, Alicia J; Colman, Peter G; Ward, Glenn M; MacIsaac, Richard J; Shah, Rajiv; O'Neal, David N
2016-05-01
Current electrochemical glucose sensors use a single electrode. Multiple electrodes (redundancy) may enhance sensor performance. We evaluated an electrochemical redundant sensor (ERS) incorporating two working electrodes (WE1 and WE2) onto a single subcutaneous insertion platform with a processing algorithm providing a single real-time continuous glucose measure. Twenty-three adults with type 1 diabetes each wore two ERSs concurrently for 168 hours. Post-insertion a frequent sampling test (FST) was performed with ERS benchmarked against a glucose meter (Bayer Contour Link). Day 4 and 7 FSTs were performed with a standard meal and venous blood collected for reference glucose measurements (YSI and meter). Between visits, ERS was worn with capillary blood glucose testing ≥8 times/day. Sensor glucose data were processed prospectively. Mean absolute relative deviation (MARD) for ERS day 1-7 (3,297 paired points with glucose meter) was (mean [SD]) 10.1 [11.5]% versus 11.4 [11.9]% for WE1 and 12.0 [11.9]% for WE2; P < .0001. ERS Clarke A and A+B were 90.2% and 99.8%, respectively. ERS day 4 plus day 7 MARD (1,237 pairs with YSI) was 9.4 [9.5]% versus 9.6 [9.7]% for WE1 and 9.9 [9.7]% for WE2; P = ns. ERS day 1-7 precision absolute relative deviation (PARD) was 9.9 [3.6]% versus 11.5 [6.2]% for WE1 and 10.1 [4.4]% for WE2; P = ns. ERS sensor display time was 97.8 [6.0]% versus 91.0 [22.3]% for WE1 and 94.1 [14.3]% for WE2; P < .05. Electrochemical redundancy enhances glucose sensor accuracy and display time compared with each individual sensing element alone. ERS performance compares favorably with 'best-in-class' of non-redundant sensors. © 2015 Diabetes Technology Society.
A better norm-referenced grading using the standard deviation criterion.
Chan, Wing-shing
2014-01-01
The commonly used norm-referenced grading assigns grades to rank-ordered students in fixed percentiles. It has the disadvantage of ignoring the actual distances between students' scores. A simple norm-referenced grading via the standard deviation is suggested for routine educational grading. The number of standard deviations of a student's score from the class mean was used as the common yardstick to measure achievement level. The cumulative probability of a normal distribution was referenced to help decide the proportion of students included within a grade. Results of the top 12 students from a medical examination were used to illustrate this grading method. Grading by standard deviation seemed to produce better cutoffs, allocating grades to students more according to their differential achievements, and had less chance of creating arbitrary cutoffs between two similarly scored students than grading by fixed percentile. Grading by standard deviation has more advantages and is more flexible than grading by fixed percentile for norm-referenced grading.
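The z-score grading idea can be sketched in a few lines; the cutoffs, grade labels, and exam scores below are illustrative assumptions, not the author's scheme:

```python
import math, statistics

# Minimal sketch of grading by standard deviation: convert each score to
# a z-score (distance from the class mean in SD units) and map z ranges
# to grades. Cutoffs and scores are hypothetical.
def z_score(score, mean, sd):
    return (score - mean) / sd

def grade(z):
    # hypothetical cutoffs in standard-deviation units
    if z >= 1.0:
        return "A"
    if z >= 0.0:
        return "B"
    if z >= -1.0:
        return "C"
    return "D"

scores = [92, 88, 85, 81, 78, 75, 74, 70, 66, 65, 60, 55]  # made-up exam scores
mu, sd = statistics.fmean(scores), statistics.stdev(scores)

# Share of a normal class expected at or above z = 1.0, from the
# standard normal CDF: 1 - Phi(1)
share_above_1sd = 1 - 0.5 * (1 + math.erf(1.0 / math.sqrt(2)))

for s in scores[:3]:
    print(s, grade(z_score(s, mu, sd)))
print(f"expected share of A grades: {share_above_1sd:.3f}")
```

The cumulative-probability step mirrors the abstract's point: the normal CDF tells you roughly how many students a grade band should contain.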
Johnson, Craig W; Johnson, Ronald; Kim, Mira; McKee, John C
2009-11-01
During 2004 and 2005 orientations, all 187 and 188 new matriculates, respectively, in two southwestern U.S. nursing schools completed Personal Background and Preparation Surveys (PBPS) in the first predictive validity study of a diagnostic and prescriptive instrument for averting adverse academic status events (AASE) among nursing or health science professional students. One standard deviation increases in PBPS risks (p < 0.05) multiplied odds of first-year or second-year AASE by approximately 150%, controlling for school affiliation and underrepresented minority student (URMS) status. AASE odds one standard deviation above mean were 216% to 250% those one standard deviation below mean. Odds of first-year or second-year AASE for URMS one standard deviation above the 2004 PBPS mean were 587% those for non-URMS one standard deviation below mean. The PBPS consistently and significantly facilitated early identification of nursing students at risk for AASE, enabling proactive targeting of interventions for risk amelioration and AASE or attrition prevention. Copyright 2009, SLACK Incorporated.
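The odds arithmetic above is internally consistent: if each one-SD increase in PBPS risk multiplies the odds by roughly 1.5, then comparing students one SD above the mean with those one SD below spans two SDs. A sketch assuming a simple multiplicative (logistic-style) model, with the abstract's approximate figure rather than the fitted estimate:

```python
# Back-of-envelope check: one SD increase multiplies the odds by ~1.5
# (the abstract's "approximately 150%"), so +1 SD vs. -1 SD spans two
# SDs and multiplies the odds by 1.5 squared.
odds_ratio_per_sd = 1.5
ratio_plus1_vs_minus1 = odds_ratio_per_sd ** 2
print(f"odds at +1 SD are {ratio_plus1_vs_minus1:.0%} of those at -1 SD")
```

The result, 225%, sits inside the 216%-250% range the abstract reports.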
Hahn, David K; RaghuVeer, Krishans; Ortiz, J V
2014-05-15
Time-dependent density functional theory (TD-DFT) and electron propagator theory (EPT) are used to calculate the electronic transition energies and ionization energies, respectively, of species containing phosphorus or sulfur. The accuracy of TD-DFT and EPT, in conjunction with various basis sets, is assessed with data from gas-phase spectroscopy. TD-DFT is tested using 11 prominent exchange-correlation functionals on a set of 37 vertical and 19 adiabatic transitions. For vertical transitions, TD-CAM-B3LYP calculations performed with the MG3S basis set have the lowest overall error, with a mean absolute deviation from experiment of 0.22 eV (0.23 eV over valence transitions and 0.21 eV over Rydberg transitions). Using a larger basis set, aug-pc3, improves accuracy over the valence transitions for hybrid functionals, but improved accuracy over the Rydberg transitions is obtained only with the BMK functional. For adiabatic transitions, all hybrid functionals paired with the MG3S basis set perform well, and B98 is best, with a mean absolute deviation from experiment of 0.09 eV. The testing of EPT used the Outer Valence Green's Function (OVGF) approximation and the Partial Third Order (P3) approximation on 37 vertical first ionization energies. It is found that OVGF outperforms P3 when basis sets of at least triple-ζ quality in the polarization functions are used. The largest basis set used in this study, aug-pc3, yields the best mean absolute errors for both methods: 0.08 eV for OVGF and 0.18 eV for P3. The OVGF/6-31+G(2df,p) level of theory is particularly cost-effective, yielding a mean absolute error of 0.11 eV.
Demonstration of the Gore Module for Passive Ground Water Sampling
2014-06-01
ACRONYMS AND ABBREVIATIONS: %RSD, percent relative standard deviation; 12DCA, 1,2-dichloroethane; 112TCA, 1,1,2-trichloroethane; 1122TetCA, ... Analysis of Variance; ROD, Record of Decision; RSD, relative standard deviation; SBR, Southern Bush River; SVOC, semi-volatile organic compound ... replicate samples had a relative standard deviation (RSD) that was 20% or less. For the remaining analytes (PCE, cDCE, and chloroform), at least 70...
Wang, Anxin; Li, Zhifang; Yang, Yuling; Chen, Guojuan; Wang, Chunxue; Wu, Yuntao; Ruan, Chunyu; Liu, Yan; Wang, Yilong; Wu, Shouling
2016-01-01
To investigate the relationship between baseline systolic blood pressure (SBP) and visit-to-visit blood pressure variability in a general population. This is a prospective longitudinal cohort study on cardiovascular risk factors and cardiovascular or cerebrovascular events. Study participants attended a face-to-face interview every 2 years. Blood pressure variability was defined using the standard deviation and coefficient of variation of all SBP values at baseline and follow-up visits. The coefficient of variation is the ratio of the standard deviation to the mean SBP. We used multivariate linear regression models to test the relationships between SBP and standard deviation, and between SBP and coefficient of variation. Approximately 43,360 participants (mean age: 48.2±11.5 years) were selected. In multivariate analysis, after adjustment for potential confounders, baseline SBPs <120 mmHg were inversely related to standard deviation (P<0.001) and coefficient of variation (P<0.001). In contrast, baseline SBPs ≥140 mmHg were significantly positively associated with standard deviation (P<0.001) and coefficient of variation (P<0.001). Baseline SBPs of 120-140 mmHg were associated with the lowest standard deviation and coefficient of variation. The associations between baseline SBP and standard deviation, and between SBP and coefficient of variation during follow-ups showed a U curve. Both lower and higher baseline SBPs were associated with increased blood pressure variability. To control blood pressure variability, a good target SBP range for a general population might be 120-139 mmHg.
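The two variability measures used in this study are straightforward to compute; a sketch with invented SBP readings:

```python
import statistics

# Visit-to-visit variability as defined in the study: the standard
# deviation of a participant's SBP readings across visits, and the
# coefficient of variation (SD divided by the mean). Readings are made up.
sbp_visits = [138, 152, 129, 145, 141]   # hypothetical SBP, mmHg

mean_sbp = statistics.fmean(sbp_visits)
sd_sbp = statistics.stdev(sbp_visits)    # sample standard deviation
cv_sbp = sd_sbp / mean_sbp               # coefficient of variation

print(f"mean {mean_sbp:.1f} mmHg, SD {sd_sbp:.1f}, CV {cv_sbp:.3f}")
```

Because the CV normalizes by the mean, it lets the study compare variability across participants whose average pressures differ.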
Weinstein, Ronald S; Krupinski, Elizabeth A; Weinstein, John B; Graham, Anna R; Barker, Gail P; Erps, Kristine A; Holtrust, Angelette L; Holcomb, Michael J
2016-01-01
A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school (F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the students' expectations. One class voted K-12 general pathology their "elective course-of-the-year."
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analyses, researchers often pool the sample means and standard deviations from a set of similar clinical trials. A number of trials, however, report results as the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method that incorporates the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other settings of interest in which the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios. For the first two scenarios, our method greatly improves on existing methods, providing a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In summary, we discuss different approximation methods for estimating the sample mean and standard deviation and propose new estimation methods to improve the existing literature.
We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as comprehensive guidance for performing meta-analysis in different situations.
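The size-aware estimation idea can be sketched for the scenario where a trial reports the minimum a, median m, maximum b, and sample size n. The exact constants below follow the formulas commonly attributed to this paper but should be treated as assumptions and checked against the original:

```python
from statistics import NormalDist

# Sketch of size-aware summary-statistic estimators (scenario: minimum a,
# median m, maximum b, sample size n). Constants are assumptions quoted
# from common citations of this method; verify against the paper.
def estimate_mean(a, m, b):
    return (a + 2 * m + b) / 4

def estimate_sd(a, b, n):
    # The expected range of n standard-normal draws is approximated via
    # the inverse normal CDF; dividing the observed range by it gives
    # a sample-size-aware SD estimate.
    z = NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    return (b - a) / (2 * z)

# Hypothetical trial summary: min 10, median 30, max 70, n = 100
mean_hat = estimate_mean(10, 30, 70)
sd_hat = estimate_sd(10, 70, 100)
print(f"estimated mean {mean_hat:.1f}, estimated SD {sd_hat:.2f}")
```

Note how the SD estimate shrinks as n grows: a wider observed range is expected from a larger sample, which is precisely the dependence Hozo et al.'s method ignores.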
Flexner 3.0—Democratization of Medical Knowledge for the 21st Century
Krupinski, Elizabeth A.; Weinstein, John B.; Graham, Anna R.; Barker, Gail P.; Erps, Kristine A.; Holtrust, Angelette L.; Holcomb, Michael J.
2016-01-01
A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school (F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the students’ expectations. One class voted K-12 general pathology their “elective course-of-the-year.” PMID:28725762
Estimation of the neural drive to the muscle from surface electromyograms
NASA Astrophysics Data System (ADS)
Hofmann, David
Muscle force is highly correlated with the standard deviation of the surface electromyogram (sEMG) produced by the active muscle. Correctly estimating this quantity for non-stationary sEMG and understanding its relation to neural drive and muscle force is of paramount importance. The individual constituents of the sEMG are motor unit action potentials, whose biphasic amplitudes can interfere (so-called amplitude cancellation), potentially affecting the standard deviation (Keenan et al. 2005). However, when certain conditions are met, the Campbell-Hardy theorem suggests that amplitude cancellation does not affect the standard deviation. By simulating the sEMG, we verify the applicability of this theorem to myoelectric signals and investigate deviations from its conditions to obtain a more realistic setting. We find no difference in estimated standard deviation with and without interference, in stark contrast to previous results (Keenan et al. 2008, Farina et al. 2010). Furthermore, since the theorem provides us with the functional relationship between standard deviation and neural drive, we conclude that complex methods based on high-density electrode arrays and blind source separation might not bear substantial advantages for neural drive estimation (Farina and Holobar 2016). Funded by NIH Grant Number 1 R01 EB022872 and NSF Grant Number 1208126.
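The theorem invoked here can be illustrated numerically: for pulses arriving at random times with rate r, the variance of the summed signal equals r·∫h(t)²dt regardless of how strongly the biphasic pulses overlap and cancel. A sketch with a made-up pulse shape, not the study's simulation:

```python
import math, random

# Numerical illustration of the Campbell(-Hardy) theorem: superpose
# biphasic pulses at random times and compare the empirical variance to
# rate * integral(h^2). Pulse shape, rate, and duration are made up.
random.seed(1)
dt = 0.001                                  # sample interval, s
T = 200.0                                   # record length, s
rate = 50.0                                 # pulse arrivals per second

# A biphasic "motor unit action potential": one positive, one negative lobe.
h = [math.sin(2 * math.pi * i / 10) for i in range(10)]

n = int(T / dt)
signal = [0.0] * n
for _ in range(int(rate * T)):              # ~Poisson arrivals over [0, T)
    start = random.randrange(n - len(h))
    for k, hk in enumerate(h):
        signal[start + k] += hk             # overlapping pulses interfere

mean = sum(signal) / n
var_empirical = sum((x - mean) ** 2 for x in signal) / n
var_campbell = rate * sum(hk * hk for hk in h) * dt

print(f"empirical variance {var_empirical:.4f}, Campbell prediction {var_campbell:.4f}")
```

Despite heavy overlap (hundreds of active pulses at any instant), the empirical variance tracks the Campbell prediction, which is the abstract's point about amplitude cancellation.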
Comparison of a novel fixation device with standard suturing methods for spinal cord stimulators.
Bowman, Richard G; Caraway, David; Bentley, Ishmael
2013-01-01
Spinal cord stimulation is a well-established treatment for chronic neuropathic pain of the trunk or limbs. Currently, the standard method of fixation is to affix the leads of the neuromodulation device to soft tissue, fascia or ligament, by manually tying general suture. A novel semiautomated device is proposed that may be advantageous over the current standard. Comparison testing in an excised caprine spine and a simulated benchtop model was performed. Three tests were performed: 1) perpendicular pull from fascia of the caprine spine; 2) axial pull from fascia of the caprine spine; and 3) axial pull from Mylar film. Six samples of each configuration were tested for each scenario. Standard 2-0 Ethibond was compared with a novel semiautomated device (Anulex fiXate). Upon completion of testing, statistical analysis was performed for each scenario. For perpendicular pull in the caprine spine, the failure load for standard suture was 8.95 lbs (standard deviation 1.39), whereas for fiXate it was 15.93 lbs (standard deviation 2.09). For axial pull in the caprine spine, the failure load for standard suture was 6.79 lbs (standard deviation 1.55), whereas for fiXate it was 12.31 lbs (standard deviation 4.26). For axial pull in Mylar film, the failure load for standard suture was 10.87 lbs (standard deviation 1.56), whereas for fiXate it was 19.54 lbs (standard deviation 2.24). These data suggest that the novel semiautomated device offers a method of fixation that may be utilized in lieu of standard suturing methods for securing neuromodulation devices, and may in fact provide more secure fixation than standard suturing. © 2012 International Neuromodulation Society.
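As an illustrative follow-up (not part of the authors' reported analysis), the summary statistics above permit a two-sample Welch t statistic; here for the perpendicular-pull comparison with n = 6 per configuration:

```python
import math

# Illustrative Welch t statistic from the reported summary statistics
# (perpendicular pull, n = 6 per group). This is a sketch, not the
# statistical analysis the authors actually performed.
def welch_t(m1, s1, n1, m2, s2, n2):
    return (m2 - m1) / math.sqrt(s1**2 / n1 + s2**2 / n2)

t = welch_t(8.95, 1.39, 6, 15.93, 2.09, 6)
print(f"Welch t = {t:.2f}")  # a large |t| is consistent with a clear difference
```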
Visual Field Defects and Retinal Ganglion Cell Losses in Human Glaucoma Patients
Harwerth, Ronald S.; Quigley, Harry A.
2007-01-01
Objective The depths of visual field defects are correlated with retinal ganglion cell densities in experimental glaucoma. This study aimed to determine whether a similar structure-function relationship holds for human glaucoma. Methods The study was based on retinal ganglion cell densities and visual thresholds of patients with documented glaucoma (Kerrigan-Baumrind et al.). The data were analyzed by a model that predicted ganglion cell densities from standard clinical perimetry, which were then compared to histologic cell counts. Results The model, without free parameters, produced accurate and relatively precise quantification of ganglion cell densities associated with visual field defects. For 437 sets of data, the unity correlation for predicted vs. measured cell densities had a coefficient of determination of 0.39. The mean absolute deviation of the predicted vs. measured values was 2.59 dB, and the mean and SD of the distribution of residual errors of prediction were -0.26 ± 3.22 dB. Conclusions Visual field defects by standard clinical perimetry are proportional to neural losses caused by glaucoma. Clinical Relevance The evidence for quantitative structure-function relationships provides a scientific basis for interpreting glaucomatous neuropathy from visual thresholds and supports the application of standard perimetry to establish the stage of the disease. PMID:16769839
Hesse, Almut
2016-01-01
Amino acid analysis is considered to be the gold standard for quantitative peptide and protein analysis. Here, we would like to propose a simple HPLC/UV method based on a reversed-phase separation of the aromatic amino acids tyrosine (Tyr), phenylalanine (Phe), and optionally tryptophan (Trp) without any derivatization. The hydrolysis of the proteins and peptides was performed by an accelerated microwave technique, which needs only 30 minutes. Two internal standard compounds, homotyrosine (HTyr) and 4-fluorophenylalanine (FPhe) were used for calibration. The limit of detection (LOD) was estimated to be 0.05 µM (~10 µg/L) for tyrosine and phenylalanine at 215 nm. The LOD for a protein determination was calculated to be below 16 mg/L (~300 ng BSA absolute). Aromatic amino acid analysis (AAAA) offers excellent accuracy and a precision of about 5% relative standard deviation, including the hydrolysis step. The method was validated with certified reference materials (CRM) of amino acids and of a pure protein (bovine serum albumin, BSA). AAAA can be used for the quantification of aromatic amino acids, isolated peptides or proteins, complex peptide or protein samples, such as serum or milk powder, and peptides or proteins immobilized on solid supports. PMID:27559481
Yang, Shuai; Liu, Ying
2018-08-01
Liquid crystal nematic elastomers are smart, anisotropic, viscoelastic solids that combine the properties of rubbers and liquid crystals and are thermally sensitive. In this paper, the wave dispersion in a liquid crystal nematic elastomer (NE) porous phononic crystal subjected to an external thermal stimulus is theoretically investigated. First, an energy function is proposed to determine thermo-induced deformation in NE periodic structures. Based on this function, thermo-induced band variation in liquid crystal NE porous phononic crystals is investigated in detail. The results show that when the liquid crystal elastomer changes from the nematic state to the isotropic state as the temperature varies, absolute band gaps at different bands are opened or closed. There exists a threshold temperature above which the absolute band gaps are opened or closed. Larger porosity favors the opening of the absolute band gaps. Deviation of the director from the structural symmetry axis is advantageous for absolute band gap opening in the nematic state whilst constraining absolute band gap opening in the isotropic state. The combined effect of temperature and director orientation provides an added degree of freedom in the intelligent tuning of absolute band gaps in phononic crystals. Copyright © 2018 Elsevier B.V. All rights reserved.
Computer Programs for the Semantic Differential: Further Modifications.
ERIC Educational Resources Information Center
Lawson, Edwin D.; And Others
The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…
Estimation of the lower flammability limit of organic compounds as a function of temperature.
Rowley, J R; Rowley, R L; Wilding, W V
2011-02-15
A new method of estimating the lower flammability limit (LFL) of general organic compounds is presented. The LFL is predicted at 298 K for gases and the lower temperature limit for solids and liquids from structural contributions and the ideal gas heat of formation of the fuel. The average absolute deviation from more than 500 experimental data points is 10.7%. In a previous study, the widely used modified Burgess-Wheeler law was shown to underestimate the effect of temperature on the lower flammability limit when determined in a large-diameter vessel. An improved version of the modified Burgess-Wheeler law is presented that represents the temperature dependence of LFL data determined in large-diameter vessels more accurately. When the LFL is estimated at increased temperatures using a combination of this model and the proposed structural-contribution method, an average absolute deviation of 3.3% is returned when compared with 65 data points for 17 organic compounds determined in an ASHRAE-style apparatus. Copyright © 2010 Elsevier B.V. All rights reserved.
Determining a one-tailed upper limit for future sample relative reproducibility standard deviations.
McClure, Foster D; Lee, Jung K
2006-01-01
A formula was developed to determine a one-tailed 100p% upper limit for future sample percent relative reproducibility standard deviations (RSD_R,% = 100·s_R/ȳ), where s_R is the sample reproducibility standard deviation, i.e., the square root of a linear combination of the sample repeatability variance (s_r²) and the sample laboratory-to-laboratory variance (s_L²), s_R = √(s_r² + s_L²), and ȳ is the sample mean. The future RSD_R,% is expected to arise from a population of potential RSD_R,% values whose true mean is ζ_R,% = 100σ_R/μ, where σ_R and μ are the population reproducibility standard deviation and mean, respectively.
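As a quick illustration, the quantities defined in this abstract combine as sketched below; the numbers are illustrative, not from the paper:

```python
import math

def reproducibility_rsd(s_r, s_L, y_bar):
    """Sample reproducibility SD and percent relative RSD_R.

    s_r   -- sample repeatability standard deviation
    s_L   -- sample laboratory-to-laboratory standard deviation
    y_bar -- sample mean
    """
    s_R = math.sqrt(s_r ** 2 + s_L ** 2)  # s_R = sqrt(s_r^2 + s_L^2)
    rsd = 100.0 * s_R / y_bar             # RSD_R,% = 100 * s_R / y_bar
    return s_R, rsd

s_R, rsd = reproducibility_rsd(s_r=0.3, s_L=0.4, y_bar=10.0)
print(round(s_R, 6), round(rsd, 6))  # 0.5 5.0
```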
SU-F-J-95: Impact of Shape Complexity On the Accuracy of Gradient-Based PET Volume Delineation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dance, M; Wu, G; Gao, Y
2016-06-15
Purpose: Explore the correlation of tumor shape complexity with PET target volume accuracy when delineated with a gradient-based segmentation tool. Methods: A total of 24 clinically realistic digital PET Monte Carlo (MC) phantoms of NSCLC were used in the study. The phantoms simulated 29 thoracic lesions (lung primary and mediastinal lymph nodes) of varying size, shape, location, and 18F-FDG activity. A program was developed to calculate a curvature vector along the outline, and the standard deviation of this vector was used as a metric to quantify a shape's "complexity score". This complexity score was calculated for standard geometric shapes and MC-generated target volumes in PET phantom images. All lesions were contoured using a commercially available gradient-based segmentation tool, and the differences in volume from the MC-generated volumes were calculated as the measure of segmentation accuracy. Results: The average absolute percent difference between the MC volumes and the gradient-based volumes was 11% (0.4%-48.4%). The complexity score correlated strongly with standard geometric shapes. However, no relationship was found between the complexity score and the accuracy of segmentation by the gradient-based tool on MC-simulated tumors (R² = 0.156). When the lesions were grouped into primary lung lesions and mediastinal/mediastinal-adjacent lesions, the average absolute percent differences in volume were 6% and 29%, respectively. The former group is more isolated, while the latter is more surrounded by tissues with relatively high SUV background. Conclusion: The shape complexity of NSCLC lesions has little effect on the accuracy of the gradient-based segmentation method and thus is not a good predictor of uncertainty in target volume delineation. Location of a lesion within a relatively high SUV background may play a more significant role in the accuracy of gradient-based segmentation.
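The complexity metric described (standard deviation of a curvature vector along the outline) can be sketched as below; the discrete-curvature approximation and the function name are our assumptions, not the authors' implementation:

```python
import numpy as np

def complexity_score(points):
    """Std of a discrete curvature measure along a closed outline.

    points -- (N, 2) array of contour vertices, ordered around the shape.
    Curvature is approximated by the turning angle at each vertex
    divided by the local edge length.
    """
    pts = np.asarray(points, dtype=float)
    fwd = np.roll(pts, -1, axis=0) - pts               # edge vectors
    ang = np.arctan2(fwd[:, 1], fwd[:, 0])             # edge headings
    d_ang = np.diff(ang, append=ang[:1])               # heading changes (wrap-around)
    turn = np.angle(np.exp(1j * d_ang))                # wrap angles into (-pi, pi]
    ds = np.linalg.norm(fwd, axis=1)                   # edge lengths
    return float(np.std(turn / ds))

# A circle has constant curvature, so its complexity score is essentially 0;
# a star-shaped outline scores higher.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
print(complexity_score(circle) < 1e-6)  # True
```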
Metrological activity determination of 133Ba by sum-peak absolute method
NASA Astrophysics Data System (ADS)
da Silva, R. L.; de Almeida, M. C. M.; Delgado, J. U.; Poledna, R.; Santos, A.; de Veras, E. V.; Rangel, J.; Trindade, O. L.
2016-07-01
The National Laboratory for Metrology of Ionizing Radiation provides gamma-emitting radionuclide sources standardized in activity with reduced uncertainties. Relative methods require standards to determine the sample activity, while absolute methods, such as the sum-peak method, do not. The activity is obtained directly with good accuracy and low uncertainties. 133Ba is used in research laboratories and in the calibration of detectors for analysis in different work areas. Classical absolute methods cannot standardize 133Ba because of its complex decay scheme. The sum-peak method, using gamma spectrometry with a germanium detector, standardizes 133Ba samples. Uncertainties lower than 1% in the activity results were obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cornejo, Juan Carlos
The Standard Model has been the most successful theory describing the fundamental interactions of particles. As of the writing of this dissertation, the Standard Model has not been shown to make a false prediction. However, the limitations of the Standard Model have long been suspected from its lack of a description of gravity or dark matter. Its largest challenge to date has been the observation of neutrino oscillations, and the implication that neutrinos may not be massless, as required by the Standard Model. The growing consensus is that the Standard Model is simply a lower-energy effective field theory, and that new physics lies at much higher energies. The Qweak experiment is testing the electroweak theory of the Standard Model by making a precise determination of the weak charge of the proton (Q_w^p). Any signs of "new physics" would appear as a deviation from the Standard Model prediction. The weak charge is determined via a precise measurement of the parity-violating asymmetry in elastic scattering of a longitudinally polarized electron beam from an unpolarized proton target. The experiment required that the electron beam polarization be measured to an absolute uncertainty of 1%. At this level, the electron beam polarization was projected to contribute the single largest experimental uncertainty to the parity-violating asymmetry measurement. This dissertation details the use of Compton scattering to determine the electron beam polarization via detection of the scattered photon. I conclude the remainder of the dissertation with an independent analysis of the blinded Qweak data.
Inter-comparison of precipitable water among reanalyses and its effect on downscaling in the tropics
NASA Astrophysics Data System (ADS)
Takahashi, H. G.; Fujita, M.; Hara, M.
2012-12-01
This paper compared precipitable water (PW) among four major reanalyses. In addition, we investigated the effect of the boundary conditions on downscaling in the tropics using a regional climate model. The spatial pattern of PW in the reanalyses agreed closely with observations. However, the absolute amounts of PW in some reanalyses were very small compared to observations. The discrepancies in the 12-year mean PW in July over the Southeast Asian monsoon region exceeded the inter-annual standard deviation of the PW. There was also a discrepancy in tropical PW throughout the year, an indication that the problem is not regional but global. Downscaling experiments forced by the four different reanalyses were conducted. The differences in atmospheric circulation, including the monsoon westerlies and various disturbances, were very small among the reanalyses. However, simulated precipitation was only 60% of observed precipitation, although the dry bias in the boundary conditions was only 6%. This result indicates that dry bias has large effects on precipitation in downscaling over the tropics. It suggests that a regional climate downscaled from ensemble-mean boundary conditions is quite different from the ensemble mean of the regional climates downscaled separately from the boundary conditions of the ensemble members in the tropics. Downscaled models can provide realistic simulations of regional tropical climates only if the boundary conditions include realistic absolute amounts of PW. Use of boundary conditions that include realistic absolute amounts of PW in downscaling in the tropics is therefore imperative at present. This work was partly supported by the Global Environment Research Fund (RFa-1101) of the Ministry of the Environment, Japan.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Obeid, L; Esteve, F; Adam, J
2014-06-15
Purpose: Synchrotron stereotactic radiotherapy (SSRT) is an innovative treatment combining the selective accumulation of heavy elements in tumors with stereotactic irradiation using monochromatic medium-energy x-rays from a synchrotron source. Phase I/II clinical trials on brain metastasis are underway using venous infusion of iodinated contrast agents. The radiation dose enhancement depends on the amount of iodine in the tumor and its time course. In the present study, the reproducibility of iodine concentrations between the CT planning scan day (Day 0) and the treatment day (Day 10) was assessed in order to predict dose errors. Methods: On each of Days 0 and 10, three patients received a biphasic intravenous injection of iodinated contrast agent (40 ml at 4 ml/s, followed by 160 ml at 0.5 ml/s) in order to ensure stable intra-tumoral amounts of iodine during the treatment. Two volumetric CT scans (before and after iodine injection) and a multi-slice dynamic CT of the brain were performed using conventional radiotherapy CT (Day 0) or quantitative synchrotron radiation CT (Day 10). A 3D rigid registration between the images was performed. The absolute and relative differences in absolute iodine concentrations and their corresponding dose errors were evaluated in the GTV and PTV used for treatment planning. Results: The differences in iodine concentrations remained within the standard deviation limits. The 3D absolute differences followed a normal distribution centered at zero mg/ml with a variance (~1 mg/ml) related to the image noise. Conclusion: The results suggest that dose errors depend only on the image noise. This study shows that stable amounts of iodine are achievable in brain metastasis for SSRT treatment over a 10-day interval.
Wearable Vector Electrical Bioimpedance System to Assess Knee Joint Health
Hersek, Sinan; Töreyin, Hakan; Teague, Caitlin N.; Millard-Stafford, Mindy L.; Jeong, Hyeon-Ki; Bavare, Miheer M.; Wolkoff, Paul; Sawka, Michael N.; Inan, Omer T.
2017-01-01
Objective We designed and validated a portable electrical bioimpedance (EBI) system to quantify knee joint health. Methods Five separate experiments were performed to demonstrate the: (1) ability of the EBI system to assess knee injury and recovery; (2) inter-day variability of knee EBI measurements; (3) sensitivity of the system to small changes in interstitial fluid volume; (4) reducing the error of EBI measurements using acceleration signals; (5) use of the system with dry electrodes integrated to a wearable knee wrap. Results (1) The absolute difference in resistance (R) and reactance (X) from the left to the right knee was able to distinguish injured and healthy knees (p<0.05); the absolute difference in R decreased significantly (p<0.05) in injured subjects following rehabilitation. (2) The average inter-day variability (standard deviation) of the absolute difference in knee R was 2.5Ω, and for X was, 1.2 Ω. (3) Local heating/cooling resulted in a significant decrease/increase in knee R (p<0.01). (4) The proposed subject position detection algorithm achieved 97.4% leave-one subject out cross-validated accuracy and 98.2% precision in detecting when the subject is in the correct position to take measurements. (5) Linear regression between the knee R and X measured using the wet electrodes and the designed wearable knee wrap were highly correlated (r2 = 0.8 and 0.9, respectively). Conclusion This work demonstrates the use of wearable EBI measurements in monitoring knee joint health. Significance The proposed wearable system has the potential for assessing knee joint health outside the clinic/lab and help guide rehabilitation. PMID:28026745
Laser interferometry method for absolute measurement of the acceleration of gravity
NASA Technical Reports Server (NTRS)
Hudson, O. K.
1971-01-01
The gravimeter permits more accurate and precise absolute measurement of g without reference to the Potsdam values as absolute standards. The device is basically a Michelson laser-beam interferometer in which one arm is a mass fitted with a corner-cube reflector.
Uncertainties in Climatological Seawater Density Calculations
NASA Astrophysics Data System (ADS)
Dai, Hao; Zhang, Xining
2018-03-01
In most applications, with seawater conductivity, temperature, and pressure data measured in situ by various observation instruments, e.g., Conductivity-Temperature-Depth (CTD) instruments, the density, which has strong ties to ocean dynamics, is computed according to equations of state for seawater. This paper, based on the density computational formulae in the Thermodynamic Equation of Seawater 2010 (TEOS-10), follows the Guide to the Expression of Uncertainty in Measurement (GUM) and assesses the main sources of uncertainty. Using climatological decadal-average temperature/Practical Salinity/pressure data sets for the global ocean provided by the National Oceanic and Atmospheric Administration (NOAA), correlation coefficients between uncertainty sources are determined and the combined standard uncertainties u_c(ρ) in seawater density calculations are evaluated. For grid points in the world ocean with 0.25° resolution, the standard deviations of u_c(ρ) in vertical profiles are of the order of 10⁻⁴ kg m⁻³. The u_c(ρ) means in vertical profiles of the Baltic Sea are about 0.028 kg m⁻³ due to the larger scatter of the Absolute Salinity anomaly. The distribution of the u_c(ρ) means in vertical profiles of the world ocean outside the Baltic Sea, which covers the range (0.004, 0.01) kg m⁻³, is related to the correlation coefficient r(S_A, p) between Absolute Salinity S_A and pressure p. The results in this paper are based on the measuring uncertainties of high-accuracy CTD sensors. Larger uncertainties in density calculations may arise with lower sensor specifications. This work may provide valuable uncertainty information required for reliability considerations of ocean circulation and global climate models.
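The GUM combination of correlated uncertainty components that this analysis applies can be sketched generically as follows; the sensitivity coefficients and input uncertainties in the example are hypothetical, not the CTD specifications used in the paper:

```python
import numpy as np

def combined_standard_uncertainty(c, u, r):
    """GUM combined standard uncertainty for (possibly) correlated inputs.

    c -- sensitivity coefficients (partial derivatives of the output)
    u -- standard uncertainties of the inputs
    r -- correlation-coefficient matrix between the inputs

    u_c^2 = sum_i sum_j c_i c_j u_i u_j r_ij
    """
    c = np.asarray(c, float)
    u = np.asarray(u, float)
    r = np.asarray(r, float)
    cov = np.outer(c * u, c * u) * r   # c_i c_j u_i u_j r_ij
    return float(np.sqrt(cov.sum()))

# Hypothetical sensitivities/uncertainties for three inputs, uncorrelated case:
uc = combined_standard_uncertainty(
    c=[0.75, -0.2, 4.5e-3],
    u=[0.004, 0.002, 1.0],
    r=np.eye(3),
)
print(round(uc, 5))  # 0.00542
```

With r = I this reduces to root-sum-square combination; off-diagonal r entries add the correlation terms of GUM Eq. (13).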
Chen, Yasheng; Juttukonda, Meher; Su, Yi; Benzinger, Tammie; Rubin, Brian G.; Lee, Yueh Z.; Lin, Weili; Shen, Dinggang; Lalush, David
2015-01-01
Purpose To develop a positron emission tomography (PET) attenuation correction method for brain PET/magnetic resonance (MR) imaging by estimating pseudo computed tomographic (CT) images from T1-weighted MR and atlas CT images. Materials and Methods In this institutional review board–approved and HIPAA-compliant study, PET/MR/CT images were acquired in 20 subjects after obtaining written consent. A probabilistic air segmentation and sparse regression (PASSR) method was developed for pseudo CT estimation. Air segmentation was performed with assistance from a probabilistic air map. For nonair regions, the pseudo CT numbers were estimated via sparse regression by using atlas MR patches. The mean absolute percentage error (MAPE) on PET images was computed as the normalized mean absolute difference in PET signal intensity between a method and the reference standard continuous CT attenuation correction method. Friedman analysis of variance and Wilcoxon matched-pairs tests were performed for statistical comparison of MAPE between the PASSR method and Dixon segmentation, CT segmentation, and population averaged CT atlas (mean atlas) methods. Results The PASSR method yielded a mean MAPE ± standard deviation of 2.42% ± 1.0, 3.28% ± 0.93, and 2.16% ± 1.75, respectively, in the whole brain, gray matter, and white matter, which were significantly lower than the Dixon, CT segmentation, and mean atlas values (P < .01). Moreover, 68.0% ± 16.5, 85.8% ± 12.9, and 96.0% ± 2.5 of whole-brain volume had within ±2%, ±5%, and ±10% percentage error by using PASSR, respectively, which was significantly higher than other methods (P < .01). Conclusion PASSR outperformed the Dixon, CT segmentation, and mean atlas methods by reducing PET error owing to attenuation correction. © RSNA, 2014 PMID:25521778
Setford, Steven; Grady, Mike; Mackintosh, Stephen; Donald, Robert; Levy, Brian
2018-05-01
MARD (mean absolute relative difference) is increasingly used to describe the performance of glucose monitoring systems, providing a single-value quantitative measure of accuracy and allowing comparisons between different monitoring systems. This study reports MARDs for the OneTouch Verio® glucose meter clinical data set of 80,258 data points (671 individual batches) gathered as part of a 7.5-year self-surveillance program. Test strips were routinely sampled from randomly selected manufacturer's production batches and sent to one of three clinic sites for clinical accuracy assessment using fresh capillary blood from patients with diabetes, using both the meter system and a standard laboratory reference instrument. Evaluation of the distribution of strip-batch MARD yielded a mean value of 5.05% (range: 3.68-6.43% at ±1.96 standard deviations from the mean). The overall MARD for all clinic data points (N = 80,258) was also 5.05%, while a mean bias of 1.28 was recorded. MARD by glucose level was found to be consistent, yielding a maximum value of 4.81% at higher glucose (≥100 mg/dL) and a mean absolute difference (MAD) of 5.60 mg/dL at low glucose (<100 mg/dL). MARD by year of manufacture varied from 4.67-5.42%, indicating consistent accuracy performance over the surveillance period. This 7.5-year surveillance program showed that the meter system exhibits consistently low MARD by batch, glucose level, and year, indicating close agreement with established reference methods while exhibiting lower MARD values than continuous glucose monitoring (CGM) systems, and providing users with confidence in performance when transitioning to each new strip batch.
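The two accuracy measures used here, MARD and MAD, can be sketched as below; the readings in the example are illustrative, not study data:

```python
import numpy as np

def mard_percent(meter, reference):
    """Mean absolute relative difference (MARD, %) vs. a lab reference."""
    meter = np.asarray(meter, float)
    reference = np.asarray(reference, float)
    return float(np.mean(np.abs(meter - reference) / reference) * 100.0)

def mad_mgdl(meter, reference):
    """Mean absolute difference (mg/dL), typically reported at low glucose."""
    meter = np.asarray(meter, float)
    reference = np.asarray(reference, float)
    return float(np.mean(np.abs(meter - reference)))

# Illustrative meter vs. reference readings (mg/dL):
ref = [100.0, 200.0, 80.0]
mtr = [95.0, 210.0, 84.0]
print(round(mard_percent(mtr, ref), 2))  # 5.0
```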
ERIC Educational Resources Information Center
Fowell, S. L.; Fewtrell, R.; McLaughlin, P. J.
2008-01-01
Absolute standard setting procedures are recommended for assessment in medical education. Absolute, test-centred standard setting procedures were introduced for written assessments in the Liverpool MBChB in 2001. The modified Angoff and Ebel methods have been used for short answer question-based and extended matching question-based papers,…
Evaluating the accuracy and large inaccuracy of two continuous glucose monitoring systems.
Leelarathna, Lalantha; Nodale, Marianna; Allen, Janet M; Elleri, Daniela; Kumareswaran, Kavita; Haidar, Ahmad; Caldwell, Karen; Wilinska, Malgorzata E; Acerini, Carlo L; Evans, Mark L; Murphy, Helen R; Dunger, David B; Hovorka, Roman
2013-02-01
This study evaluated the accuracy and large inaccuracy of the Freestyle Navigator (FSN) (Abbott Diabetes Care, Alameda, CA) and Dexcom SEVEN PLUS (DSP) (Dexcom, Inc., San Diego, CA) continuous glucose monitoring (CGM) systems during closed-loop studies. Paired CGM and plasma glucose values (7,182 data pairs) were collected, every 15-60 min, from 32 adults (36.2±9.3 years) and 20 adolescents (15.3±1.5 years) with type 1 diabetes who participated in closed-loop studies. Levels 1, 2, and 3 of large sensor error with increasing severity were defined according to absolute relative deviation greater than or equal to ±40%, ±50%, and ±60% at a reference glucose level of ≥6 mmol/L or absolute deviation greater than or equal to ±2.4 mmol/L,±3.0 mmol/L, and ±3.6 mmol/L at a reference glucose level of <6 mmol/L. Median absolute relative deviation was 9.9% for FSN and 12.6% for DSP. Proportions of data points in Zones A and B of Clarke error grid analysis were similar (96.4% for FSN vs. 97.8% for DSP). Large sensor over-reading, which increases risk of insulin over-delivery and hypoglycemia, occurred two- to threefold more frequently with DSP than FSN (once every 2.5, 4.6, and 10.7 days of FSN use vs. 1.2, 2.0, and 3.7 days of DSP use for Level 1-3 errors, respectively). At levels 2 and 3, large sensor errors lasting 1 h or longer were absent with FSN but persisted with DSP. FSN and DSP differ substantially in the frequency and duration of large inaccuracy despite only modest differences in conventional measures of numerical and clinical accuracy. Further evaluations are required to confirm that FSN is more suitable for integration into closed-loop delivery systems.
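The three-level large-error definition above (relative thresholds at reference glucose ≥6 mmol/L, absolute thresholds below) can be sketched as a classifier; the function name and return convention are ours:

```python
def large_error_level(cgm, ref):
    """Classify a CGM-reference pair into large-error Levels 1-3 (0 = none).

    Relative thresholds (±40/50/60 %) apply at ref >= 6 mmol/L;
    absolute thresholds (±2.4/3.0/3.6 mmol/L) apply below 6 mmol/L.
    """
    if ref >= 6.0:
        dev = abs(cgm - ref) / ref * 100.0  # absolute relative deviation, %
        limits = (40.0, 50.0, 60.0)
    else:
        dev = abs(cgm - ref)                # absolute deviation, mmol/L
        limits = (2.4, 3.0, 3.6)
    level = 0
    for i, lim in enumerate(limits, start=1):
        if dev >= lim:
            level = i                       # keep the most severe level reached
    return level

print(large_error_level(10.8, 6.0))  # 3  (80% relative deviation)
print(large_error_level(4.0, 5.0))   # 0  (1.0 mmol/L, below Level 1)
```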
Endo, Takao; Fujikado, Takashi; Hirota, Masakazu; Kanda, Hiroyuki; Morimoto, Takeshi; Nishida, Kohji
2018-04-20
To evaluate the improvement in targeted reaching movements toward targets of various contrasts in a patient implanted with a suprachoroidal-transretinal stimulation (STS) retinal prosthesis. An STS retinal prosthesis was implanted in the right eye of a 42-year-old man with advanced Stargardt disease (visual acuity: right eye, light perception; left eye, hand motion). In localization tests during the 1-year follow-up period, the patient attempted to touch the center of a white square target (visual angle, 10°; contrast, 96, 85, or 74%) displayed at a random position on a monitor. The distance between the touched point and the center of the target (the absolute deviation) was averaged over 20 trials with the STS system on or off. With the left eye occluded, the absolute deviation was not consistently lower with the system on than off for high-contrast (96%) targets, but was consistently lower with the system on for low-contrast (74%) targets. With both eyes open, the absolute deviation was consistently lower with the system on than off for 85%-contrast targets. With the system on and 96%-contrast targets, we detected a shorter response time while covering the right eye, which was being implanted with the STS, compared to covering the left eye (2.41 ± 2.52 vs 8.45 ± 3.78 s, p < 0.01). Performance of a reaching movement improved in a patient with an STS retinal prosthesis implanted in an eye with residual natural vision. Patients with a retinal prosthesis may be able to improve their visual performance by using both artificial vision and their residual natural vision. Beginning date of the trial: Feb. 20, 2014 Date of registration: Jan. 4, 2014 Trial registration number: UMIN000012754 Registration site: UMIN Clinical Trials Registry (UMIN-CTR) http://www.umin.ac.jp/ctr/index.htm.
ERIC Educational Resources Information Center
Kwon, Heekyung
2011-01-01
The objective of this study is to provide a systematic account of three typical phenomena surrounding absolute accuracy of metacomprehension assessments: (1) the absolute accuracy of predictions is typically quite low; (2) there exist individual differences in absolute accuracy of predictions as a function of reading skill; and (3) postdictions…
Packing Fraction of a Two-dimensional Eden Model with Random-Sized Particles
NASA Astrophysics Data System (ADS)
Kobayashi, Naoki; Yamazaki, Hiroshi
2018-01-01
We have performed a numerical simulation of a two-dimensional Eden model with random-sized particles. In the present model, the particle radii are generated from a Gaussian distribution with mean μ and standard deviation σ. First, we examined the bulk packing fraction of the Eden cluster and investigated the effects of the standard deviation and the total number of particles NT. We show that the bulk packing fraction depends on the number of particles and the standard deviation. In particular, for the dependence on the standard deviation, we determined the asymptotic value of the bulk packing fraction in the limit of the dimensionless standard deviation. This value is larger than the packing fraction obtained in a previous study of the Eden model with uniform-size particles. Second, we investigated the packing fraction of the entire Eden cluster, including the effect of the interface fluctuation. We find that the entire packing fraction depends on the number of particles while it is independent of the standard deviation, in contrast to the bulk packing fraction. In a similar way to the bulk packing fraction, we obtained the asymptotic value of the entire packing fraction in the limit NT → ∞. The obtained value of the entire packing fraction is smaller than the bulk value. This fact suggests that the interface fluctuation of the Eden cluster influences the packing fraction.
Complexities of follicle deviation during selection of a dominant follicle in Bos taurus heifers.
Ginther, O J; Baldrighi, J M; Siddiqui, M A R; Araujo, E R
2016-11-01
Follicle deviation during a follicular wave is a continuation in growth rate of the dominant follicle (F1) and decreased growth rate of the largest subordinate follicle (F2). The reliability of using an F1 of 8.5 mm to represent the beginning of expected deviation for experimental purposes during waves 1 and 2 (n = 26 per wave) was studied daily in heifers. Each wave was subgrouped as follows: standard subgroup (F1 larger than F2 for 2 days preceding deviation and F2 > 7.0 mm on the day of deviation), undersized subgroup (F2 did not attain 7.0 mm by the day of deviation), and switched subgroup (F2 larger than F1 at least once on the 2 days before or on the day of deviation). For each wave, mean differences in diameter between F1 and F2 changed abruptly at expected deviation in the standard subgroup but began 1 day before expected deviation in the undersized and switched subgroups. Concentrations of FSH in the wave-stimulating FSH surge and an increase in LH centered on expected deviation did not differ among subgroups. Results for each wave indicated that (1) expected deviation (F1, 8.5 mm) was a reliable representation of actual deviation in the standard subgroup but not in the undersized and switched subgroups; (2) concentrations of the gonadotropins normalized to expected deviation were similar among the three subgroups, indicating that the day of deviation was related to diameter of F1 and not F2; and (3) defining an expected day of deviation for experimental use should consider both diameter of F1 and the characteristics of deviation. Copyright © 2016 Elsevier Inc. All rights reserved.
40 CFR 90.708 - Cumulative Sum (CumSum) procedure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... is 5.0×σ, and is a function of the standard deviation, σ. σ = the sample standard deviation and is... individual engine. FEL = Family Emission Limit (the standard if no FEL). F = 0.25×σ. (2) After each test pursuant...
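The one-sided CumSum recursion used in such procedures has the general form sketched below, with the action limit H = 5.0×σ and allowance F = 0.25×σ taken from the excerpt; treat this as a sketch of the regulatory formula, not a substitute for the full §90.708 procedure:

```python
def cumsum_statistic(emissions, fel, sigma):
    """One-sided CUSUM in the form used by 40 CFR 90.708 (sketch).

    C_i = max(0, C_{i-1} + X_i - (FEL + F)), with F = 0.25*sigma.
    The statistic is compared against the action limit H = 5.0*sigma.
    """
    f = 0.25 * sigma
    h = 5.0 * sigma
    c = 0.0
    flags = []
    for x in emissions:
        c = max(0.0, c + x - (fel + f))  # accumulate only exceedances of FEL + F
        flags.append(c >= h)             # True once the action limit is reached
    return c, flags

# Illustrative emission results against FEL = 1.0, sigma = 0.1:
c, flags = cumsum_statistic([1.0, 1.2, 1.1], fel=1.0, sigma=0.1)
print(round(c, 3), flags)  # 0.25 [False, False, False]
```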
Accuracy of noncycloplegic refraction performed at school screening camps.
Khurana, Rolli; Tibrewal, Shailja; Ganesh, Suma; Tarkar, Rajoo; Nguyen, Phuong Thi Thanh; Siddiqui, Zeeshan; Dasgupta, Shantanu
2018-06-01
The aim of this study was to compare noncycloplegic refraction performed in a school camp with that performed in an eye clinic in children aged 6-16 years. A prospective study of children with unaided vision <0.2 LogMAR who underwent noncycloplegic retinoscopy (NCR) and subjective refraction (SR) in camp and subsequently in the eye clinic between February and March 2017 was performed. A masked optometrist performed refractions in both settings. The agreement between refraction values obtained in the two settings was compared using Bland-Altman analysis. A total of 217 eyes were included in this study. Between the school camp and eye clinic, the mean absolute error ± standard deviation in spherical equivalent (SE) of NCR was 0.33 ± 0.4 D and that of SR was 0.26 ± 0.5 D. The limits of agreement for NCR were +0.91 D to -1.09 D and for SR were +1.15 D to -1.06 D. The mean absolute error in SE was ≤0.5 D in 92.62% of eyes (95% confidence interval 88%-95%). A certain degree of variability exists between noncycloplegic refraction done in school camps and in the eye clinic. It was found to be accurate within 0.5 D of SE in 92.62% of eyes for refractive errors up to 4.5 D of myopia, 3 D of cylinder, and 1.5 D of hyperopia.
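The Bland-Altman quantities reported here (mean bias and limits of agreement) can be sketched as follows; the spherical-equivalent values in the example are illustrative, not study data:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Mean bias and 95% limits of agreement between two methods."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = float(d.mean())
    sd = float(d.std(ddof=1))            # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative camp vs. clinic spherical equivalents (diopters):
camp   = [-1.00, -2.25, 0.50, -0.75]
clinic = [-1.25, -2.00, 0.25, -0.50]
bias, loa_low, loa_high = bland_altman_limits(camp, clinic)
print(round(bias, 2), round(loa_low, 2), round(loa_high, 2))  # 0.0 -0.57 0.57
```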
In situ nanoscale observations of gypsum dissolution by digital holographic microscopy.
Feng, Pan; Brand, Alexander S; Chen, Lei; Bullard, Jeffrey W
2017-06-01
Recent topography measurements of gypsum dissolution have not reported absolute dissolution rates, but instead focus on the rates of formation and growth of etch pits. In this study, the in situ absolute retreat rates of gypsum (010) cleavage surfaces at etch pits, at cleavage steps, and at apparently defect-free portions of the surface are measured in flowing water by reflection digital holographic microscopy. Observations made on randomly sampled fields of view on seven different cleavage surfaces reveal a range of local dissolution rates, the local rate being determined by the topographical features at which material is removed. Four characteristic types of topographical activity are observed: (1) smooth regions, free of etch pits or other noticeable defects, where dissolution rates are relatively low; (2) shallow, wide etch pits bounded by faceted walls, which grow gradually at rates somewhat greater than in smooth regions; (3) narrow, deep etch pits, which form and grow throughout the observation period at rates that exceed those at the shallow etch pits; and (4) relatively few submicrometer cleavage steps, which move in a wave-like manner and yield local dissolution fluxes about five times greater than at etch pits. Molar dissolution rates at all topographical features except submicrometer steps can be aggregated into a continuous, mildly bimodal distribution with a mean of 3.0 µmol m⁻² s⁻¹ and a standard deviation of 0.7 µmol m⁻² s⁻¹.
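The link between a measured surface retreat rate and a molar dissolution flux is the standard conversion J = v·ρ/M; the gypsum constants below are handbook values we assume for illustration, not taken from the paper:

```python
def molar_flux(retreat_rate_m_s, density_kg_m3=2320.0, molar_mass_kg_mol=0.17217):
    """Convert a surface retreat rate (m/s) to a molar dissolution flux.

    J = v * rho / M, in mol m^-2 s^-1. Defaults are approximate handbook
    values for gypsum (CaSO4·2H2O): rho ~ 2320 kg/m^3, M ~ 172.17 g/mol.
    """
    return retreat_rate_m_s * density_kg_m3 / molar_mass_kg_mol

# Retreat rate that corresponds to the reported mean flux of 3.0 µmol m^-2 s^-1
# under these assumed constants (~0.22 nm/s):
v = 3.0e-6 * 0.17217 / 2320.0
print(f"{molar_flux(v) * 1e6:.1f} umol m^-2 s^-1")  # 3.0 umol m^-2 s^-1
```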
Röhnisch, Hanna E; Eriksson, Jan; Müllner, Elisabeth; Agback, Peter; Sandström, Corine; Moazzami, Ali A
2018-02-06
A key limiting step for high-throughput NMR-based metabolomics is the lack of rapid and accurate tools for absolute quantification of many metabolites. We developed, implemented, and evaluated an algorithm, AQuA (Automated Quantification Algorithm), for targeted metabolite quantification from complex ¹H NMR spectra. AQuA operates based on spectral data extracted from a library consisting of one standard calibration spectrum for each metabolite. It uses one preselected NMR signal per metabolite for determining absolute concentrations and does so by effectively accounting for interferences caused by other metabolites. AQuA was implemented and evaluated using experimental NMR spectra from human plasma. The accuracy of AQuA was tested and confirmed in comparison with a manual spectral fitting approach using the ChenomX software, in which 61 out of 67 metabolites quantified in 30 human plasma spectra showed a goodness-of-fit (r²) close to or exceeding 0.9 between the two approaches. In addition, three quality indicators generated by AQuA, namely, occurrence, interference, and positional deviation, were studied. These quality indicators permit evaluation of the results each time the algorithm is operated. The efficiency was tested and confirmed by implementing AQuA for quantification of 67 metabolites in a large data set comprising 1342 experimental spectra from human plasma, in which the whole computation took less than 1 s.
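As a generic illustration of interference-corrected quantification (this is not AQuA's actual algorithm; the response matrix and signal integrals below are hypothetical), one preselected signal per metabolite can be modeled as its own contribution plus known fractional contributions from interfering metabolites, and the concentrations recovered by solving the resulting linear system:

```python
# Rows: observed signal integrals; A[i][j]: response of metabolite j at
# metabolite i's preselected signal (diagonal = own response,
# off-diagonal = interference). All numbers are hypothetical.
A = [[1.00, 0.20, 0.00],
     [0.05, 1.00, 0.10],
     [0.00, 0.15, 1.00]]
observed = [1.20, 1.15, 1.15]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

conc = solve(A, observed)   # recovered concentrations
```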
NASA Technical Reports Server (NTRS)
King, Gary M.
1996-01-01
Methane oxidation associated with the belowground tissues of a common aquatic macrophyte, the burweed Sparganium eurycarpum, was assayed in situ by a chamber technique with acetylene or methyl fluoride as a methanotrophic inhibitor at a headspace concentration of 3 to 4%. Acetylene and methyl fluoride inhibited both methane oxidation and peat methanogenesis. However, inhibition of methanogenesis resulted in no obvious short-term effect on methane fluxes. Since neither inhibitor adversely affected plant metabolism and both inhibited methanotrophy equally well, acetylene was employed for routine assays because of its low cost and ease of use. Root-associated methanotrophy consumed a variable but significant fraction of the total potential methane flux; values varied between 1 and 58% (mean +/- standard deviation, 27.0% +/- 6.0%), with no consistent temporal or spatial pattern during late summer. The absolute amount of methane oxidized was not correlated with the total potential methane flux; this suggested that parameters other than methane availability (e.g., oxygen availability) controlled the rates of methane oxidation. Estimates of diffusive methane flux and oxidation at the peat surface indicated that methane emission occurred primarily through aboveground plant tissues; the absolute magnitude of methane oxidation was also greater in association with roots than at the peat surface. However, the relative extent of oxidation was greater at the latter locus.
2015-01-01
The goal of this study was to analyse perceptually and acoustically the voices of patients with Unilateral Vocal Fold Paralysis (UVFP) and compare them to the voices of normal subjects. These voices were analysed perceptually with the GRBAS scale and acoustically using the following parameters: mean fundamental frequency (F0), standard deviation of F0, jitter (ppq5), shimmer (apq11), mean harmonics-to-noise ratio (HNR), mean first (F1) and second (F2) formant frequencies, and standard deviations of the F1 and F2 frequencies. Statistically significant differences were found in all of the perceptual parameters. The jitter, shimmer, HNR, standard deviation of F0, and standard deviation of the F2 frequency were also statistically different between groups, for both genders. In the male data, differences were also found in the F1 and F2 frequency values and in the standard deviation of the F1 frequency. This study allowed the documentation of the alterations resulting from UVFP and addressed the exploration of parameters with limited information for this pathology. PMID:26557690
NASA Astrophysics Data System (ADS)
Krasnenko, N. P.; Kapegesheva, O. F.; Shamanaeva, L. G.
2017-11-01
Spatiotemporal dynamics of the standard deviations of three wind velocity components measured with a mini-sodar in the atmospheric boundary layer is analyzed. During the day on September 16 and at night on September 12 values of the standard deviation changed for the x- and y-components from 0.5 to 4 m/s, and for the z-component from 0.2 to 1.2 m/s. An analysis of the vertical profiles of the standard deviations of three wind velocity components for a 6-day measurement period has shown that the increase of σx and σy with altitude is well described by a power law dependence with exponent changing from 0.22 to 1.3 depending on the time of day, and σz depends linearly on the altitude. The approximation constants have been found and their errors have been estimated. The established physical regularities and the approximation constants allow the spatiotemporal dynamics of the standard deviation of three wind velocity components in the atmospheric boundary layer to be described and can be recommended for application in ABL models.
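A power-law dependence σ(z) = a·z^b of the kind reported above can be fitted by ordinary least squares in log-log coordinates. The altitudes and constants below are illustrative (synthetic data following an exact power law), not the paper's measurements:

```python
import math

# Synthetic sigma_x profile following sigma = 0.3 * z**0.5 exactly.
z = [50, 100, 150, 200, 300]          # altitude, m
sigma = [0.3 * h ** 0.5 for h in z]   # standard deviation, m/s

# Least-squares fit of log(sigma) = log(a) + b*log(z)
lx = [math.log(h) for h in z]
ly = [math.log(s) for s in sigma]
n = len(z)
mx, my = sum(lx) / n, sum(ly) / n
b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum((x - mx) ** 2 for x in lx)
a = math.exp(my - b * mx)             # recovers a = 0.3, b = 0.5
```

With real mini-sodar profiles, the exponent b would vary with time of day as the abstract describes, and the residuals of the log-log fit give the approximation error.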
A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.
Rhiel, G Steven
2007-02-01
This research study proves that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. This proof shows that the d(n) and a(n) values are applicable to the specific skewed distributions when the mean and standard deviation take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
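A generic sketch of the range-based approach follows. The constant d_n below is the usual standardized mean range for a normal sample of this size, used purely for illustration; Rhiel's tabulated values for specific skewed distributions would replace it, and the data are hypothetical:

```python
from statistics import mean

# Hypothetical sample; d_n is an illustrative standardized-mean-range
# constant (E[range]/sigma for n = 10 under normality), not Rhiel's
# tabulated value for a skewed distribution.
data = [2.1, 3.4, 2.8, 5.9, 3.1, 2.5, 4.2, 3.0, 2.7, 6.3]
d_n = 3.078

sigma_hat = (max(data) - min(data)) / d_n   # range estimator of sigma
cv_hat = sigma_hat / mean(data)             # range-based CV estimate
```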
Random errors in interferometry with the least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source, and between random error and the amplitude of the interference fringe.
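The effect of intensity noise on a least-squares phase estimate can be illustrated with a small Monte-Carlo sketch. This is a four-step phase-shifting model with assumed amplitudes and noise level, not the paper's configuration or its derived formulas:

```python
import math, random

random.seed(1)
deltas = [0, math.pi / 2, math.pi, 3 * math.pi / 2]  # phase shifts
true_phase, A, B, noise_sd = 0.7, 10.0, 5.0, 0.05

def estimate_phase():
    # Intensity samples I_k = A + B*cos(phase + delta_k) + intensity noise
    I = [A + B * math.cos(true_phase + d) + random.gauss(0, noise_sd)
         for d in deltas]
    # For these four shifts, the least-squares solution reduces to the
    # familiar four-step formula: atan2(I4 - I2, I1 - I3).
    return math.atan2(I[3] - I[1], I[0] - I[2])

est = [estimate_phase() for _ in range(2000)]
m = sum(est) / len(est)
sd = math.sqrt(sum((e - m) ** 2 for e in est) / (len(est) - 1))
# sd is the Monte-Carlo estimate of the phase-measurement standard deviation
```

The simulated sd can then be compared with a theoretical prediction, mirroring the comparison performed in the paper.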
ACCESS: Design and Sub-System Performance
NASA Technical Reports Server (NTRS)
Kaiser, Mary Elizabeth; Morris, Matthew J.; McCandliss, Stephan R.; Rauscher, Bernard J.; Kimble, Randy A.; Kruk, Jeffrey W.; Pelton, Russell; Mott, D. Brent; Wen, Hiting; Foltz, Roger;
2012-01-01
Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. ACCESS, "Absolute Color Calibration Experiment for Standard Stars", is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35-1.7 μm bandpass.
Measurement of the cosmic microwave background spectrum by the COBE FIRAS instrument
NASA Technical Reports Server (NTRS)
Mather, J. C.; Cheng, E. S.; Cottingham, D. A.; Eplee, R. E., Jr.; Fixsen, D. J.; Hewagama, T.; Isaacman, R. B.; Jensen, K. A.; Meyer, S. S.; Noerdlinger, P. D.
1994-01-01
The cosmic microwave background radiation (CMBR) has a blackbody spectrum within 3.4 × 10⁻⁸ erg cm⁻² s⁻¹ sr⁻¹ cm over the frequency range from 2 to 20 cm⁻¹ (5-0.5 mm). These measurements, derived from the Far-Infrared Absolute Spectrophotometer (FIRAS) instrument on the Cosmic Background Explorer (COBE) satellite, imply stringent limits on energy release in the early universe after t ≈ 1 year and redshift z ≈ 3 × 10⁶. The deviations are less than 0.30% of the peak brightness, with an rms value of 0.01%, and the dimensionless cosmological distortion parameters are limited to |y| < 2.5 × 10⁻⁵ and |μ| < 3.3 × 10⁻⁴ (95% confidence level). The temperature of the CMBR is 2.726 ± 0.010 K (95% confidence level systematic).
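The blackbody spectrum against which FIRAS measured these deviations can be evaluated directly. A sketch of the Planck brightness of a 2.726 K blackbody in CGS wavenumber units (erg cm⁻² s⁻¹ sr⁻¹ per cm⁻¹), scanned over the FIRAS band:

```python
import math

# CODATA constants in CGS: Planck constant (erg s), speed of light (cm/s),
# Boltzmann constant (erg/K); CMBR temperature (K).
h, c, k, T = 6.62607015e-27, 2.99792458e10, 1.380649e-16, 2.726

def planck(nu_tilde):
    """Blackbody brightness per unit wavenumber (nu_tilde in cm^-1)."""
    x = h * c * nu_tilde / (k * T)
    return 2 * h * c ** 2 * nu_tilde ** 3 / math.expm1(x)

grid = [2 + 0.01 * i for i in range(1801)]   # 2 to 20 cm^-1
peak_nu = max(grid, key=planck)              # wavenumber of peak brightness
peak_B = planck(peak_nu)
```

The peak falls near 5.3 cm⁻¹ with a brightness of roughly 1.15 × 10⁻⁴ erg cm⁻² s⁻¹ sr⁻¹ cm, which puts the quoted 3.4 × 10⁻⁸ deviation bound in scale.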
N2/O2/H2 Dual-Pump Cars: Validation Experiments
NASA Technical Reports Server (NTRS)
O'Byrne, S.; Danehy, P. M.; Cutler, A. D.
2003-01-01
The dual-pump coherent anti-Stokes Raman spectroscopy (CARS) method is used to measure temperature and the relative species densities of N2, O2 and H2 in two experiments. Average values and root-mean-square (RMS) deviations are determined. Mean temperature measurements in a furnace containing air between 300 and 1800 K agreed with thermocouple measurements within 26 K on average, while mean mole fractions agreed to within 1.6% of the expected value. The temperature measurement standard deviation averaged 64 K, while the standard deviation of the species mole fractions averaged 7.8% for O2 and 3.8% for N2, based on 200 single-shot measurements. Preliminary measurements have also been performed in a flat-flame burner for fuel-lean and fuel-rich flames. Temperature standard deviations of 77 K were measured, and the ratios of H2 to N2 and O2 to N2 had standard deviations from the mean value of 12.3% and 10% of the measured ratio, respectively.
Dulku, Simon; Smith, Henry B; Antcliff, Richard J
2013-01-01
To establish whether simulated keratometry values obtained by corneal mapping (videokeratography) would provide a superior refractive outcome to those obtained by Zeiss IOLMaster (partial coherence interferometry) in routine cataract surgery. Prospective, non-randomized, single-surgeon study set at the Royal United Hospital, Bath, UK, a district general hospital. Thirty-three patients undergoing routine cataract surgery in the absence of significant ocular comorbidity. Conventional biometry was recorded using the Zeiss IOLMaster. Postoperative refraction was calculated using the SRK/T formula and the most appropriate power of lens implanted. Preoperative keratometry values were also obtained using Humphrey Instruments Atlas Version A6 corneal mapping. Achieved refraction was compared with predicted refraction for the two methods of keratometry after the A-constants were optimized to obtain a mean arithmetic error of zero dioptres for each device. The mean absolute prediction error was 0.39 dioptres (standard deviation 0.29) for IOLMaster and 0.48 dioptres (standard deviation 0.31) for corneal mapping (P = 0.0015). Keratometry readings between the devices were highly correlated by Spearman correlation (0.97). The Bland-Altman plot demonstrated close agreement between keratometers, with a bias of 0.0079 dioptres and 95% limits of agreement of -0.48 to +0.49 dioptres. The IOLMaster was superior to Humphrey Atlas A6 corneal mapping in the prediction of postoperative refraction. This difference could not have been predicted from the keratometry readings alone. When comparing biometry devices, close agreement between readings should not be considered a substitute for actual postoperative refraction data. © 2012 The Authors. Clinical and Experimental Ophthalmology © 2012 Royal Australian and New Zealand College of Ophthalmologists.
Surgeon Perception of Risk and Benefit in the Decision to Operate.
Sacks, Greg D; Dawes, Aaron J; Ettner, Susan L; Brook, Robert H; Fox, Craig R; Maggard-Gibbons, Melinda; Ko, Clifford Y; Russell, Marcia M
2016-12-01
To determine how surgeons' perceptions of treatment risks and benefits influence their decisions to operate. Little is known about why one surgeon chooses to operate on a patient while another chooses not to. Using an online study, we presented a national sample of surgeons (N = 767) with four detailed clinical vignettes (mesenteric ischemia, gastrointestinal bleed, bowel obstruction, appendicitis) where the best treatment option was uncertain and asked them to: (1) judge the risks (probability of serious complications) and benefits (probability of recovery) for operative and nonoperative management and (2) decide whether or not they would recommend an operation. Across all clinical vignettes, surgeons varied markedly in both their assessments of the risks and benefits of operative and nonoperative management (narrowest range 4%-100% for all four predictions across vignettes) and in their decisions to operate (49%-85%). Surgeons were less likely to operate as their perceptions of operative risk increased [absolute difference (AD) = -29.6% from 1.0 standard deviation below to 1.0 standard deviation above the mean (95% confidence interval, CI: -31.6, -23.8)] and as their perceptions of nonoperative benefit increased [AD = -32.6% (95% CI: -32.8, -28.9)]. Surgeons were more likely to operate as their perceptions of operative benefit increased [AD = 18.7% (95% CI: 12.6, 21.5)] and as their perceptions of nonoperative risk increased [AD = 32.7% (95% CI: 28.7, 34.0)]. Differences in risk/benefit perceptions explained 39% of the observed variation in decisions to operate across the four vignettes. Given the same clinical scenarios, surgeons' perceptions of treatment risks and benefits vary and are highly predictive of their decisions to operate.
Forecast of Frost Days Based on Monthly Temperatures
NASA Astrophysics Data System (ADS)
Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.
2009-04-01
Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain) based on successive application of two models. The first one is a stochastic model, autoregressive integrated moving average (ARIMA), that forecasts monthly minimum absolute temperature (tmin) and monthly average of minimum temperature (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of minimum daily temperature during one month. Three ARIMA models were identified for the time series analyzed, with a seasonal period corresponding to one year. They present the same seasonal behavior (moving average differenced model) and different non-seasonal parts: autoregressive model (Model 1), moving average differenced model (Model 2), and autoregressive and moving average model (Model 3). At the same time, the results point out that minimum daily temperature (tdmin), for the meteorological stations studied, followed a normal distribution each month with a very similar standard deviation through the years. This standard deviation, obtained for each station and each month, could be used as a risk index for cold months. The application of Model 1 to predict minimum monthly temperatures gave the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate the losses that frost would cause. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
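The second model's key step, converting a monthly mean and standard deviation of minimum daily temperature into an expected number of frost days under the normality assumption, can be sketched as follows; the station values are hypothetical:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Hypothetical month at one station: daily minimum temperature modeled
# as Normal(mu, sigma) in degC; a frost day is tdmin < 0 degC.
mu, sigma, n_days = 2.0, 3.0, 31

p_frost = norm_cdf((0 - mu) / sigma)   # P(tdmin < 0) on any given day
expected_fd = n_days * p_frost         # expected monthly frost days
```

In the paper's scheme, mu and sigma would come from the ARIMA forecasts and the station's historical monthly standard deviation, respectively.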
De Luca, Stefano; Mangiulli, Tatiana; Merelli, Vera; Conforti, Federica; Velandia Palacio, Luz Andrea; Agostini, Susanna; Spinas, Enrico; Cameriere, Roberto
2016-04-01
The aim of this study is to develop a specific formula for assessing skeletal age in a sample of Italian growing infants and children by measuring the carpals and the epiphyses of the radius and ulna. A sample of 332 X-rays of left hand-wrist bones (130 boys and 202 girls), aged between 1 and 16 years, was analyzed retrospectively. Analysis of covariance (ANCOVA) was applied to study how sex affects the growth of the ratio Bo/Ca in the groups of boys and girls. The regression model, describing age as a linear function of sex and the Bo/Ca ratio for the new Italian sample, yielded the following formula: Age = -1.7702 + 1.0088 g + 14.8166 (Bo/Ca). This model explained 83.5% of the total variance (R(2) = 0.835). The median of the absolute values of residuals (observed age minus predicted age) was -0.38, with a quartile deviation of 2.01 and a standard error of estimate of 1.54. A second sample of 204 Italian children (108 girls and 96 boys), aged between 1 and 16 years, was used to evaluate the accuracy of the specific regression model. A paired-sample t-test was used to analyze the mean differences between skeletal and chronological age. The mean error for girls is 0.00, and the estimated age is slightly underestimated in boys, with a mean error of -0.30 years. The standard deviations are 0.70 years for girls and 0.78 years for boys. The obtained results indicate that there is a high relationship between estimated and chronological ages. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
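The reported regression can be applied directly. The sex coding below (g = 1 for boys, 0 for girls) is an assumption for illustration, as the abstract does not state it, and the Bo/Ca value is hypothetical:

```python
def skeletal_age(g, bo_ca):
    """Regression formula quoted in the abstract.
    g: sex code (assumed here: 1 for boys, 0 for girls).
    bo_ca: ratio of carpal/epiphyseal bone areas to carpal area (Bo/Ca)."""
    return -1.7702 + 1.0088 * g + 14.8166 * bo_ca

# Hypothetical example: a boy with Bo/Ca = 0.55
age = skeletal_age(g=1, bo_ca=0.55)
```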
Calibrated Color and Albedo Maps of Mercury
NASA Astrophysics Data System (ADS)
Robinson, M. S.; Lucey, P. G.
1996-03-01
In order to determine the albedo and color of the mercurian surface, we are completing calibrated mosaics of Mariner 10 image data. A set of clear filter mosaics is being compiled in such a way as to maximize the signal-to-noise ratio of the data and to allow for a quantitative measure of the precision of the data on a pixel-by-pixel basis. Three major imaging sequences of Mercury were acquired by Mariner 10: incoming first encounter (centered at 20S, 2E), outgoing first encounter (centered at 20N, 175E), and southern hemisphere second encounter (centered at 40S, 100E). For each sequence we are making separate mosaics for each camera (A and B) in order to have independent measurements. For each mosaic, regions of overlap from frame-to-frame are being averaged and the attendant standard deviations are being calculated. Due to the highly redundant nature of the data, each pixel in each mosaic will be an average calculated from 1-10 images. Each mosaic will have a corresponding standard deviation and n (number of measurements) map. A final mosaic will be created by averaging the six independent mosaics. This procedure lessens the effects of random noise and calibration residuals. From these data an albedo map will be produced using an improved photometric function for the Moon. A similar procedure is being followed for the lower resolution color sequences (ultraviolet, blue, orange, ultraviolet polarized). These data will be calibrated to absolute units through comparison of Mariner 10 images acquired of the Moon and Jupiter. Spectral interpretation of these new color and albedo maps will be presented with an emphasis on comparison with the Moon.
Adam, Elisabeth H; Zacharowski, Kai; Hintereder, Gudrun; Raimann, Florian; Meybohm, Patrick
2018-06-01
Blood loss due to phlebotomy leads to hospital-acquired anemia and more frequent blood transfusions that may be associated with increased risk of morbidity and mortality in critically ill patients. Multiple blood conservation strategies have been proposed in the context of patient blood management to minimize blood loss. Here, we evaluated a new small-volume sodium citrate collection tube for coagulation testing in critically ill patients. In 46 critically ill adult patients admitted to an interdisciplinary intensive care unit, we prospectively compared small-volume (1.8 mL) sodium citrate tubes with the conventional (3 mL) sodium citrate tubes. The main inclusion criterion was a proven coagulopathy (Quick < 60% and/or aPTT > 40 seconds) due to anticoagulation therapy or perioperative coagulopathy. In total, 92 coagulation analyses were obtained. Linear correlation analysis detected a positive relationship for 7 coagulation parameters (Prothrombin Time, r = 0.987; INR, r = 0.985; activated Partial Thromboplastin Time, r = 0.967; Thrombin Clotting Time, r = 0.969; Fibrinogen, r = 0.986; Antithrombin, r = 0.988; D-dimer, r = 0.969). Bland-Altman analyses revealed an absolute mean of differences of almost zero. Ninety-five percent of data were within two standard deviations of the mean difference, suggesting interchangeability. As systematic deviations between measured parameters of the two tubes were very unlikely, test results of small-volume (1.8 mL) sodium citrate tubes were equal to conventional (3 mL) sodium citrate tubes and can be considered interchangeable. Small-volume sodium citrate tubes reduced unnecessary diagnostic-related blood loss by about 40% and, therefore, should be the new standard of care for routine coagulation analysis in critically ill patients.
Brambilla, Donald J; Miller, Scott T; Adams, Robert J
2007-09-01
Children with sickle cell disease (SCD) are at elevated risk of stroke. Risk increases with blood flow velocity in selected cerebral arteries, as measured by transcranial Doppler (TCD) ultrasound, and use of TCD to screen these patients is widely recommended. Interpretation of TCD results should be based on knowledge of intra-individual variation in blood flow velocity, information not currently available for sickle cell patients. Between 1995 and 2002, 4,141 subjects, 2-16 years old, with homozygous SCD or Sβ0-thalassemia and no history of stroke were screened with TCD, including 2,018 subjects screened in one clinical trial (STOP), 1,816 screened in another (STOP 2), and 307 screened in an interim ancillary prospective study. The 812 subjects with ≥2 examinations <6 months apart were selected for analysis, including 242 (29.8%) subjects with normal average velocities (i.e., <170 cm/sec), 350 (43.1%) subjects with conditional velocities (i.e., 170-199 cm/sec), and 220 (27.1%) subjects with abnormal velocities (i.e., ≥200 cm/sec). The intra-subject standard deviation of TCD velocity was estimated from the difference between velocities at the first two interpretable examinations on each subject. An intra-subject standard deviation of 14.9 cm/sec was obtained. Seven (0.9%) subjects had unusually large and unexplained differences between velocities at the two examinations (range of absolute differences: 69-112 cm/sec). While stroke risk is well demonstrated to increase with increasingly abnormal TCD velocity, given the relatively large intra-subject variability, one TCD examination is generally not sufficient to characterize stroke risk in this patient population. Copyright (c) 2007 Wiley-Liss, Inc.
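Estimating an intra-subject standard deviation from paired first/second examinations, as done here, uses the standard result that a difference of two measurements carries two independent within-subject errors, so Var(difference) = 2·σ_w². A sketch with hypothetical velocities:

```python
from statistics import stdev
import math

# Hypothetical pairs of TCD velocities (cm/sec): first and second
# examination on each subject, <6 months apart.
exam1 = [150, 172, 205, 168, 190, 160, 210, 180]
exam2 = [162, 165, 198, 180, 202, 148, 220, 171]

diffs = [a - b for a, b in zip(exam1, exam2)]
# Var(diff) = 2 * sigma_w^2, hence sigma_w = SD(diff) / sqrt(2).
sigma_w = stdev(diffs) / math.sqrt(2)
```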
NASA Astrophysics Data System (ADS)
Wang, Lei; Moltz, Jan H.; Bornemann, Lars; Hahn, Horst K.
2012-03-01
Precise size measurement of enlarged lymph nodes is a significant indicator for diagnosing malignancy, follow-up and therapy monitoring of cancer diseases. The presence of diverse sizes and shapes, inhomogeneous enhancement, and adjacency to neighboring structures with similar intensities make the segmentation task challenging. We present a semi-automatic approach requiring minimal user interaction to segment enlarged lymph nodes quickly and robustly. First, a stroke approximating the largest diameter of a specific lymph node is drawn manually, from which a volume of interest (VOI) is determined. Second, based on the statistical analysis of the intensities in the dilated stroke area, a region growing procedure is utilized within the VOI to create an initial segmentation of the target lymph node. Third, a rotatable spiral-scanning technique is proposed to resample the 3D boundary surface of the lymph node to a 2D boundary contour in a transformed polar image. The boundary contour is found by seeking the optimal path in the 2D polar image with a dynamic programming algorithm and is eventually transformed back to 3D. Ultimately, the boundary surface of the lymph node is determined using an interpolation scheme followed by post-processing steps. To test the robustness and efficiency of our method, a quantitative evaluation was conducted with a dataset of 315 lymph nodes acquired from 79 patients with lymphoma and melanoma. Compared to the reference segmentations, an average Dice coefficient of 0.88 with a standard deviation of 0.08, and an average absolute surface distance of 0.54 mm with a standard deviation of 0.48 mm, were achieved.
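The Dice coefficient used in the evaluation is straightforward to compute on voxel sets; a toy sketch with two overlapping hypothetical masks, not the study's data:

```python
# Two hypothetical segmentations represented as sets of voxel coordinates:
# 10x10x10 blocks offset by one voxel along x.
seg = {(x, y, z) for x in range(10) for y in range(10) for z in range(10)}
ref = {(x, y, z) for x in range(1, 11) for y in range(10) for z in range(10)}

# Dice = 2 * |intersection| / (|seg| + |ref|)
dice = 2 * len(seg & ref) / (len(seg) + len(ref))
```

The average absolute surface distance reported alongside it would additionally require extracting the boundary voxels of each mask and averaging nearest-neighbor distances between the two surfaces.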
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, collected split surface water samples with Nuclear Fuel Services (NFS) representatives on November 15, 2012. Representatives from the U.S. Nuclear Regulatory Commission and Tennessee Department of Environment and Conservation were also in attendance. Samples were collected at four surface water stations, as required in the approved Request for Technical Assistance number 11-018. These stations included Nolichucky River upstream (NRU), Nolichucky River downstream (NRD), Martin Creek upstream (MCU), and Martin Creek downstream (MCD). Both ORAU and NFS performed gross alpha and gross beta analyses, and the results are compared using the duplicate error ratio (DER), also known as the normalized absolute difference. A DER ≤ 3 indicates that, at a 99% confidence interval, split sample results do not differ significantly when compared to their respective one standard deviation (sigma) uncertainty (ANSI N42.22). The NFS split sample report does not specify the confidence level of reported uncertainties (NFS 2012). Therefore, standard two sigma reporting is assumed and uncertainty values were divided by 1.96. In conclusion, all DER values were less than 3 and results are consistent with low (e.g., background) concentrations.
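The duplicate error ratio comparison described above can be sketched as follows; the results and uncertainties are hypothetical, and per the report a reported two-sigma uncertainty would first be divided by 1.96:

```python
import math

def der(result_a, unc_a, result_b, unc_b):
    """Duplicate error ratio (normalized absolute difference).
    Uncertainties are one-sigma. DER <= 3 means no significant
    difference at ~99% confidence (ANSI N42.22)."""
    return abs(result_a - result_b) / math.sqrt(unc_a ** 2 + unc_b ** 2)

# Hypothetical split-sample gross-alpha results with 1-sigma uncertainties.
d = der(2.4, 0.6, 1.5, 0.5)   # well below 3: no significant difference
```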
PACS photometer calibration block analysis
NASA Astrophysics Data System (ADS)
Moór, A.; Müller, T. G.; Kiss, C.; Balog, Z.; Billot, N.; Marton, G.
2014-07-01
The absolute stability of the PACS bolometer response over the entire mission lifetime without applying any corrections is about 0.5% (standard deviation) or about 8% peak-to-peak. This remarkable stability allows us to calibrate all scientific measurements with a fixed, time-independent response file, without using any information from the PACS internal calibration sources. However, the analysis of calibration block observations revealed clear correlations of the internal source signals with the evaporator temperature and a signal drift during the first half hour after the cooler recycling. These effects are small, but can be seen in repeated measurements of standard stars. From our analysis we established corrections for both effects which push the stability of the PACS bolometer response to about 0.2% (standard deviation) or 2% in the blue, 3% in the green and 5% in the red channel (peak-to-peak). After both corrections we still see a correlation of the signals with PACS FPU temperatures, possibly caused by parasitic heat influences via the Kevlar wires which connect the bolometers with the PACS Focal Plane Unit. No aging effect or degradation of the photometric system during the mission lifetime has been found.
WE-FG-207B-08: Dual-Energy CT Iodine Accuracy Across Vendors and Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobsen, M; Wood, C; Cody, D
Purpose: Although a major benefit of dual-energy CT is its quantitative capabilities, it is critical to understand how results vary by scanner manufacturer and/or model before making clinical patient management decisions. Each manufacturer utilizes a specific dual-energy CT approach; cross-calibration may be required for facilities with more than one dual-energy CT scanner type. Methods: A solid dual-energy quality control phantom (Gammex, Inc.; Appleton, WI) representing a large body cross-section containing three iodine inserts (2 mg/ml, 5 mg/ml, 15 mg/ml) was scanned on these CT systems: GE HD-750 (80/140 kVp), prototype GE Revolution CT with GSI (80/140 kVp), Siemens Flash (80/140 kVp and 100/140 kVp), and Philips IQon (120 kVp and 140 kVp). Iodine content was measured in units of concentration (mg/ml) from a single 5 mm-thick central image. Three to five acquisitions were performed on each scanner platform in order to compute standard deviations. Scan acquisitions were approximately dose-matched (∼25 mGy CTDIvol) and image parameters were as consistent as possible (thickness, kernel, no noise reduction applied). Results: Iodine measurement error ranges were −0.24 to 0.16 mg/ml for the 2 mg/ml insert (−12.0% to 8.0%), −0.28 to 0.26 mg/ml for the 5 mg/ml insert (−5.6% to 5.2%), and −1.16 to 0.99 mg/ml for the 15 mg/ml insert (−7.7% to 6.6%). Standard deviations ranged from 0 to 0.19 mg/ml for the repeated acquisitions from each scanner. The average iodine measurement error and standard deviation across all systems and inserts was −0.21 ± 0.48 mg/ml (−1.5 ± 6.48%). The largest absolute measurement error was found in the 15 mg/ml iodine insert. Conclusion: There was generally good agreement in iodine quantification across 3 dual-energy CT manufacturers and 4 scanner models. This was unexpected given the widely different underlying dual-energy CT mechanisms employed.
Future work will include additional scanner platforms, independent verification of the iodine insert standard concentrations (especially the 15 mg/ml insert), and how much measurement variability can be clinically tolerated. This research has been supported by funds from Dr. William Murphy, Jr., the John S. Dunn, Sr. Distinguished Chair in Diagnostic Imaging at MD Anderson Cancer Center.
ERIC Educational Resources Information Center
Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin
2007-01-01
Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…
Comparative study of navigated versus freehand osteochondral graft transplantation of the knee.
Koulalis, Dimitrios; Di Benedetto, Paolo; Citak, Mustafa; O'Loughlin, Padhraig; Pearle, Andrew D; Kendoff, Daniel O
2009-04-01
Osteochondral lesions are a common sports-related injury for which osteochondral grafting, including mosaicplasty, is an established treatment. Computer navigation has been gaining popularity in orthopaedic surgery to improve accuracy and precision. Navigation improves angle and depth matching during harvest and placement of osteochondral grafts compared with the conventional freehand open technique. Controlled laboratory study. Three cadaveric knees were used. Reference markers were attached to the femur, tibia, and donor/recipient site guides. Fifteen osteochondral grafts were harvested and inserted into recipient sites with computer navigation, and 15 similar grafts were inserted freehand. The angles of graft removal and placement as well as surface congruity (graft depth) were calculated for each surgical group. The mean harvesting angle at the donor site using navigation was 4° (standard deviation, 2.3°; range, 1°-9°) versus 12° (standard deviation, 5.5°; range, 5°-24°) using the freehand technique (P < .0001). The recipient plug removal angle using the navigated technique was 3.3° (standard deviation, 2.1°; range, 0°-9°) versus 10.7° (standard deviation, 4.9°; range, 2°-17°) freehand (P < .0001). The mean navigated recipient plug placement angle was 3.6° (standard deviation, 2.0°; range, 1°-9°) versus 10.6° (standard deviation, 4.4°; range, 3°-17°) with the freehand technique (P = .0001). The mean height of plug protrusion under navigation was 0.3 mm (standard deviation, 0.2 mm; range, 0-0.6 mm) versus 0.5 mm (standard deviation, 0.3 mm; range, 0.2-1.1 mm) using the freehand technique (P = .0034). Significantly greater accuracy and precision were observed in harvesting and placement of the osteochondral grafts in the navigated procedures.
Clinical studies are needed to establish a benefit in vivo. Improvement in the osteochondral harvest and placement is desirable to optimize clinical outcomes. Navigation shows great potential to improve both harvest and placement precision and accuracy, thus optimizing ultimate surface congruity.
NASA Astrophysics Data System (ADS)
Krajíček, Zdeněk; Bergoglio, Mercede; Jousten, Karl; Otal, Pierre; Sabuga, Wladimir; Saxholm, Sari; Pražák, Dominik; Vičar, Martin
2014-01-01
This report describes a EURAMET comparison of five European National Metrology Institutes in low gauge and absolute pressure in gas (nitrogen), denoted EURAMET.M.P-K4.2010. Its main intention is to establish the equivalence of the pressure standards, in particular those based on the technology of force-balanced piston gauges, such as the FRS by Furness Controls, UK, and the FPG8601 by DHI-Fluke, USA. It covers the range from 1 Pa to 15 kPa, both gauge and absolute. The comparison in absolute mode serves as a EURAMET Key Comparison which can be linked to CCM.P-K4 and CCM.P-K2 via PTB. The comparison in gauge mode is a supplementary comparison. The comparison was carried out from September 2008 until October 2012. The participating laboratories were: CMI, INRIM, LNE, MIKES, PTB-Berlin (absolute pressure 1 kPa and below) and PTB-Braunschweig (absolute pressure 1 kPa and above, and gauge pressure). CMI was the pilot laboratory and provided the transfer standard for the comparison. The transfer standard was simultaneously the laboratory standard of CMI, which resulted in a unique and logistically demanding star comparison. In both gauge and absolute pressures, all the participating institutes successfully proved their equivalence with respect to the reference value, and all also proved mutual bilateral equivalence at all points. All the participating laboratories are also equivalent with the reference values of CCM.P-K4 and CCM.P-K2 at the relevant points. The comparison also proved the ability of the FPG8601 to serve as a transfer standard. The final report appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org) and has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Siebenhaar, Markus; Küllmer, Kai; Fernandes, Nuno Miguel de Barros; Hüllen, Volker; Hopf, Carsten
2015-09-01
Desorption electrospray ionization (DESI) mass spectrometry is an emerging technology for direct therapeutic drug monitoring in dried blood spots (DBS). Current DBS methods require manual application of small molecules as internal standards for absolute drug quantification. With industrial standardization in mind, we superseded the manual addition of the internal standard and built a three-layer setup for robust quantification of salicylic acid directly from DBS. We combined a dioctyl sodium sulfosuccinate weave facilitating sample spreading with a cellulose layer for addition of isotope-labeled salicylic acid as internal standard and a filter paper for analysis of the standard-containing sample by DESI-MS. Using this setup, we developed a quantification method for salicylic acid from whole blood with a validated linear curve range from 10 to 2000 mg/L, a relative standard deviation (RSD%) ≤14%, and coefficients of determination of 0.997. The limit of detection (LOD) was 8 mg/L and the lower limit of quantification (LLOQ) was 10 mg/L. Recovery rates in method verification by LC-MS/MS were 97 to 101% for blinded samples. Most importantly, a study in healthy volunteers after administration of a single dose of Aspirin provides evidence that the three-layer setup may enable individual pharmacokinetic and endpoint testing following blood collection by finger pricking by patients at home. Taken together, our data suggest that DBS-based quantification of drugs by DESI-MS on pre-manufactured three-layer cartridges may be a promising approach for future near-patient therapeutic drug monitoring.
40 CFR 63.705 - Performance test methods and procedures to determine initial compliance.
Code of Federal Regulations, 2014 CFR
2014-07-01
... per gram-mole. Pi = Barometric pressure at the time of sample analysis, millimeters mercury absolute. 760 = Reference or standard pressure, millimeters mercury absolute. 293 = Reference or standard...: ER15DE94.005 (i) The value of RSi is zero unless the owner or operator submits the following information to...
40 CFR 63.705 - Performance test methods and procedures to determine initial compliance.
Code of Federal Regulations, 2012 CFR
2012-07-01
... per gram-mole. Pi = Barometric pressure at the time of sample analysis, millimeters mercury absolute. 760 = Reference or standard pressure, millimeters mercury absolute. 293 = Reference or standard...: ER15DE94.005 (i) The value of RSi is zero unless the owner or operator submits the following information to...
40 CFR 63.705 - Performance test methods and procedures to determine initial compliance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... per gram-mole. Pi = Barometric pressure at the time of sample analysis, millimeters mercury absolute. 760 = Reference or standard pressure, millimeters mercury absolute. 293 = Reference or standard...: ER15DE94.005 (i) The value of RSi is zero unless the owner or operator submits the following information to...
40 CFR 63.705 - Performance test methods and procedures to determine initial compliance.
Code of Federal Regulations, 2013 CFR
2013-07-01
... per gram-mole. Pi = Barometric pressure at the time of sample analysis, millimeters mercury absolute. 760 = Reference or standard pressure, millimeters mercury absolute. 293 = Reference or standard...: ER15DE94.005 (i) The value of RSi is zero unless the owner or operator submits the following information to...
40 CFR 63.705 - Performance test methods and procedures to determine initial compliance.
Code of Federal Regulations, 2011 CFR
2011-07-01
... per gram-mole. Pi = Barometric pressure at the time of sample analysis, millimeters mercury absolute. 760 = Reference or standard pressure, millimeters mercury absolute. 293 = Reference or standard...: ER15DE94.005 (i) The value of RSi is zero unless the owner or operator submits the following information to...
Calderón-Celis, Francisco; Sanz-Medel, Alfredo; Encinar, Jorge Ruiz
2018-01-23
We present a novel and highly sensitive ICP-MS approach for absolute quantification of all important target biomolecules containing P, S, Se, As, Br, and/or I (e.g., proteins and phosphoproteins, metabolites, pesticides, drugs), under the same simple instrumental conditions and without requiring any specific and/or isotopically enriched standard.
Variance computations for functionals of absolute risk estimates.
Pfeiffer, R M; Petracci, E
2011-07-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
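The abstract benchmarks influence-function variance estimates against the bootstrap. A minimal stdlib-Python sketch of the bootstrap side of that comparison, using a hypothetical 0/1 event cohort and a toy proportion-based risk estimate (not the paper's breast-cancer model, whose risk function involves modifiable covariates):

```python
import random
import statistics

def absolute_risk(cohort):
    """Toy absolute-risk estimator: the observed event proportion.
    Stands in for a model-based risk estimate (hypothetical)."""
    return sum(cohort) / len(cohort)

def bootstrap_variance(cohort, n_boot=2000, seed=42):
    """Nonparametric bootstrap variance of the risk estimate:
    resample the cohort with replacement, re-estimate, take the
    sample variance of the resampled estimates."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        resample = [rng.choice(cohort) for _ in cohort]
        estimates.append(absolute_risk(resample))
    return statistics.variance(estimates)

# 0/1 event indicators for a small synthetic cohort (15% observed risk)
cohort = [1] * 30 + [0] * 170
var_boot = bootstrap_variance(cohort)
# For a plain proportion, the analytic benchmark is p(1-p)/n
var_analytic = 0.15 * 0.85 / 200
```

For this toy estimator the bootstrap variance should land close to the binomial formula; the point of the paper's influence-function approach is to get comparable variances without the resampling cost.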
Variance computations for functionals of absolute risk estimates
Pfeiffer, R.M.; Petracci, E.
2011-01-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates. PMID:21643476
The gap technique does not rotate the femur parallel to the epicondylar axis.
Matziolis, Georg; Boenicke, Hinrich; Pfiel, Sascha; Wassilew, Georgi; Perka, Carsten
2011-02-01
In the analysis of painful total knee replacements, the surgical epicondylar axis (SEA) has become established as a standard in the diagnosis of femoral component rotation. It remains unclear whether the gap technique widely used to determine femoral rotation, when applied correctly, results in a rotation parallel to the SEA. In this prospective study, 69 patients (69 joints) were included who received a navigated bicondylar surface replacement due to primary arthritis of the knee joint. In 67 cases in which perfect soft-tissue balancing of the extension gap (<1° asymmetry) was achieved, the flexion gap and the rotation of the femoral component necessary for its symmetry were determined and documented. The femoral component was implanted additionally taking into account the posterior condylar axis and Whiteside's line. Postoperatively, the rotation of the femoral component relative to the SEA was determined and used to calculate the angle between a femur implanted according to the gap technique and the SEA. If the gap technique had been used consistently, it would have resulted in a deviation of the femoral components of -0.6° ± 2.9° (range, -7.4° to 5.9°) from the SEA. The absolute deviation would have been 2.4° ± 1.8°, with a range between 0.2° and 7.4°. Even if the extension gap is perfectly balanced, the gap technique does not lead to a parallel alignment of the femoral component to the SEA. Since the clinical results of this technique are equivalent to those of the femur-first technique in the literature, an evaluation of this deviation as a malalignment must be considered critically.
Error analysis regarding the calculation of nonlinear force-free field
NASA Astrophysics Data System (ADS)
Liu, S.; Zhang, H. Q.; Su, J. T.
2012-02-01
Magnetic field extrapolation is an alternative method to study chromospheric and coronal magnetic fields. In this paper, two semi-analytical solutions of force-free fields (Low and Lou, Astrophys. J. 352:343, 1990) have been used to study the errors of nonlinear force-free (NLFF) fields based on the force-free factor α. Three NLFF fields are extrapolated by the approximate vertical integration (AVI; Song et al., Astrophys. J. 649:1084, 2006), boundary integral equation (BIE; Yan and Sakurai, Sol. Phys. 195:89, 2000) and optimization (Opt.; Wiegelmann, Sol. Phys. 219:87, 2004) methods. Compared with the first semi-analytical field, it is found that the mean values of the absolute relative standard deviations (RSD) of α along field lines are about 0.96-1.19, 0.63-1.07 and 0.43-0.72 for the AVI, BIE and Opt. fields, respectively. For the second semi-analytical field, they are about 0.80-1.02, 0.67-1.34 and 0.33-0.55 for the AVI, BIE and Opt. fields, respectively. For the analytical field itself, the calculation error of <|RSD|> is about 0.1-0.2. It is also found that RSD does not apparently depend on the length of the field line. These results provide a basic estimate of how far the extrapolated fields obtained by the proposed methods deviate from a truly force-free field.
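The quality measure above is the relative standard deviation of α sampled along one field line; in an exactly force-free field α is constant along each line, so the RSD is zero. A small sketch of that metric (the choice of population standard deviation is our assumption; the paper does not spell out the normalization):

```python
import statistics

def alpha_rsd(alpha_along_line):
    """Relative standard deviation (RSD) of the force-free factor
    alpha sampled along a single field line: std / |mean|.
    RSD = 0 for an exactly force-free line; larger values flag
    stronger departures from the force-free condition."""
    mean = statistics.fmean(alpha_along_line)
    if mean == 0.0:
        raise ValueError("mean alpha is zero; RSD is undefined")
    return statistics.pstdev(alpha_along_line) / abs(mean)
```

The per-field scores quoted in the abstract would then be averages of `alpha_rsd` over many extrapolated field lines.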
Matrix Summaries Improve Research Reports: Secondary Analyses Using Published Literature
ERIC Educational Resources Information Center
Zientek, Linda Reichwein; Thompson, Bruce
2009-01-01
Correlation matrices and standard deviations are the building blocks of many of the commonly conducted analyses in published research, and AERA and APA reporting standards recommend their inclusion when reporting research results. The authors argue that the inclusion of correlation/covariance matrices, standard deviations, and means can enhance…
30 CFR 74.8 - Measurement, accuracy, and reliability requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... concentration, as defined by the relative standard deviation of the distribution of measurements. The relative standard deviation shall be less than 0.1275 without bias for both full-shift measurements of 8 hours or... Standards, Regulations, and Variances, 1100 Wilson Boulevard, Room 2350, Arlington, Virginia 22209-3939...
The effects of auditory stimulation with music on heart rate variability in healthy women.
Roque, Adriano L; Valenti, Vitor E; Guida, Heraldo L; Campos, Mônica F; Knap, André; Vanderlei, Luiz Carlos M; Ferreira, Lucas L; Ferreira, Celso; Abreu, Luiz Carlos de
2013-07-01
There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women was exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: the triangular index; the triangular interpolation of RR intervals; the Poincaré plot indices SD1 (standard deviation of instantaneous beat-by-beat variability), SD2 (standard deviation of the long-term RR intervals) and the SD1/SD2 ratio; low frequency; high frequency; the low frequency/high frequency ratio; the standard deviation of all normal RR intervals; the root-mean square of differences between adjacent normal RR intervals; and the percentage of adjacent RR intervals with a difference in duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes.
The triangular index and SD2 were reduced during exposure to both music styles in the first group and tended to decrease in the second group, whereas white noise exposure decreased the high frequency index. We observed no changes in the triangular interpolation of RR intervals, SD1 or the SD1/SD2 ratio. We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level.
The effects of auditory stimulation with music on heart rate variability in healthy women
Roque, Adriano L.; Valenti, Vitor E.; Guida, Heraldo L.; Campos, Mônica F.; Knap, André; Vanderlei, Luiz Carlos M.; Ferreira, Lucas L.; Ferreira, Celso; de Abreu, Luiz Carlos
2013-01-01
OBJECTIVES: There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. METHODS: We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women was exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: the triangular index; the triangular interpolation of RR intervals; the Poincaré plot indices SD1 (standard deviation of instantaneous beat-by-beat variability), SD2 (standard deviation of the long-term RR intervals) and the SD1/SD2 ratio; low frequency; high frequency; the low frequency/high frequency ratio; the standard deviation of all normal RR intervals; the root-mean square of differences between adjacent normal RR intervals; and the percentage of adjacent RR intervals with a difference in duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes.
RESULTS: The triangular index and SD2 were reduced during exposure to both music styles in the first group and tended to decrease in the second group, whereas white noise exposure decreased the high frequency index. We observed no changes in the triangular interpolation of RR intervals, SD1 or the SD1/SD2 ratio. CONCLUSION: We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level. PMID:23917660
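Several of the time-domain HRV indices listed in this abstract have standard textbook formulas: SDNN (standard deviation of all normal RR intervals), RMSSD (root-mean square of successive differences) and pNN50 (percentage of adjacent intervals differing by more than 50 ms). A minimal sketch using those standard definitions, on a hypothetical RR series, not the study's data (the geometric and Poincaré indices are not reproduced here):

```python
import math
import statistics

def hrv_time_domain(rr_ms):
    """Standard time-domain HRV indices from a series of normal
    RR intervals in milliseconds."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    sdnn = statistics.stdev(rr_ms)                       # SD of all NN intervals
    rmssd = math.sqrt(statistics.fmean(d * d for d in diffs))
    pnn50 = 100.0 * sum(abs(d) > 50 for d in diffs) / len(diffs)
    return {"SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50}

# Hypothetical 5-beat RR series, in ms
indices = hrv_time_domain([800, 810, 790, 860, 800])
```

Lower values of these indices across a recording correspond to the reduced global heart rate variability the study reports during music exposure.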
USL/DBMS NASA/PC R and D project C programming standards
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Moreau, Dennis R.
1984-01-01
A set of programming standards intended to promote reliability, readability, and portability of C programs written for PC research and development projects is established. These standards must be adhered to except where reasons for deviation are clearly identified and approved by the PC team. Any approved deviation from these standards must also be clearly documented in the pertinent source code.
SU-F-T-552: A One-Year Evaluation of the QABeamChecker+ for Use with the CyberKnife System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gersh, J; Spectrum Medical Physics, LLC, Greenville, SC
Purpose: By attaching an adapter plate with fiducial markers to the QA BeamChecker+ (Standard Imaging, Inc., Middleton, WI), the output of the CyberKnife can be accurately, efficiently, and consistently evaluated. The adapter plate, known as the Cutting Board, allows for automated alignment of the QABC+ using the CK's stereoscopic kV image-based treatment localization system (TLS). Described herein is an evaluation of the system following a year of clinical utilization. Methods: Based on a CT scan of the QABC+ and CB, a treatment plan is generated which delivers a beam to each of the 5 plane-parallel ionization chambers. Following absolute calibration of the CK, the QA plan is delivered, and baseline measurements are acquired (and automatically corrected for temperature and pressure). This test was performed at the beginning of each treatment day for a year. A calibration evaluation (using a water-equivalent slab and short thimble chamber) is performed every four weeks, or whenever the QABC+ detects a deviation of more than 1.0%. Results: During baseline evaluation, repeat measurements (n = 10) were performed, with an average output deviation of 0.25% and an SD of 0.11%. As a test of the repositioning of the QABC+ and CB, ten additional measurements were performed in which, between each acquisition, the entire system was removed and re-positioned using the TLS. The average output deviation was 0.30% with an SD of 0.13%. During the course of the year, 187 QABC+ measurements and 13 slab-based measurements were performed. The output measurements of the QABC+ correlated well with slab-based measurements (R2 = 0.909). Conclusion: By using the QABC+ and CB, daily output was evaluated accurately, efficiently, and consistently. From setup to break-down (including analysis), this test required 5 minutes instead of approximately 15 using traditional techniques (collimator-mounted ionization chambers).
Additionally, by automatically saving the resultant output deviation to a database, trend analysis was simplified. Spectrum Medical Physics, LLC of Greenville, SC has a consulting contract with Standard Imaging of Middleton, WI.
Water Quality Assessment using Satellite Remote Sensing
NASA Astrophysics Data System (ADS)
Haque, Saad Ul
2016-07-01
The two main global issues related to water are its declining quality and quantity. Population growth, industrialization, expansion of agricultural land and urbanization are the main causes that confront inland water bodies with increasing water demand. The quality of surface water has also been degraded in many countries over the past few decades due to inputs of nutrients and sediments, especially in lakes and reservoirs. Since water is essential not only for meeting human needs but also for maintaining natural ecosystem health and integrity, there are efforts worldwide to assess and restore the quality of surface waters. Remote sensing techniques provide a tool for continuous water quality information in order to identify and minimize sources of pollutants that are harmful to human and aquatic life. The proposed methodology is focused on assessing the quality of water at selected lakes in Pakistan (Sindh), namely HUBDAM, KEENJHAR LAKE, HALEEJI and HADEERO. These lakes are drinking water sources for several major cities of Pakistan, including Karachi. Satellite imagery of Landsat 7 (ETM+) is used to identify the variation in water quality of these lakes in terms of their optical properties. All bands of the Landsat 7 (ETM+) image are analyzed to select only those that may be correlated with some water quality parameters (e.g. suspended solids, chlorophyll a). The Optimum Index Factor (OIF) developed by Chavez et al. (1982) is used for selection of the optimum combination of bands. The OIF is calculated by dividing the sum of the standard deviations of any three bands by the sum of their respective correlation coefficients (absolute values). It is assumed that a band with a higher standard deviation contains more 'information' than other bands. Therefore, OIF values are ranked and the three bands with the highest OIF are selected for visual interpretation. A color composite image is created using these three bands.
The water quality of these lakes is assessed by comparing their reflectance values with the spectral signatures of distilled water. Water quality maps of these lakes are prepared in terms of these deviations. The results of the study can be utilized for preliminary water quality monitoring of the selected lakes.
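The OIF formula described above (sum of three band standard deviations divided by the sum of the absolute pairwise correlations) is easy to sketch directly. The following stdlib-Python version ranks every 3-band combination; the tiny synthetic `bands` dictionary stands in for flattened Landsat 7 ETM+ pixel arrays, which in practice would come from real imagery via numpy or rasterio:

```python
import itertools
import math
import statistics

def pearson(x, y):
    """Pearson correlation coefficient (plain Python, no numpy)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rank_by_oif(bands):
    """Rank every 3-band combination by the Optimum Index Factor:
    OIF = (sum of the 3 band standard deviations)
          / (sum of the 3 absolute pairwise correlations).
    `bands` maps band name -> flattened pixel values."""
    ranked = []
    for trio in itertools.combinations(sorted(bands), 3):
        sd_sum = sum(statistics.stdev(bands[b]) for b in trio)
        corr_sum = sum(abs(pearson(bands[a], bands[b]))
                       for a, b in itertools.combinations(trio, 2))
        ranked.append((sd_sum / corr_sum, trio))
    ranked.sort(reverse=True)
    return ranked

# Tiny synthetic "bands" (hypothetical pixel values, not Landsat data)
bands = {
    "b1": [1, 2, 3, 4, 5],
    "b2": [2, 4, 6, 8, 10],   # perfectly correlated with b1
    "b3": [5, 3, 8, 1, 9],
    "b4": [10, 1, 7, 2, 6],
}
best_oif, best_trio = rank_by_oif(bands)[0]
```

High standard deviations push a triple up the ranking while high mutual correlations (redundant bands, like b1 and b2 here) push it down, which is exactly the "most information, least redundancy" intent of the OIF.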
Ran, Yang; Su, Rongtao; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Si, Lei
2016-05-10
We present a new quantitative index, the standard deviation, to measure the homogeneity of spectral lines in a fiber amplifier system, so as to find the relation between the stimulated Brillouin scattering (SBS) threshold and the homogeneity of the corresponding spectral lines. A theoretical model is built and a simulation framework has been established to estimate the SBS threshold when input spectra with different homogeneities are set. In our experiment, by setting the phase modulation voltage to a constant value and the modulation frequency to different values, spectral lines with different homogeneities can be obtained. The experimental results show that the SBS threshold is negatively correlated with the standard deviation of the modulated spectrum, which is in good agreement with the theoretical results. When the phase modulation voltage is confined to 10 V and the modulation frequency is set to 80 MHz, the standard deviation of the modulated spectrum equals 0.0051, the lowest value in our experiment; at this setting, the highest SBS threshold is achieved. This standard deviation can be a good quantitative index for evaluating the power scaling potential of a fiber amplifier system, and is also a design guideline for suppressing SBS to a better degree.
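The homogeneity index here is a standard deviation over the spectral lines: a flatter comb (power shared evenly among lines) gives a smaller value and, per the abstract, a higher SBS threshold. A minimal sketch under an assumed normalization (unit total power; the paper's exact formula is not given in the abstract):

```python
import statistics

def spectral_sd(line_powers):
    """Homogeneity index for a phase-modulated spectrum: the
    population standard deviation of the spectral-line powers
    after normalizing them to unit total power.  A perfectly
    flat comb gives 0; power concentrated in few lines gives
    a larger value."""
    total = sum(line_powers)
    normalized = [p / total for p in line_powers]
    return statistics.pstdev(normalized)

flat = spectral_sd([1.0, 1.0, 1.0, 1.0])    # evenly shared power
peaked = spectral_sd([1.0, 0.0, 0.0, 0.0])  # single-line limit
```

Under this convention, lower `spectral_sd` corresponds to the regime the paper links to better SBS suppression.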
SU-E-T-120: Dosimetric Characteristics Study of NanoDot™ for In-Vivo Dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hussain, A; Wasaye, A; Gohar, R
Purpose: The purpose of the study was to analyze the dosimetric characteristics (energy dependence, reproducibility and dose linearity) of nanoDot™ optically stimulated luminescence dosimeters (OSLDs) and validate their potential use during in-vivo dosimetry, specifically TBI. The manufacturer-stated accuracy is ±10% for the standard nanoDot™. Methods: At AKUH, the InLight microStar OSL dosimetry system has been in use for patient in-vivo dosimetry since 2012. Twenty-five standard nanoDot™ dosimeters were used in the analysis. Sensitivity and reproducibility were tested in the first part with 6 MV and 18 MV Varian x-ray beams. Each OSLD was irradiated to a 100 cGy dose at nominal SSD (100 cm). All the OSLDs were read 3 times for an average reading. Dose linearity and calibration were also performed with the same beams in the common clinical dose range of 0-500 cGy. In addition, verification of TBI absolute dose at extended SSD (500 cm) was also performed. Results: The reproducibility observed with the OSLDs was better than the manufacturer-stated limits. Measured doses varied by less than ±2% in 19 (76%) OSLDs, and by less than ±3% in 6 (24%) OSLDs. Their sensitivity was approximately 525 counts per cGy. Good agreement was observed between measurements, with a standard deviation of 1.8%. A linear dose response was observed with the OSLDs for both 6 and 18 MV beams in the 0-500 cGy dose range. TBI measured doses at 500 cm SSD were also confirmed to be within ±0.5% and ±1.3% of the ion chamber measured doses for the 6 and 18 MV beams, respectively. Conclusion: The dosimetric results demonstrate that the nanoDot™ can potentially be used for in-vivo dosimetry verification in various clinical situations, with a high degree of accuracy and precision. In addition, the OSLDs exhibit good dose reproducibility, with a standard deviation of 1.8%. There was no significant difference in their response to 6 and 18 MV beams. The dose response was also linear.
Forecasting of Water Consumption Expenditure Using Holt-Winters and ARIMA
NASA Astrophysics Data System (ADS)
Razali, S. N. A. M.; Rusiman, M. S.; Zawawi, N. I.; Arbin, N.
2018-04-01
This study was carried out to forecast the water consumption expenditure of a Malaysian university, specifically University Tun Hussein Onn Malaysia (UTHM). The proposed Holt-Winters and Auto-Regressive Integrated Moving Average (ARIMA) models were applied to forecast the water consumption expenditure in Ringgit Malaysia from 2006 until 2014. The two models were compared using the performance measures Mean Absolute Percentage Error (MAPE) and Mean Absolute Deviation (MAD). It was found that the ARIMA model gave more accurate forecasts, with lower values of MAPE and MAD. The analysis showed that the ARIMA(2,1,4) model provides a reasonable forecasting tool for university campus water usage.
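The two error measures used to compare the models have simple standard definitions, which can be sketched as follows; the `actual`/`forecast` figures are hypothetical illustrations, not the UTHM expenditure data:

```python
import statistics

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * statistics.fmean(
        abs((a - f) / a) for a, f in zip(actual, forecast))

def mad(actual, forecast):
    """Mean Absolute Deviation of the forecast errors."""
    return statistics.fmean(abs(a - f) for a, f in zip(actual, forecast))

# Hypothetical monthly expenditure (RM) and model forecasts
actual   = [100.0, 200.0, 400.0]
forecast = [110.0, 190.0, 380.0]
```

The model with the lower MAPE and MAD on held-out periods would be preferred, which is how the study selects ARIMA over Holt-Winters.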
MetaMQAP: a meta-server for the quality assessment of protein models.
Pawlowski, Marcin; Gajda, Michal J; Matlak, Ryszard; Bujnicki, Janusz M
2008-09-29
Computational models of protein structure are usually inaccurate and exhibit significant deviations from the true structure. The utility of models depends on the degree of these deviations. A number of predictive methods have been developed to discriminate between globally incorrect and approximately correct models. However, only a few methods predict the correctness of different parts of computational models. Several Model Quality Assessment Programs (MQAPs) have been developed to detect local inaccuracies in unrefined crystallographic models, but it is not known if they are useful for computational models, which usually exhibit different and much more severe errors. The ability to identify local errors in models was tested for eight MQAPs: VERIFY3D, PROSA, BALA, ANOLEA, PROVE, TUNE, REFINER and PROQRES, on 8251 models from the CASP-5 and CASP-6 experiments, by calculating Spearman's rank correlation coefficients between the per-residue scores of these methods and the local deviations between C-alpha atoms in the models vs. experimental structures. As a reference, we calculated the correlation between the local deviations and trivial features that can be calculated for each residue directly from the models, i.e. solvent accessibility, depth in the structure, and the number of local and non-local neighbours. We found that the absolute correlations between the scores returned by the MQAPs and the local deviations were poor for all methods. In addition, the scores of PROQRES and several other MQAPs correlate strongly with 'trivial' features. Therefore, we developed MetaMQAP, a meta-predictor based on a multivariate regression model, which uses the scores of the above-mentioned methods but in which the trivial parameters are controlled for. MetaMQAP predicts the absolute deviation (in ångströms) of individual C-alpha atoms between the model and the unknown true structure, as well as global deviations (expressed as root mean square deviation and GDT_TS scores).
Local model accuracy predicted by MetaMQAP shows an impressive correlation coefficient of 0.7 with true deviations from native structures, a significant improvement over all constituent primary MQAP scores. The global MetaMQAP score is correlated with model GDT_TS at the level of 0.89. Finally, we compared our method with the MQAPs that scored best in the 7th edition of CASP, using CASP7 server models (not included in the MetaMQAP training set) as the test data. In our benchmark, MetaMQAP is outperformed only by PCONS6 and method QA_556, methods that require comparison of multiple alternative models and score each of them depending on its similarity to other models. MetaMQAP is, however, the best among methods capable of evaluating just single models. We implemented MetaMQAP as a web server available for free use by all academic users at https://genesilico.pl/toolkit/.
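The benchmarking above rests on Spearman's rank correlation between per-residue MQAP scores and true C-alpha deviations. A minimal sketch of that statistic (Pearson correlation applied to ranks; this simple version gives tied values arbitrary consecutive ranks, whereas a production implementation such as `scipy.stats.spearmanr` assigns mean ranks to ties):

```python
import math

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    mx = sum(rx) / len(rx)
    my = sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)
```

Applied per model, `x` would be the MQAP's per-residue scores and `y` the per-residue C-alpha deviations; a good local-quality predictor yields correlations near 1 in magnitude.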
NetCDF file of the SREF standard deviation of wind speed and direction that was used to inject variability in the FDDA input. Variable U_NDG_OLD contains the standard deviation of wind speed (m/s); variable V_NDG_OLD contains the standard deviation of wind direction (deg). This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. Journal of Geophysical Research-Atmospheres, 120(23): 12,259-12,280 (2015).
Analysis of standard reference materials by absolute INAA
NASA Astrophysics Data System (ADS)
Heft, R. E.; Koszykowski, R. F.
1981-07-01
Three standard reference materials, fly ash, soil, and AISI 4340 steel, were analyzed by a method of absolute instrumental neutron activation analysis. Two different light-water pool-type reactors produced equivalent analytical results even though the epithermal-to-thermal flux ratio in one reactor was higher than that in the other by a factor of two.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-08
... conventional rounding rules, emission totals listed in Tables 1 and 2 may not reflect the absolute mathematical... absolute mathematical totals. As shown in Table 2 above, the Nashville Area is projected to steadily...-based standard, the air quality design value is simply the standard- related test statistic. Thus, for...
NIST Stars: Absolute Spectrophotometric Calibration of Vega and Sirius
NASA Astrophysics Data System (ADS)
Deustua, Susana; Woodward, John T.; Rice, Joseph P.; Brown, Steven W.; Maxwell, Stephen E.; Alberding, Brian G.; Lykke, Keith R.
2018-01-01
Absolute flux calibration of standard stars, traceable to SI (International System of Units) standards, is essential for 21st century astrophysics. Dark energy investigations that rely on observations of Type Ia supernovae and precise photometric redshifts of weakly lensed galaxies require a minimum accuracy of 0.5 % in the absolute color calibration. Studies that aim to address fundamental stellar astrophysics also benefit. In the era of large telescopes and all-sky surveys, well-calibrated standard stars that do not saturate and that are available over the whole sky are needed. Significant effort has been expended to obtain absolute measurements of the fundamental standards Vega and Sirius (and other stars) in the visible and near infrared, achieving total uncertainties between 1 % and 3 %, depending on wavelength, that do not meet the needed accuracy. The NIST Stars program aims to determine the top-of-the-atmosphere absolute spectral irradiance of bright stars to an uncertainty of less than 1 % from a ground-based observatory. NIST Stars has developed a novel, fully SI-traceable laboratory calibration strategy that will enable achieving the desired accuracy. This strategy has two key components. The first is the SI-traceable calibration of the entire instrument system, and the second is the repeated spectroscopic measurement of the target star throughout the night. We will describe our experimental strategy, present preliminary results for Vega and Sirius, and present an end-to-end uncertainty budget.
Utrillas, María P; Marín, María J; Esteve, Anna R; Estellés, Victor; Tena, Fernando; Cañada, Javier; Martínez-Lozano, José A
2009-01-01
Values of measured and modeled diffuse UV erythemal irradiance (UVER) for all sky conditions are compared on planes inclined at 40 degrees and oriented north, south, east and west. The models used for simulating diffuse UVER are of the geometric type, mainly the Isotropic, Klucher, Hay, Muneer, Reindl and Schauberger models. To analyze the precision of the models, statistical estimators were used such as the root mean square deviation, mean absolute deviation and mean bias deviation. All the analyzed models adequately reproduce the diffuse UVER on the south-facing plane, with greater discrepancies for the other inclined planes. When the models are applied to cloud-free conditions, the errors obtained are higher because the anisotropy of the sky dome becomes more important and the models no longer estimate diffuse UVER accurately.
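The three estimators named here are standard model-evaluation statistics; the sketch below (with hypothetical values and our own function names, not the paper's data) shows how they are typically computed:

```python
import math

def rmsd(measured, modeled):
    # Root mean square deviation between paired observations
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(modeled, measured)) / len(measured))

def mad(measured, modeled):
    # Mean absolute deviation
    return sum(abs(m - o) for m, o in zip(modeled, measured)) / len(measured)

def mbd(measured, modeled):
    # Mean bias deviation (positive means the model overestimates on average)
    return sum(m - o for m, o in zip(modeled, measured)) / len(measured)

measured = [100.0, 120.0, 90.0]   # hypothetical UVER observations
modeled  = [110.0, 115.0, 95.0]   # hypothetical model output
print(round(mbd(measured, modeled), 3))  # 3.333
```

Unlike RMSD and MAD, the MBD keeps the sign of the errors, so systematic over- or under-estimation is visible even when the magnitudes are similar.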
Vapor-liquid equilibria for an R134a/lubricant mixture: Measurements and equation-of-state modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huber, M.L.; Holcomb, C.D.; Outcalt, S.L.
2000-07-01
The authors measured bubble point pressures and coexisting liquid densities for two mixtures of R-134a and a polyolester (POE) lubricant. The mass fraction of the lubricant was approximately 9% and 12%, and the temperature ranged from 280 K to 355 K. The authors used the Elliott, Suresh, and Donohue (ESD) equation of state to model the bubble point pressure data. The bubble point pressures were represented with an average absolute deviation of 2.5%. A binary interaction parameter reduced the deviation to 1.4%. The authors also applied the ESD model to other R-134a/POE lubricant data in the literature. As the concentration of the lubricant increased, the performance of the model deteriorated markedly. However, the use of a single binary interaction parameter reduced the deviations significantly.
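The average absolute deviation quoted for the bubble point pressures is conventionally expressed in percent of the experimental value; a minimal sketch with hypothetical pressures (not the paper's data):

```python
def aad_percent(calculated, experimental):
    # Average absolute deviation, in percent of the experimental value
    n = len(experimental)
    return 100.0 * sum(abs(c - e) / e for c, e in zip(calculated, experimental)) / n

# hypothetical bubble point pressures in kPa: model vs. measurement
print(round(aad_percent([510.0, 748.0], [500.0, 760.0]), 2))  # 1.79
```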
75 FR 67093 - Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-01
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2010-P-0517] Iceberg Water Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... from the requirements of the standards of identity issued under section 401 of the Federal Food, Drug...
78 FR 2273 - Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-10
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2012-P-1189] Canned Tuna Deviating From Identity Standard; Temporary Permit for Market Testing AGENCY: Food and Drug... interstate shipment of experimental packs of food varying from the requirements of standards of identity...
Upgraded FAA Airfield Capacity Model. Volume 2. Technical Description of Revisions
1981-02-01
the threshold. t_k - the time at which departure k is released (FIGURE 3-1: TIME AXIS DIAGRAM OF SINGLE RUNWAY OPERATIONS). SIGMAR - the standard deviation of the interarrival time. SIGMAR - the standard deviation of the arrival runway occupancy time. SINGLE - program subroutine for
NASA Astrophysics Data System (ADS)
Cornejo, Juan Carlos
The Standard Model has been the most successful theory describing the fundamental interactions of particles. As of the writing of this dissertation, the Standard Model has not been shown to make a false prediction. However, the limitations of the Standard Model have long been suspected from its lack of a description of gravity or dark matter. Its largest challenge to date has been the observation of neutrino oscillations, and the implication that neutrinos may not be massless, as the Standard Model requires. The growing consensus is that the Standard Model is simply a lower energy effective field theory, and that new physics lies at much higher energies. The Qweak Experiment is testing the Electroweak theory of the Standard Model by making a precise determination of the weak charge of the proton (Qpw). Any signs of "new physics" will appear as a deviation from the Standard Model prediction. The weak charge is determined via a precise measurement of the parity-violating asymmetry of the electron-proton interaction, using elastic scattering of a longitudinally polarized electron beam off an unpolarized proton target. The experiment required that the electron beam polarization be measured to an absolute uncertainty of 1 %. At this level the electron beam polarization was projected to contribute the single largest experimental uncertainty to the parity-violating asymmetry measurement. This dissertation will detail the use of Compton scattering to determine the electron beam polarization via the detection of the scattered photon. I will conclude the dissertation with an independent analysis of the blinded Qweak data.
Methods of editing cloud and atmospheric layer affected pixels from satellite data
NASA Technical Reports Server (NTRS)
Nixon, P. R.; Wiegand, C. L.; Richardson, A. J.; Johnson, M. P. (Principal Investigator)
1982-01-01
Subvisible cirrus clouds (SCi) were easily distinguished in mid-infrared (MIR) TIROS-N daytime data from south Texas and northeast Mexico. The MIR (3.55-3.93 micrometer) pixel digital count means of the SCi affected areas were more than 3.5 standard deviations on the cold side of the scene means. (These standard deviations were made free of the effects of unusual instrument error by factoring out the Ch 3 MIR noise on the basis of detailed examination of noisy and noise-free pixels). SCi affected areas in the IR Ch 4 (10.5-11.5 micrometer) appeared cooler than the general scene, but were not as prominent as in Ch 3, being less than 2 standard deviations from the scene mean. Ch 3 and 4 standard deviations and coefficients of variation are not reliable indicators, by themselves, of the presence of SCi because land features can have similar statistical properties.
An absolute photometric system at 10 and 20 microns
NASA Technical Reports Server (NTRS)
Rieke, G. H.; Lebofsky, M. J.; Low, F. J.
1985-01-01
Two new direct calibrations at 10 and 20 microns are presented in which terrestrial flux standards are referred to infrared standard stars. These measurements give both good agreement and higher accuracy when compared with previous direct calibrations. As a result, the absolute calibrations at 10 and 20 microns have now been determined with accuracies of 3 and 8 percent, respectively. A variety of absolute calibrations based on extrapolation of stellar spectra from the visible to 10 microns are reviewed. Current atmospheric models of A-type stars underestimate their fluxes by about 10 percent at 10 microns, whereas models of solar-type stars agree well with the direct calibrations. The calibration at 20 microns can probably be determined to about 5 percent by extrapolation from the more accurate result at 10 microns. The photometric system at 10 and 20 microns is updated to reflect the new absolute calibration, to base its zero point directly on the colors of A0 stars, and to improve the accuracy in the comparison of the standard stars.
A Case Study to Improve Emergency Room Patient Flow at Womack Army Medical Center
2009-06-01
use just the previous month, moving average 2-month period ( MA2 ) uses the average from the previous two months, moving average 3-month period (MA3...ED prior to discharge by provider) MA2 /MA3/MA4 - moving averages of 2-4 months in length MAD - mean absolute deviation (measure of accuracy for
Observations on the method of determining the velocity of airships
NASA Technical Reports Server (NTRS)
Volterra, Vito
1921-01-01
To obtain the absolute velocity of an airship by knowing the speed at which two routes are covered, we have only to determine the geographical direction of the routes which we locate from a map, and the angles of routes as given by the compass, after correcting for the variation (the algebraical sum of the local magnetic declination and the deviation).
Modelling PET radionuclide production in tissue and external targets using Geant4
NASA Astrophysics Data System (ADS)
Amin, T.; Infantino, A.; Lindsay, C.; Barlow, R.; Hoehr, C.
2017-07-01
The Proton Therapy Facility in TRIUMF provides 74 MeV protons extracted from a 500 MeV H- cyclotron for ocular melanoma treatments. During treatment, positron emitting radionuclides such as 11C, 15O and 13N are produced in patient tissue. Using PET scanners, the isotopic activity distribution can be measured for in-vivo range verification. A second cyclotron, the TR13, provides 13 MeV protons onto liquid targets for the production of PET radionuclides such as 18F, 13N or 68Ga, for medical applications. The aim of this work was to validate Geant4 against FLUKA and experimental measurements for production of the above-mentioned isotopes using the two cyclotrons. The results show variable degrees of agreement. For proton therapy, the proton-range agreement was within 2 mm for 11C activity, whereas 13N disagreed. For liquid targets at the TR13, the average absolute deviation ratio between FLUKA and experiment was 1.9±2.7, whereas the average absolute deviation ratio between Geant4 and experiment was 0.6±0.4. This is due to the uncertainties present in experimentally determined reaction cross sections.
Maeda, Aya; Sakoguchi, Yoko; Miyawaki, Shouichi
2013-09-01
This report describes the treatment of a 20-year-old woman with a dental midline deviation and 7 congenitally missing premolars. She had retained a maxillary right deciduous canine and 4 deciduous second molars, and she had an impacted maxillary right third molar. The maxillary right deciduous second molar was extracted, and the space was nearly closed by mesial movement of the maxillary right molars using an edgewise appliance and a miniscrew for absolute anchorage. The miniscrew was removed, and the extraction space of the maxillary right deciduous canine was closed, correcting the dental midline deviation. After the mesial movement of the maxillary right molars, the impacted right third molar was aligned. To prevent root resorption, the retained left deciduous second molars were not aligned by the edgewise appliance. The occlusal contact area and the maximum occlusal force increased over the 2 years of retention. The miniscrew was useful for absolute anchorage for unilateral mesial movement of the maxillary molars and for the creation of eruption space and alignment of the impacted third molar in a patient with oligodontia. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
Profiling the Use of Dietary Supplements by Brazilian Physical Education Professionals.
Viana, Ricardo Borges; Silva, Maria Sebastiana; da Silva, Wellington Fernando; Campos, Mário Hebling; Andrade, Marília Dos Santos; Vancini, Rodrigo Luiz; Andre Barbosa de Lira, Claudio
2018-11-02
A survey was designed to examine the use of dietary supplements by Brazilian physical education professionals. The study included 131 Brazilian physical education professionals (83 men and 48 women). A descriptive statistical analysis was performed (mean, standard deviation, and absolute and relative frequencies). A chi-square test was applied to evaluate differences in use of dietary supplements according to particular variables of interest (p < .05). Forty-nine percent of respondents used dietary supplements. Approximately 59% of dietary supplement users took two or more kinds of supplements. Among users of supplements, male professionals (73%) consumed more dietary supplements than female professionals (27%). The most-consumed dietary supplement was whey protein (80%). The results showed a higher use of dietary supplements by men. The most-consumed supplements were rich in protein. The consumption of dietary supplements by almost half of the participants in this study suggests that participants did not consider their dietary needs to be met by normal diet alone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tridon, F.; Battaglia, A.; Luke, E.
A recently developed technique retrieving the binned raindrop size distributions (DSDs) and air state parameters from ground-based Ka- and W-band radar Doppler spectra profiles is improved and applied to a typical midlatitude rain event. The retrievals are thoroughly validated against DSD observations of a 2D video disdrometer and independent X-band observations. For this case study, profiles of rain rate, R, mean volume diameter and concentration parameter are retrieved, with low bias and standard deviations. In light rain (0.1 < R < 1 mm h-1), the radar reflectivities must be calibrated with a collocated disdrometer, which introduces random errors due to sampling mismatch between the two instruments. The best performances are obtained in moderate rain (1 < R < 20 mm h-1), where the retrieval provides self-consistent estimates of the absolute calibration and of the attenuation caused by antenna or radome wetness for both radars.
Evaluation of Piecewise Polynomial Equations for Two Types of Thermocouples
Chen, Andrew; Chen, Chiachung
2013-01-01
Thermocouples are the most frequently used sensors for temperature measurement because of their wide applicability, long-term stability and high reliability. However, one of the major utilization problems is the linearization of the transfer relation between temperature and output voltage of thermocouples. The linear calibration equation and its modules could be improved by using regression analysis to help solve this problem. In this study, two types of thermocouple and five temperature ranges were selected to evaluate the fitting agreement of different-order polynomial equations. Two quantitative criteria, the average of the absolute error values |e|ave and the standard deviation of the calibration equation estd, were used to evaluate the accuracy and precision of these calibration equations. The optimal order of polynomial equations differed with the temperature range. The accuracy and precision of the calibration equation could be improved significantly with an adequate higher-degree polynomial equation. The technique could be applied with hardware modules to serve as an intelligent sensor for temperature measurement. PMID:24351627
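The two criteria can be sketched as follows; the data, function names, and the use of NumPy's polynomial fit are our assumptions, not details from the paper:

```python
import numpy as np

def fit_calibration(voltage, temperature, order):
    # Least-squares polynomial calibration: temperature as a function of voltage
    coeffs = np.polyfit(voltage, temperature, order)
    est = np.polyval(coeffs, voltage)
    err = est - temperature
    e_ave = np.mean(np.abs(err))   # average of the absolute errors, |e|ave
    e_std = np.std(err, ddof=1)    # standard deviation of the residuals, estd
    return coeffs, float(e_ave), float(e_std)

# hypothetical thermocouple-like data with a slight quadratic nonlinearity
v = np.linspace(0.0, 10.0, 21)
t = 25.0 * v + 0.3 * v ** 2
_, e1, _ = fit_calibration(v, t, 1)  # a linear fit leaves the curvature as error
_, e2, _ = fit_calibration(v, t, 2)  # a quadratic fit captures it
print(e1 > e2)  # True
```

This mirrors the paper's finding in miniature: raising the polynomial order improves both criteria when the underlying transfer relation is nonlinear.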
Some common indices of group diversity: upper boundaries.
Solanas, Antonio; Selvam, Rejina M; Navarro, José; Leiva, David
2012-12-01
Workgroup diversity can be conceptualized as variety, separation, or disparity. Thus, the proper operationalization of diversity depends on how a diversity dimension has been defined. Analytically, minimal diversity is obtained when there are no differences on an attribute among the members of a group; however, maximal diversity has a different shape for each conceptualization of diversity. Previous work on diversity indices indicated maximum values for variety (e.g., Blau's index and Teachman's index), separation (e.g., standard deviation and mean Euclidean distance), and disparity (e.g., coefficient of variation and the Gini coefficient of concentration), although these maximum values are not valid for all group characteristics (i.e., group size and group size parity) and attribute scales (i.e., number of categories). We demonstrate analytically appropriate upper boundaries for conditional diversity determined by some specific group characteristics, avoiding the bias related to absolute diversity. This will allow applied researchers to make better interpretations regarding the relationship between group diversity and group outcomes.
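As a rough illustration of two of the named indices (the function names and data are ours; the paper's upper-boundary derivations are not reproduced here):

```python
from collections import Counter
import math

def blau_index(categories):
    # Variety: 1 - sum of squared category proportions (Blau's index)
    n = len(categories)
    return 1.0 - sum((c / n) ** 2 for c in Counter(categories).values())

def coefficient_of_variation(values):
    # Disparity: population standard deviation divided by the mean
    mean = sum(values) / len(values)
    var = sum((x - mean) ** 2 for x in values) / len(values)
    return math.sqrt(var) / mean

# two equally sized categories give Blau's index 1 - (0.5^2 + 0.5^2) = 0.5
print(round(blau_index(["a", "a", "b", "b"]), 2))  # 0.5
```

Note that the attainable maximum of Blau's index depends on group size and the number of categories, which is exactly the kind of conditional upper boundary the paper analyzes.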
NASA Astrophysics Data System (ADS)
Liu, Yang; Yang, Linghui; Guo, Yin; Lin, Jiarui; Cui, Pengfei; Zhu, Jigui
2018-02-01
An interferometer technique based on the temporal coherence function of femtosecond pulses is demonstrated for practical distance measurement. Here, the pulse-to-pulse alignment is analyzed for large-delay distance measurement. Firstly, a temporal coherence function model between two femtosecond pulses is developed in the time domain for the dispersive unbalanced Michelson interferometer. Then, according to this model, the fringe analysis and the envelope extraction process are discussed. Meanwhile, optimization methods of pulse-to-pulse alignment for practical long distance measurement are presented. The order of the curve fitting and the selection of points for envelope extraction are analyzed. Furthermore, an averaging method based on the symmetry of the coherence function is demonstrated. Finally, the performance of the proposed methods is evaluated in the absolute distance measurement of 20 μm with a path length difference of 9 m. The improvement of the standard deviation in the experimental results shows that these approaches have the potential for practical distance measurement.
Accuracy assessment of TanDEM-X IDEM using airborne LiDAR on the area of Poland
NASA Astrophysics Data System (ADS)
Woroszkiewicz, Małgorzata; Ewiak, Ireneusz; Lulkowska, Paulina
2017-06-01
The TerraSAR-X add-on for Digital Elevation Measurement (TanDEM-X) mission launched in 2010 is another programme - after the Shuttle Radar Topography Mission (SRTM) in 2000 - that uses space-borne radar interferometry to build a global digital surface model. This article presents the accuracy assessment of the TanDEM-X intermediate Digital Elevation Model (IDEM) provided by the German Aerospace Center (DLR) under the project "Accuracy assessment of a Digital Elevation Model based on TanDEM-X data" for the southwestern territory of Poland. The study area included: open terrain, urban terrain and forested terrain. Based on a set of 17,498 reference points acquired by airborne laser scanning, the mean errors of average heights and standard deviations were calculated for areas with a terrain slope below 2 degrees, between 2 and 6 degrees and above 6 degrees. The absolute accuracy of the IDEM data for the analysed area, expressed as a root mean square error (Total RMSE), was 0.77 m.
Line-blanketed model stellar atmospheres applied to Sirius. Ph.D. Thesis - Maryland Univ.
NASA Technical Reports Server (NTRS)
Fowler, J. W.
1972-01-01
The primary goal of this analysis is to determine whether the effects of atomic bound-bound transitions on stellar atmospheric structure can be represented well in models. The investigation is based on an approach which is called the method of artificial absorption edges. The method is described, developed, tested, and applied to the problem of fitting a model stellar atmosphere to Sirius. It is shown that the main features of the entire observed spectrum of Sirius can be reproduced to within the observational uncertainty by a blanketed flux-constant model with Teff = 9700 K and log g = 4.26. The profile of Hγ is reproduced completely within the standard deviations of the measurements except near line center, where non-LTE effects are expected to be significant. The equivalent width of Hγ, the Paschen slope, the Balmer jump, and the absolute flux at 5550 A all agree with the observed values.
Lipscomb, K
1980-01-01
Biplane cineradiography is a potentially powerful tool for precise measurement of intracardiac dimensions. The most systematic approach to these measurements is the creation of a three-dimensional coordinate system within the x-ray field. Using this system, interpoint distances, such as between radiopaque clips or coronary artery bifurcations, can be calculated by use of the Pythagorean theorem. Alternatively, calibration factors can be calculated in order to determine the absolute dimensions of a structure, such as a ventricle or coronary artery. However, cineradiography has two problems that have precluded widespread use of the system. These problems are pincushion distortion and variable image magnification. In this paper, methodology to quantitate and compensate for these variables is presented. The method uses radiopaque beads permanently mounted in the x-ray field. The positions of the bead images on the x-ray film determine the compensation factors. Using this system, measurements are made with a standard deviation of approximately 1% of the true value.
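The interpoint-distance step is a direct application of the Pythagorean theorem in three dimensions; a minimal sketch with hypothetical coordinates (the reconstruction of the coordinates themselves is outside this snippet):

```python
import math

def distance_3d(p, q):
    # Interpoint distance between two reconstructed 3-D coordinates
    # (Pythagorean theorem applied componentwise)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# hypothetical clip positions in millimetres
print(distance_3d((0.0, 0.0, 0.0), (3.0, 4.0, 12.0)))  # 13.0
```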
Ultraspectral sounding retrieval error budget and estimation
NASA Astrophysics Data System (ADS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larrabee L.; Yang, Ping
2011-11-01
The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).
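An ECAS-style budget reports, per parameter, the bias and the standard deviation of the retrieved-minus-truth differences; a minimal sketch (hypothetical temperatures in kelvin, our own function name, not the ECAS implementation):

```python
import numpy as np

def error_budget(retrieved, truth):
    # Bias (mean difference) and sample standard deviation of differences,
    # the two quantities reported per geophysical parameter
    d = np.asarray(retrieved) - np.asarray(truth)
    return float(np.mean(d)), float(np.std(d, ddof=1))

# hypothetical retrieved vs. reference temperatures, K
bias, sd = error_budget([287.1, 290.4, 285.2], [287.0, 290.0, 285.5])
print(round(bias, 2), round(sd, 2))  # bias ≈ 0.07 K, sd ≈ 0.35 K
```

The same two numbers can be computed in the spectral radiance domain, which is how ECAS links radiometric accuracy to retrieved-parameter accuracy.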
Multi-fidelity Gaussian process regression for prediction of random fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parussini, L.; Venturi, D., E-mail: venturi@ucsc.edu; Perdikaris, P.
We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
Ultraspectral Sounding Retrieval Error Budget and Estimation
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping
2011-01-01
The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).
Zupan, Michael F; Arata, Alan W; Dawson, Letitia H; Wile, Alfred L; Payn, Tamara L; Hannon, Megan E
2009-12-01
The Wingate Anaerobic Test (WAnT) has been established as an effective tool for measuring both muscular power and anaerobic capacity in a 30-second time period; however, there are no published normative tables by which to compare WAnT performance for men and women in intercollegiate athletics. The purpose of this study was to develop a classification system for anaerobic peak power and anaerobic capacity for men and women National Collegiate Athletic Association (NCAA) Division I college athletes using the WAnT. A total of 1,585 tests (1,374 on men and 211 on women) were conducted on athletes ranging in age from 18 to 25 years using the WAnT. Absolute and relative peak power and anaerobic capacity data were recorded. One-half standard deviations were used to set up a 7-tier classification system (poor to elite) for these assessments. These classifications can be used by athletes, coaches, and practitioners to evaluate anaerobic peak power and anaerobic capacity in their athletes.
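A half-standard-deviation, 7-tier scheme can be sketched as below; the exact cut points and tier labels are our assumptions, since the abstract does not state them:

```python
def classify(value, mean, sd):
    # Seven tiers ("poor" ... "elite") separated by half-SD-wide bands
    labels = ["poor", "fair", "below average", "average",
              "above average", "excellent", "elite"]
    # hypothetical cut points at mean ± 0.25, 0.75 and 1.25 SD
    cuts = [mean + k * sd for k in (-1.25, -0.75, -0.25, 0.25, 0.75, 1.25)]
    for label, cut in zip(labels, cuts):
        if value < cut:
            return label
    return labels[-1]

# hypothetical peak-power norms: mean 1000 W, SD 100 W
print(classify(1000.0, 1000.0, 100.0))  # average
```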
Tridon, F.; Battaglia, A.; Luke, E.; ...
2017-01-27
A recently developed technique retrieving the binned raindrop size distributions (DSDs) and air state parameters from ground-based Ka- and W-band radar Doppler spectra profiles is improved and applied to a typical midlatitude rain event. The retrievals are thoroughly validated against DSD observations of a 2D video disdrometer and independent X-band observations. For this case study, profiles of rain rate, R, mean volume diameter and concentration parameter are retrieved, with low bias and standard deviations. In light rain (0.1 < R < 1 mm h-1), the radar reflectivities must be calibrated with a collocated disdrometer, which introduces random errors due to sampling mismatch between the two instruments. The best performances are obtained in moderate rain (1 < R < 20 mm h-1), where the retrieval provides self-consistent estimates of the absolute calibration and of the attenuation caused by antenna or radome wetness for both radars.
Human Age Recognition by Electrocardiogram Signal Based on Artificial Neural Network
NASA Astrophysics Data System (ADS)
Dasgupta, Hirak
2016-12-01
The objective of this work is to make a neural network function approximation model to detect human age from the electrocardiogram (ECG) signal. The input vectors of the neural network are the Katz fractal dimension of the ECG signal, the frequencies in the QRS complex, sex (male or female, represented by a numeric constant) and the average of successive R-R peak distances of a particular ECG signal. The QRS complex has been detected by a short-time Fourier transform algorithm. The successive R peaks have been detected by first cutting the signal into periods using an auto-correlation method and then finding the absolute value of the highest point in each period. The neural network used in this problem consists of two layers, with sigmoid neurons in the input layer and a linear neuron in the output layer. The result shows the mean of errors as -0.49, 1.03, 0.79 years and the standard deviation of errors as 1.81, 1.77, 2.70 years during training, cross validation and testing with unknown data sets, respectively.
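The Katz fractal dimension used as an input feature has a standard closed form, D = log10(n) / (log10(n) + log10(d/L)), where L is the total curve length, d the maximum distance from the first sample, and n the number of steps. A minimal sketch (the ECG-specific preprocessing is omitted):

```python
import math

def katz_fd(signal):
    # Katz fractal dimension of a 1-D waveform
    n = len(signal) - 1  # number of steps along the curve
    L = sum(abs(signal[i + 1] - signal[i]) for i in range(n))        # curve length
    d = max(abs(signal[i] - signal[0]) for i in range(1, len(signal)))  # extent
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

# a straight line has fractal dimension 1
line = [0.1 * i for i in range(100)]
print(round(katz_fd(line), 3))  # 1.0
```

More irregular waveforms give values above 1, which is why the measure is useful as a compact ECG morphology feature.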
A Taxonomy of Delivery and Documentation Deviations During Delivery of High-Fidelity Simulations.
McIvor, William R; Banerjee, Arna; Boulet, John R; Bekhuis, Tanja; Tseytlin, Eugene; Torsher, Laurence; DeMaria, Samuel; Rask, John P; Shotwell, Matthew S; Burden, Amanda; Cooper, Jeffrey B; Gaba, David M; Levine, Adam; Park, Christine; Sinz, Elizabeth; Steadman, Randolph H; Weinger, Matthew B
2017-02-01
We developed a taxonomy of simulation delivery and documentation deviations noted during a multicenter, high-fidelity simulation trial that was conducted to assess practicing physicians' performance. Eight simulation centers sought to implement standardized scenarios over 2 years. Rules, guidelines, and detailed scenario scripts were established to facilitate reproducible scenario delivery; however, pilot trials revealed deviations from those rubrics. A taxonomy with hierarchically arranged terms that define a lack of standardization of simulation scenario delivery was then created to aid educators and researchers in assessing and describing their ability to reproducibly conduct simulations. Thirty-six types of delivery or documentation deviations were identified from the scenario scripts and study rules. Using a Delphi technique and open card sorting, simulation experts formulated a taxonomy of high-fidelity simulation execution and documentation deviations. The taxonomy was iteratively refined and then tested by 2 investigators not involved with its development. The taxonomy has 2 main classes, simulation center deviation and participant deviation, which are further subdivided into as many as 6 subclasses. Inter-rater classification agreement using the taxonomy was 74% or greater for each of the 7 levels of its hierarchy. Cohen kappa calculations confirmed substantial agreement beyond that expected by chance. All deviations were classified within the taxonomy. This is a useful taxonomy that standardizes terms for simulation delivery and documentation deviations, facilitates quality assurance in scenario delivery, and enables quantification of the impact of deviations upon simulation-based performance assessment.
Telling in-tune from out-of-tune: widespread evidence for implicit absolute intonation.
Van Hedger, Stephen C; Heald, Shannon L M; Huang, Alex; Rutstein, Brooke; Nusbaum, Howard C
2017-04-01
Absolute pitch (AP) is the rare ability to name or produce an isolated musical note without the aid of a reference note. One skill thought to be unique to AP possessors is the ability to provide absolute intonation judgments (e.g., classifying an isolated note as "in-tune" or "out-of-tune"). Recent work has suggested that absolute intonation perception among AP possessors is not crystallized in a critical period of development, but is dynamically maintained by the listening environment, in which the vast majority of Western music is tuned to a specific cultural standard. Given that all listeners of Western music are constantly exposed to this specific cultural tuning standard, our experiments address whether absolute intonation perception extends beyond AP possessors. We demonstrate that non-AP listeners are able to accurately judge the intonation of completely isolated notes. Both musicians and nonmusicians showed evidence for absolute intonation recognition when listening to familiar timbres (piano and violin). When testing unfamiliar timbres (triangle and inverted sine waves), only musicians showed weak evidence of absolute intonation recognition (Experiment 2). Overall, these results highlight a previously unknown similarity between AP and non-AP possessors' long-term musical note representations, including evidence of sensitivity to frequency.
Al Shafouri, N; Narvey, M; Srinivasan, G; Vallance, J; Hansen, G
2015-01-01
In neonatal hypoxic ischemic encephalopathy (HIE), hypo- and hyperglycemia have been associated with poor outcomes. However, glucose variability has not been reported in this population. To examine the association between serum glucose variability within the first 24 hours and two-year neurodevelopmental outcomes in neonates cooled for HIE. In this retrospective cohort study, glucose, clinical, and demographic data were documented from 23 term newborns treated with whole-body therapeutic hypothermia. Severe neurodevelopmental outcomes from planned two-year assessments were defined as the presence of any one of the following: Gross Motor Function Classification System levels 3 to 5, Bayley III Motor Standard Score <70, Bayley III Language Score <70, or Bayley III Cognitive Standard Score <70. The neurodevelopmental outcomes of 8 of 23 patients were considered severe, and this group demonstrated a significant increase in mean absolute glucose (MAG) change (-0.28 to -0.03, 95% CI, p = 0.032). There were no significant differences between outcome groups with regard to the number of patients with hyperglycemic means, or with one or multiple hypo- or hyperglycemic measurements. There were also no differences between the groups in mean glucose, although the mean glucose standard deviation approached significance. Poor neurodevelopmental outcomes in whole-body cooled HIE neonates are significantly associated with MAG changes. This information may be relevant for prognostication and potential management strategies.
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
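A minimal sketch of the simulation-based idea, assuming flat priors, normally distributed data, and a simple absolute-distance acceptance criterion (the paper's actual priors, distributions, and distance measure may differ); the reported summaries and sample size here are hypothetical:

```python
import random
import statistics

def abc_estimate(median_obs, min_obs, max_obs, n, n_sims=20000, seed=1):
    """ABC sketch: draw candidate (mean, sd) pairs from flat priors,
    simulate samples of size n, and keep the pairs whose simulated
    median/min/max lie closest to the reported summary statistics."""
    rng = random.Random(seed)
    scale = max_obs - min_obs
    trials = []
    for _ in range(n_sims):
        mu = rng.uniform(min_obs, max_obs)      # flat prior over the observed range
        sigma = rng.uniform(1e-6, scale)        # flat prior on the sd
        sample = sorted(rng.gauss(mu, sigma) for _ in range(n))
        med = statistics.median(sample)
        # distance between simulated and reported summaries, scaled by the range
        d = (abs(med - median_obs)
             + abs(sample[0] - min_obs)
             + abs(sample[-1] - max_obs)) / scale
        trials.append((d, mu, sigma))
    trials.sort()
    best = trials[: n_sims // 100]              # accept the closest 1%
    mean_hat = statistics.fmean(m for _, m, _ in best)
    sd_hat = statistics.fmean(s for _, _, s in best)
    return mean_hat, sd_hat

# summaries from a hypothetical study: median 10, min 4, max 16, n = 50
mean_hat, sd_hat = abc_estimate(10.0, 4.0, 16.0, 50)
```

For a roughly normal sample of this size the accepted standard deviations cluster near (max − min)/4.5, which is also the heuristic behind the moment-based estimators being compared.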
On Teaching about the Coefficient of Variation in Introductory Statistics Courses
ERIC Educational Resources Information Center
Trafimow, David
2014-01-01
The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
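The relationship the article exploits can be shown in a few lines; the scores below are invented for illustration:

```python
import statistics

scores = [72, 85, 90, 64, 78, 81, 95, 70]
mean = statistics.fmean(scores)
sd = statistics.stdev(scores)   # sample standard deviation
cv = sd / mean                  # coefficient of variation

# The CV re-expresses the standard deviation as a fraction of the mean,
# so "sd = cv * mean" gives students a concrete handle on its size.
assert abs(sd - cv * mean) < 1e-12
```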
NASA Astrophysics Data System (ADS)
Bertincourt, B.; Lagache, G.; Martin, P. G.; Schulz, B.; Conversi, L.; Dassas, K.; Maurin, L.; Abergel, A.; Beelen, A.; Bernard, J.-P.; Crill, B. P.; Dole, H.; Eales, S.; Gudmundsson, J. E.; Lellouch, E.; Moreno, R.; Perdereau, O.
2016-04-01
We compare the absolute gain photometric calibration of the Planck/HFI and Herschel/SPIRE instruments on diffuse emission. The absolute calibration of HFI and SPIRE each relies on planet flux measurements and comparison with theoretical far-infrared emission models of planetary atmospheres. We measure the photometric cross calibration between the instruments at two overlapping bands, 545 GHz/500 μm and 857 GHz/350 μm. The SPIRE maps used have been processed in the Herschel Interactive Processing Environment (Version 12) and the HFI data are from the 2015 Public Data Release 2. For our study we used 15 large fields observed with SPIRE, which cover a total of about 120 deg2. We have selected these fields carefully to provide high signal-to-noise ratio, avoid residual systematics in the SPIRE maps, and span a wide range of surface brightness. The HFI maps are bandpass-corrected to match the emission observed by the SPIRE bandpasses. The SPIRE maps are convolved to match the HFI beam and put on a common pixel grid. We measure the cross-calibration relative gain between the instruments using two methods in each field, pixel-to-pixel correlation and angular power spectrum measurements. The SPIRE/HFI relative gains are 1.047 (±0.0069) and 1.003 (±0.0080) at 545 and 857 GHz, respectively, indicating very good agreement between the instruments. These relative gains deviate from unity by much less than the uncertainty of the absolute extended emission calibration, which is about 6.4% and 9.5% for HFI and SPIRE, respectively, but the deviations are comparable to the values 1.4% and 5.5% for HFI and SPIRE if the uncertainty from models of the common calibrator can be discounted. Of the 5.5% uncertainty for SPIRE, 4% arises from the uncertainty of the effective beam solid angle, which impacts the adopted SPIRE point source to extended source unit conversion factor, highlighting that as a focus for refinement.
LD-SPatt: large deviations statistics for patterns on Markov chains.
Nuel, G
2004-01-01
Statistics on Markov chains are widely used for the study of patterns in biological sequences. Statistics on these models can be computed through several approaches. Approaches based on the central limit theorem (CLT), which produce Gaussian approximations, are among the most popular. Unfortunately, in order to find a pattern of interest, these methods have to deal with tail-distribution events for which the CLT approximation is especially poor. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) as well as empirical distribution (level 2) large deviations on Markov chains. Then, we present applications of these results, focusing on numerical issues. LD-SPatt is the name of the GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability, and show that the large deviations approximations are more reliable than the Gaussian approximations in absolute values as well as in terms of ranking, and are at least as reliable as compound Poisson approximations. We finally discuss some further possible improvements and applications of this new method.
NASA Astrophysics Data System (ADS)
Wziontek, H.; Palinkas, V.; Falk, R.; Vaľko, M.
2016-12-01
For decades, absolute gravimeters have been compared on a regular basis at the international level, starting at the International Bureau for Weights and Measures (BIPM) in 1981. Usually, these comparisons are based on constant reference values deduced from all accepted measurements acquired during the comparison period. Temporal changes between comparison epochs are usually not considered. Resolution No. 2, adopted by the IAG during the IUGG General Assembly in Prague in 2015, initiates the establishment of a Global Absolute Gravity Reference System based on key comparisons of absolute gravimeters (AG) under the International Committee for Weights and Measures (CIPM), in order to establish a common level in the microGal range. A stable and unique reference frame can only be achieved if different AG take part in different kinds of comparisons. Systematic deviations between the respective comparison reference values can be detected if the AG can be considered stable over time. The continuous operation of superconducting gravimeters (SG) at selected stations further supports the temporal link of comparison reference values by establishing a reference function over time. By a homogeneous reprocessing of different comparison epochs, including AG and SG time series at selected stations, links between several comparisons will be established and temporal comparison reference functions will be derived. In this way, comparisons at a regional level can be traced back to the level of key comparisons, providing a reference for other absolute gravimeters. We will demonstrate and discuss how such a concept can be used to support the future absolute gravity reference system.
A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY
Large-scale laboratory-and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT=[experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
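A sketch of the HORRAT calculation, assuming the predicted among-laboratories RSD comes from the familiar Horwitz equation (the truncated abstract does not specify the predictor); the laboratory results and concentration are hypothetical:

```python
import math
import statistics

def horwitz_prsd(concentration):
    """Predicted among-laboratories relative standard deviation (%) from the
    Horwitz equation; concentration is a mass fraction (g analyte / g sample)."""
    return 2 ** (1 - 0.5 * math.log10(concentration))

def horrat(lab_results, concentration):
    """HORRAT = experimentally found among-laboratories RSD / predicted RSD."""
    found_rsd = 100 * statistics.stdev(lab_results) / statistics.fmean(lab_results)
    return found_rsd / horwitz_prsd(concentration)

# hypothetical results (mg/kg) from 8 laboratories for an analyte present
# at about 1 mg/kg, i.e. a mass fraction of 1e-6
results = [0.93, 1.05, 1.11, 0.88, 1.02, 0.97, 1.08, 0.95]
ratio = horrat(results, 1e-6)
```

A HORRAT near 1 indicates among-laboratories variability in line with what the Horwitz relation predicts for that concentration; values well above 2 are commonly treated as a warning sign for the test sample.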
Morikawa, Kei; Kurimoto, Noriaki; Inoue, Takeo; Mineshita, Masamichi; Miyazawa, Teruomi
2015-01-01
Endobronchial ultrasonography using a guide sheath (EBUS-GS) is an increasingly common bronchoscopic technique, but currently no methods have been established to quantitatively evaluate EBUS images of peripheral pulmonary lesions. The purpose of this study was to evaluate whether histogram data collected from EBUS-GS images can contribute to the diagnosis of lung cancer. Histogram-based analyses focusing on the brightness of EBUS images were retrospectively conducted: 60 patients (38 with lung cancer; 22 with inflammatory diseases) with clear EBUS images were included. For each patient, a 400-pixel region of interest, typically located at a 3- to 5-mm radius from the probe, was selected from EBUS images recorded during bronchoscopy. Histogram height, width, height/width ratio, standard deviation, kurtosis, and skewness were investigated as diagnostic indicators. Median histogram height, width, height/width ratio, and standard deviation were significantly different between lung cancer and benign lesions (all p < 0.01). With a cutoff value for standard deviation of 10.5, lung cancer could be diagnosed with an accuracy of 81.7%. The other characteristics investigated were inferior to histogram standard deviation. Histogram standard deviation appears to be the most useful characteristic for diagnosing lung cancer using EBUS images. © 2015 S. Karger AG, Basel.
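A sketch of the histogram indicators and the reported cutoff, using invented brightness values in place of real EBUS pixels:

```python
import random
import statistics

def histogram_features(pixels):
    """Brightness statistics for a region of interest (e.g. 400 pixels from
    an EBUS image): standard deviation, skewness, and excess kurtosis."""
    mu = statistics.fmean(pixels)
    sd = statistics.pstdev(pixels)
    z3 = statistics.fmean(((p - mu) / sd) ** 3 for p in pixels)
    z4 = statistics.fmean(((p - mu) / sd) ** 4 for p in pixels) - 3
    return {"sd": sd, "skewness": z3, "kurtosis": z4}

def classify(pixels, cutoff=10.5):
    """The study's best-performing rule: a histogram standard deviation
    above the cutoff suggests lung cancer rather than a benign lesion."""
    return "suspicious" if histogram_features(pixels)["sd"] > cutoff else "benign-like"

# hypothetical ROI: 400 brightness values with a wide spread
rng = random.Random(0)
roi = [min(255.0, max(0.0, rng.gauss(120, 20))) for _ in range(400)]
label = classify(roi)
```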
Role of the standard deviation in the estimation of benchmark doses with continuous data.
Gaylor, David W; Slikker, William
2004-12-01
For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
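A numerical illustration of the point, with invented values for the two standard deviations: treating the overall spread among animal averages as if it were the between-animal spread inflates the apparent variability.

```python
import math

# Between-animal sd (the quantity the benchmark dose should use) and
# within-animal measurement-error sd; both values are illustrative only.
s_a = 3.0
s_m = 2.0
k = 1  # measurements per animal

# The sd among per-animal averages also carries measurement error:
s_overall = math.sqrt(s_a**2 + s_m**2 / k)

# Using s_overall in place of s_a overstates the spread attributed to the
# animals, so the benchmark dose is overestimated and risk underestimated.
inflation = s_overall / s_a

# The article's rule of thumb: the bias is small once s_m < s_a / 3.
s_m_small = s_a / 3
inflation_small = math.sqrt(s_a**2 + s_m_small**2 / k) / s_a
```

With s_m at two-thirds of s_a the overall sd overstates s_a by about 20%, while at one-third of s_a the overstatement drops to about 5%, matching the article's guidance.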
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the lengths of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
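The composite idea can be sketched as a weighted blend of forward forecasts and backcasts across an estimation gap; the linear weighting used here is an assumption for illustration, not the report's exact scheme:

```python
def composite_estimates(forecasts, backcasts):
    """Fill a gap in a flow record by blending a forward forecast (from the
    record before the gap) with a backcast (from the record after it).
    Weights are assumed linear in position, favouring whichever end of the
    gap is nearer, so the series transitions smoothly into measured flows."""
    n = len(forecasts)
    assert len(backcasts) == n
    out = []
    for i in range(n):
        w = (n - i) / (n + 1)  # weight on the forecast, high near the gap start
        out.append(w * forecasts[i] + (1 - w) * backcasts[i])
    return out

# hypothetical 5-day gap in a log-flow series
fore = [2.00, 2.05, 2.10, 2.15, 2.20]  # lead-1..5 forward (TFN-style) forecasts
back = [2.10, 2.12, 2.14, 2.16, 2.18]  # backcasts from the reverse-ordered series
blend = composite_estimates(fore, back)
```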
In-vivo studies of reflectance pulse oximeter sensor
NASA Astrophysics Data System (ADS)
Ling, Jian; Takatani, Setsuo; Noon, George P.; Nose, Yukihiko
1993-08-01
Reflectance oximetry offers the advantage of being applicable to any portion of the body. However, the major problem of reflectance oximetry is a low pulsatile signal level, which prevents prolonged clinical application during extreme situations such as hypothermia and vasoconstriction. In order to improve the pulsatile signal level of the reflectance pulse oximeter, and thus its accuracy, three different sensors, with separation distances (SPD) between the light emitting diode (LED) and photodiode of 3, 5, and 7 mm respectively, were studied on nine healthy volunteers. With increasing SPD, it was found that both the red (660 nm) and near-infrared (830 nm) pulsatile-to-average signal ratios (AC/DC) increased, and the standard deviation of the (AC/DC)red/(AC/DC)infrared ratio decreased, in spite of the decrease in absolute signal level. Further clinical studies of the 3 mm and 7 mm SPD sensors on seven patients also showed that the (AC/DC)red/(AC/DC)infrared ratio measured by the 7 mm sensor was less disturbed during surgery than that of the 3 mm sensor. A theoretical study based on three-dimensional photon diffusion theory supports the experimental and clinical results. In conclusion, the 7 mm sensor has the highest signal-to-noise ratio among the three sensors. A new 7 mm SPD reflectance sensor, with an increased number of LEDs around the photodiode, was designed to increase the AC/DC ratio as well as the absolute signal level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cherpak, Amanda
Purpose: The Octavius 1000 SRS detector was commissioned in December 2014 and is used routinely for verification of all SRS and SBRT plans. Results of verifications were analyzed to assess trends and limitations of the device and planning methods. Methods: Plans were delivered using a True Beam STx and results were evaluated using gamma analysis (95%, 3%/3mm) and absolute dose difference (5%). Verification results were analyzed based on several plan parameters, including tumour volume, degree of modulation, and prescribed dose. Results: During a 12-month period, a total of 124 patient plans were verified using the Octavius detector. Thirteen plans failed the gamma criteria, while 7 plans failed based on the absolute dose difference. When binned according to degree of modulation, a significant correlation was found between MU/cGy and both mean dose difference (r=0.78, p<0.05) and gamma (r=−0.60, p<0.05). When data were binned according to tumour volume, the standard deviation of average gamma dropped from 2.2%–3.7% for volumes less than 30 cm³ to below 1% for volumes greater than 30 cm³. Conclusions: The majority of plans and verification failures involved tumour volumes smaller than 30 cm³. This was expected due to the nature of disease treated with SBRT and SRS techniques and did not increase the rate of failure. Correlations found with MU/cGy indicate that as modulation increased, results deteriorated, but not beyond the previously set thresholds.
NASA Astrophysics Data System (ADS)
Szczapa, Tomasz; Karpiński, Łukasz; Moczko, Jerzy; Weindling, Michael; Kornacka, Alicja; Wróblewska, Katarzyna; Adamczak, Aleksandra; Jopek, Aleksandra; Chojnacka, Karolina; Gadzinowski, Janusz
2013-08-01
The aim of this study is to compare a two-wavelength light emitting diode-based tissue oximeter (INVOS), which is designed to show trends in tissue oxygenation, with a four-wavelength laser-based oximeter (FORE-SIGHT), designed to deliver absolute values of tissue oxygenation. Simultaneous values of cerebral tissue oxygenation (StO2) are measured using both devices in 15 term and 15 preterm clinically stable newborns on the first and third day of life. Values are recorded simultaneously in two periods, between which the oximeter sensor positions are switched to the contralateral side. Agreement between StO2 values before and after the change of sensor position is analyzed. We find that mean cerebral StO2 values are similar between devices for term and preterm babies, but INVOS shows StO2 values spread over a wider range, with wider standard deviations than shown by the FORE-SIGHT. There is relatively good agreement, with a bias up to 3.5% and limits of agreement up to 11.8%. Measurements from each side of the forehead show better repeatability for the FORE-SIGHT monitor. We conclude that the performance of the two devices is probably acceptable for clinical purposes. Both performed sufficiently well, but the use of FORE-SIGHT may be associated with a tighter range and better repeatability of data.
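A sketch of the agreement analysis implied by the reported bias and limits of agreement (a Bland-Altman-style calculation), with invented paired readings:

```python
import statistics

def bland_altman(a, b):
    """Agreement between two monitors: bias (mean difference) and 95%
    limits of agreement (bias ± 1.96 sd of the paired differences)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical simultaneous cerebral StO2 readings (%) from two oximeters
invos = [68, 75, 80, 62, 71, 77, 83, 65, 70, 74]
foresight = [70, 73, 78, 66, 72, 75, 80, 68, 71, 73]
bias, (lo, hi) = bland_altman(invos, foresight)
```

A bias near zero with narrow limits of agreement indicates the two devices can be used interchangeably; wide limits, as reported here (up to 11.8%), mean individual readings may still differ substantially.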
A precision measurement of the neutron d2: Probing the color force
DOE Office of Scientific and Technical Information (OSTI.GOV)
Posik, Matthew R.
2014-01-01
The g2 nucleon spin-dependent structure function measured in electron deep inelastic scattering contains information beyond the simple parton model description of the nucleon. It provides insight into quark-gluon correlations and a path to access the confining local color force a struck quark experiences, due to the remnant di-quark, just as it is hit by the virtual photon. The quantity d2, a measure of this local color force, has its information encoded in an x2-weighted integral of a linear combination of the spin structure functions g1 and g2, and thus is dominated by the valence-quark region at large momentum fraction x. To date, theoretical calculations and experimental measurements of the neutron d2 differ by about two standard deviations. Therefore, JLab experiment E06-014, performed in Hall A, made a precision measurement of this quantity at two mean four-momentum transfer values of 3.21 and 4.32 GeV2. Double-spin asymmetries and absolute cross-sections were measured in both DIS and resonance regions by scattering longitudinally polarized electrons at beam energies of 4.74 and 5.89 GeV from a longitudinally and transversely polarized 3He target. Results for the absolute cross-sections and spin structure functions on 3He will be presented in the dissertation, as well as results for the neutron d2 and extracted color forces.
Landing Gear Noise Prediction and Analysis for Tube-and-Wing and Hybrid-Wing-Body Aircraft
NASA Technical Reports Server (NTRS)
Guo, Yueping; Burley, Casey L.; Thomas, Russell H.
2016-01-01
Improvements and extensions to landing gear noise prediction methods are developed. New features include installation effects such as reflection from the aircraft, gear truck angle effect, local flow calculation at the landing gear locations, gear size effect, and directivity for various gear designs. These new features have not only significantly improved the accuracy and robustness of the prediction tools, but also have enabled applications to unconventional aircraft designs and installations. Systematic validations of the improved prediction capability are then presented, including parametric validations in functional trends as well as validations in absolute amplitudes, covering a wide variety of landing gear designs, sizes, and testing conditions. The new method is then applied to selected concept aircraft configurations in the portfolio of the NASA Environmentally Responsible Aviation Project envisioned for the timeframe of 2025. The landing gear noise levels are on the order of 2 to 4 dB higher than previously reported predictions due to increased fidelity in accounting for installation effects and gear design details. With the new method, it is now possible to reveal and assess the unique noise characteristics of landing gear systems for each type of aircraft. To address the inevitable uncertainties in predictions of landing gear noise models for future aircraft, an uncertainty analysis is given, using the method of Monte Carlo simulation. The standard deviation of the uncertainty in predicting the absolute level of landing gear noise is quantified and determined to be 1.4 EPNL dB.
Comparison of MM/GBSA calculations based on explicit and implicit solvent simulations.
Godschalk, Frithjof; Genheden, Samuel; Söderhjelm, Pär; Ryde, Ulf
2013-05-28
Molecular mechanics with generalised Born and surface area solvation (MM/GBSA) is a popular method to calculate the free energy of the binding of ligands to proteins. It involves molecular dynamics (MD) simulations with an explicit solvent of the protein-ligand complex to give a set of snapshots for which energies are calculated with an implicit solvent. This change in the solvation method (explicit → implicit) would strictly require that the energies are reweighted with the implicit-solvent energies, which is normally not done. In this paper we calculate MM/GBSA energies with two generalised Born models for snapshots generated by the same methods or by explicit-solvent simulations for five synthetic N-acetyllactosamine derivatives binding to galectin-3. We show that the resulting energies are very different both in absolute and relative terms, showing that the change in the solvent model is far from innocent and that standard MM/GBSA is not a consistent method. The ensembles generated with the various solvent models are quite different, with root-mean-square deviations of 1.2-1.4 Å. The ensembles can be converted to each other by performing short MD simulations with the new method, but the convergence is slow, showing mean absolute differences in the calculated energies of 6-7 kJ mol⁻¹ after 2 ps simulations. Minimisations show even slower convergence and there are strong indications that the energies obtained from minimised structures are different from those obtained by MD.
Autoshaping as a psychophysical paradigm: Absolute visual sensitivity in the pigeon
Passe, Dennis H.
1981-01-01
A classical conditioning procedure (autoshaping) was used to determine absolute visual threshold in the pigeon. This method provides the basis for a standardized visual psychophysical paradigm. PMID:16812228
Neural network versus classical time series forecasting models
NASA Astrophysics Data System (ADS)
Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam
2017-05-01
Artificial neural networks (ANN) have an advantage in time series forecasting, as they have the potential to solve complex forecasting problems. This is because an ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving-average model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. Forecast accuracy was evaluated using the mean absolute deviation, root mean square error, and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when the Box-Cox transformation was used for data preprocessing.
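The three accuracy measures can be computed directly; the price series and forecasts below are invented:

```python
import math

def forecast_errors(actual, predicted):
    """The three accuracy measures used in the study: mean absolute
    deviation (MAD), root mean square error (RMSE), and mean absolute
    percentage error (MAPE)."""
    e = [a - p for a, p in zip(actual, predicted)]
    mad = sum(abs(x) for x in e) / len(e)
    rmse = math.sqrt(sum(x * x for x in e) / len(e))
    mape = 100 * sum(abs(x) / abs(a) for x, a in zip(e, actual)) / len(e)
    return mad, rmse, mape

# hypothetical gold prices and one model's forecasts
actual = [1200, 1210, 1195, 1220, 1240]
predicted = [1190, 1215, 1200, 1210, 1250]
mad, rmse, mape = forecast_errors(actual, predicted)
```

MAD and RMSE are in the units of the series (RMSE penalizing large errors more), while MAPE is scale-free, which is why studies comparing models often report all three.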
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Deviations. 961.4 Section 961.4 Energy DEPARTMENT OF ENERGY STANDARD CONTRACT FOR DISPOSAL OF SPENT NUCLEAR FUEL AND/OR HIGH-LEVEL RADIOACTIVE WASTE General § 961.4 Deviations. Requests for authority to deviate from this part shall be submitted in writing to...
3D shape measurements with a single interferometric sensor for in-situ lathe monitoring
NASA Astrophysics Data System (ADS)
Kuschmierz, R.; Huang, Y.; Czarske, J.; Metschke, S.; Löffler, F.; Fischer, A.
2015-05-01
Temperature drifts, tool deterioration, unknown vibrations, and spindle play are major effects that decrease the achievable precision of computerized numerically controlled (CNC) lathes and lead to shape deviations between the processed work pieces. Since currently no measurement system exists for fast, precise, in-situ 3D shape monitoring with keyhole access, much effort has to be made to simulate and compensate for these effects. We therefore introduce an optical interferometric sensor for absolute 3D shape measurements, which was integrated into a working lathe. Matched to the spindle rotational speed, a measurement rate of 2,500 Hz was achieved. In-situ absolute shape, surface profile, and vibration measurements are presented. While thermal drifts of the sensor led to errors of several µm in the absolute shape, reference measurements with a coordinate machine show that the surface profile could be measured with an uncertainty below one micron. Additionally, the spindle play of 0.8 µm was measured with the sensor.
A global algorithm for estimating Absolute Salinity
NASA Astrophysics Data System (ADS)
McDougall, T. J.; Jackett, D. R.; Millero, F. J.; Pawlowicz, R.; Barker, P. M.
2012-12-01
The International Thermodynamic Equation of Seawater - 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity. When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg-1 in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p) in the world ocean. To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly, however, are limited by the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).
Santric-Milicevic, M; Vasic, V; Terzic-Supic, Z
2016-08-15
In times of austerity, econometric health knowledge helps policy-makers understand and balance health expenditure against health care plans within fiscal constraints. The objective of this study is to explore whether the health workforce supply of the public health care sector, population number, and utilization of inpatient care contribute significantly to total health expenditure. The dependent variable is total health expenditure (THE) in Serbia from 2003 to 2011. The independent variables are the number of health workers employed in the public health care sector, population number, and inpatient care discharges per 100 population. The statistical analyses include the quadratic interpolation method, natural logarithm and differentiation, and multiple linear regression; the level of significance is set at P < 0.05. The regression model captures 90% of all variation in the dependent variable (adjusted R squared), and the model is significant (P < 0.001). The growth rate of total health expenditure increased by 1.21 standard deviations with an increase in the health workforce growth rate of 1 standard deviation, decreased by 1.12 standard deviations with an increase in the (negative) population growth rate of 1 standard deviation, and increased by 0.38 standard deviations with an increase in the growth rate of inpatient care discharges per 100 population of 1 standard deviation (P < 0.001). The results demonstrate that the government has been making an effort to strongly control health budget growth. Exploring causal relationships between health expenditure and health workforce is important for countries trying to consolidate their public health finances while achieving universal health coverage.
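Effects reported in standard-deviation units, as in this abstract, are standardized regression coefficients: the coefficients of an ordinary least-squares fit on z-scored variables. A minimal sketch on synthetic data (not the Serbian series, and with a longer sample than the nine-year study for numerical stability):

```python
import numpy as np

# Synthetic illustration: regression effects in SD units are the
# coefficients of a least-squares fit on z-scored variables.
rng = np.random.default_rng(0)
n = 40
workforce = rng.normal(size=n)
population = rng.normal(size=n)
discharges = rng.normal(size=n)
expenditure = (1.2 * workforce - 1.1 * population + 0.4 * discharges
               + rng.normal(scale=0.1, size=n))

def zscore(x):
    return (x - x.mean()) / x.std(ddof=1)

X = np.column_stack([zscore(workforce), zscore(population), zscore(discharges)])
y = zscore(expenditure)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[i] = change in expenditure, in SDs, per 1-SD change in predictor i
```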
Static Scene Statistical Non-Uniformity Correction
2015-03-01
NUC Non-Uniformity Correction; RMSE Root Mean Squared Error; RSD Relative Standard Deviation; S3NUC Static Scene Statistical Non-Uniformity Correction … the Relative Standard Deviation (RSD), which normalizes the standard deviation, σ, to the mean estimated value, µ, using the equation RSD = (σ/µ) × 100. The RSD plot of the gain estimates is shown in Figure 4.1(b); it shows that after a sample size of approximately 10, the different photocount values and the inclusion…
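The RSD normalization quoted above is straightforward to compute; a minimal sketch:

```python
import statistics

def relative_standard_deviation(values):
    """RSD (%) = (sigma / mu) * 100, as defined above."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

rsd = relative_standard_deviation([9.8, 10.0, 10.2])  # sigma 0.2, mu 10 -> 2%
```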
Effect of multizone refractive multifocal contact lenses on standard automated perimetry.
Madrid-Costa, David; Ruiz-Alcocer, Javier; García-Lázaro, Santiago; Albarrán-Diego, César; Ferrer-Blasco, Teresa
2012-09-01
The aim of this study was to evaluate whether the creation of 2 foci (distance and near) provided by multizone refractive multifocal contact lenses (CLs) for presbyopia correction affects measurements on Humphrey 24-2 Swedish interactive threshold algorithm (SITA) standard automated perimetry (SAP). In this crossover study, 30 subjects were fitted in random order with either a multifocal CL or a monofocal CL. After 1 month, a Humphrey 24-2 SITA standard strategy was performed. The visual field global indices (mean deviation [MD] and pattern standard deviation [PSD]), reliability indices, test duration, and number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% on pattern deviation probability plots were determined and compared between multifocal and monofocal CLs. Thirty eyes of 30 subjects were included. There were no statistically significant differences in reliability indices or test duration. There was a statistically significant reduction in the MD with the multifocal CL compared with the monofocal CL (P=0.001). No differences were found in the PSD or in the number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% in the pattern deviation probability maps. These results suggest that the multizone refractive lens produces a generalized depression in threshold sensitivity as measured by Humphrey 24-2 SITA SAP.
Stenzel, O; Wilbrandt, S; Wolf, J; Schürmann, M; Kaiser, N; Ristau, D; Ehlers, H; Carstens, F; Schippel, S; Mechold, L; Rauhut, R; Kennedy, M; Bischoff, M; Nowitzki, T; Zöller, A; Hagedorn, H; Reus, H; Hegemann, T; Starke, K; Harhausen, J; Foest, R; Schumacher, J
2017-02-01
Random effects in the repeatability of refractive index and absorption edge position of tantalum pentoxide layers prepared by plasma-ion-assisted electron-beam evaporation, ion beam sputtering, and magnetron sputtering are investigated and quantified. Standard deviations in refractive index between 4×10⁻⁴ and 4×10⁻³ have been obtained. The lowest standard deviations in refractive index, close to our detection threshold, were achieved by both ion beam sputtering and plasma-ion-assisted deposition. Relative to the corresponding mean values, the standard deviations in band-edge position and refractive index are of similar order.
Closed-form confidence intervals for functions of the normal mean and standard deviation.
Donner, Allan; Zou, G Y
2012-08-01
Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
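A sketch of the recovered-variance idea for one such function, the upper Bland-Altman limit of agreement (mean + 1.96 SD). The MOVER-style combination below is a common form of this approach, not necessarily the authors' exact formulas, and the critical values are table lookups for n = 50 supplied here as part of the illustration:

```python
import math

# Closed-form CI for the upper limit of agreement, mean + 1.96*sd, built
# by recovering variance information from separate CIs for the mean and
# the SD (a MOVER-style sketch, not the authors' exact formulas).
N = 50
T_CRIT = 2.0096   # t_{0.975, 49} (table lookup)
CHI2_HI = 70.22   # chi-square_{0.975, 49}
CHI2_LO = 31.55   # chi-square_{0.025, 49}

def loa_upper_ci(mean, sd, z=1.96):
    ml = mean - T_CRIT * sd / math.sqrt(N)   # CI for the mean
    mu = mean + T_CRIT * sd / math.sqrt(N)
    sl = sd * math.sqrt((N - 1) / CHI2_HI)   # CI for the SD
    su = sd * math.sqrt((N - 1) / CHI2_LO)
    theta = mean + z * sd                    # upper limit of agreement
    lower = theta - math.sqrt((mean - ml) ** 2 + (z * (sd - sl)) ** 2)
    upper = theta + math.sqrt((mu - mean) ** 2 + (z * (su - sd)) ** 2)
    return lower, upper

lo, hi = loa_upper_ci(0.0, 1.0)  # closed form: no iteration or simulation
```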
McClure, Foster D; Lee, Jung K
2005-01-01
Sample size formulas are developed to estimate the repeatability and reproducibility standard deviations (s_r and s_R) such that the actual errors in s_r and s_R relative to their respective true values, σ_r and σ_R, are at predefined levels. The statistical consequences associated with the AOAC INTERNATIONAL required sample size for validating an analytical method are discussed. In addition, formulas to estimate the uncertainties of s_r and s_R were derived and are provided as supporting documentation. Formula for the Number of Replicates Required for a Specified Margin of Relative Error in the Estimate of the Repeatability Standard Deviation.
ACCESS: integration and pre-flight performance
NASA Astrophysics Data System (ADS)
Kaiser, Mary Elizabeth; Morris, Matthew J.; Aldoroty, Lauren N.; Pelton, Russell; Kurucz, Robert; Peacock, Grant O.; Hansen, Jason; McCandliss, Stephan R.; Rauscher, Bernard J.; Kimble, Randy A.; Kruk, Jeffrey W.; Wright, Edward L.; Orndorff, Joseph D.; Feldman, Paul D.; Moos, H. Warren; Riess, Adam G.; Gardner, Jonathan P.; Bohlin, Ralph; Deustua, Susana E.; Dixon, W. V.; Sahnow, David J.; Perlmutter, Saul
2017-09-01
Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. ACCESS, "Absolute Color Calibration Experiment for Standard Stars", is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35–1.7 μm bandpass. This paper describes the sub-system testing, payload integration, avionics operations, and data transfer for the ACCESS instrument.
One-milliarcsecond precision parallax studies in the regions of Delta Cephei and EV Lacertae
NASA Technical Reports Server (NTRS)
Gatewood, George; De Jonge, Kiewiet Joost; Stephenson, Bruce
1993-01-01
Trigonometric parallaxes for stars in the regions of the variable stars delta Cephei and EV Lacertae are derived from data collected with the Multichannel Astrometric Photometer (MAP) and the Thaw Refractor of the University of Pittsburgh's Allegheny Observatory. The weighted mean parallax of all trigonometric studies of delta Cephei is now +0.0030 ± 0.00093 arcsec, corresponding to a distance modulus of 7.61 ± 0.67 mag. This indicates that this luminosity standard star is approximately one standard deviation more distant than has been generally accepted. The weighted mean trigonometric parallax of all studies of the variable star EV Lacertae (BD +43 deg 4305) is +0.1993 ± 0.00093 arcsec, implying a distance modulus of -1.498 ± 0.010 mag. The calculated absolute magnitude of this star is almost exactly that predicted by its (R-I)(sub Kron) magnitude and by the Gliese (R-I) main-sequence value for stars in the solar neighborhood. We also find a parallax of 0.0189 ± 0.0008 arcsec for the F0 IVn star HR 8666 (BD +43 deg 4300). The derived luminosity of this star is midway between that expected for luminosity class IV and V stars at the indicated temperature.
Sand dune ridge alignment effects on surface BRF over the Libya-4 CEOS calibration site.
Govaerts, Yves M
2015-02-03
The Libya-4 desert area, located in the Great Sand Sea, is one of the most important bright desert CEOS pseudo-invariant calibration sites by virtue of its size and radiometric stability. The site is intensively used for radiometer drift monitoring, sensor intercalibration, and as an absolute calibration reference based on simulated radiances traceable to the SI standard. The Libya-4 morphology is composed of oriented sand dunes shaped by dominant winds. The effects of sand dune spatial organization on the surface bidirectional reflectance factor are analyzed in this paper using Raytran, a 3D radiative transfer model. The topography is characterized with the 30 m resolution ASTER digital elevation model. Four region-of-interest sizes, ranging from 10 km up to 100 km, are analyzed. Results show that sand dunes generate more backscattering than forward scattering at the surface. The mean surface reflectance averaged over different viewing and illumination angles is largely independent of the size of the selected area, though the standard deviation differs. The Sun azimuth position has an effect on the surface reflectance field that is more pronounced for high Sun zenith angles. Such 3D azimuthal effects should be taken into account to reduce the simulated radiance uncertainty over Libya-4 below 3% for wavelengths larger than 600 nm.
Pleil, Joachim D
2016-01-01
This commentary is the second of a series outlining one specific concept in interpreting biomarker data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step: the choice between the standard error of the mean and the calculated standard deviation to compare or predict measurement results.
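The distinction at issue can be shown in a few lines: the SD describes the spread of individual measurements, while the SEM (SD/√n) describes the uncertainty of the estimated mean and shrinks as n grows. Values below are illustrative:

```python
import math
import statistics

# SD: spread of individual measurements. SEM: uncertainty of their mean.
values = [4.1, 3.9, 4.3, 4.0, 4.2]
sd = statistics.stdev(values)             # describes the data themselves
sem = sd / math.sqrt(len(values))         # shrinks as more data are collected
```

Use the SD when characterizing biological variability across samples, and the SEM when stating how precisely the mean itself is known.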
NASA Technical Reports Server (NTRS)
Sheldon, R. B.
1994-01-01
We have studied the transport and loss of H(+), He(+), and He(++) ions in the Earth's quiet time ring current (1 to 300 keV/e, 3 to 7 R(sub E), Kp less than 2+, absolute value of Dst less than 11, 70 to 110 deg pitch angles, all LT), comparing the standard radial diffusion model developed for the higher-energy radiation belt particles with measurements of the lower energy ring current ions in a previous paper. Large deviations from that model, which fit only 50% of the data to within a factor of 10, suggested that another transport mechanism is operating in the ring current. Here we derive a modified diffusion coefficient, corrected for electric field effects on ring current energy ions, that fits nearly 80% of the data to within a factor of 2. Thus we infer that electric field fluctuations from the low-latitude to midlatitude ionosphere (ionospheric dynamo) dominate the ring current transport, rather than high-latitude or solar wind fluctuations. Much of the remaining deviation may arise from convective electric field transport of the E less than 30 keV particles. Since convection effects cannot be correctly treated with this azimuthally symmetric model, we defer treatment of the lowest-energy ions to a subsequent paper. We give chi-squared contours for the best fit, showing the dependence of the fit upon the internal/external spectral power of the predicted electric and magnetic field fluctuations.
Christensen, A L; Lundbye-Christensen, S; Dethlefsen, C
2011-12-01
Several statistical methods of assessing seasonal variation are available. Brookhart and Rothman [3] proposed a second-order moment-based estimator based on the geometrical model derived by Edwards [1], and reported that this estimator is superior to Edwards' estimator in estimating the peak-to-trough ratio of seasonal variation with respect to bias and mean squared error. Alternatively, seasonal variation may be modelled using a Poisson regression model, which provides flexibility in modelling the pattern of seasonal variation and adjusting for covariates. Based on a Monte Carlo simulation study, three estimators, one based on the geometrical model and two based on log-linear Poisson regression models, were evaluated with regard to bias and standard deviation (SD). We evaluated the estimators on data simulated according to schemes varying in seasonal variation and presence of a secular trend. All methods and analyses in this paper are available in the R package Peak2Trough [13]. Applying a Poisson regression model resulted in lower absolute bias and SD for data simulated according to the corresponding model assumptions. Poisson regression models also had lower bias and SD than the geometrical model for data simulated to deviate from the corresponding model assumptions. This simulation study encourages the use of Poisson regression models, as opposed to the geometrical model, in estimating the peak-to-trough ratio of seasonal variation. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
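A log-linear Poisson model with one annual harmonic gives the peak-to-trough ratio as exp(2·sqrt(b1² + b2²)). The sketch below fits such a model by iteratively reweighted least squares on simulated monthly counts; it illustrates the approach, not the Peak2Trough package itself:

```python
import numpy as np

def poisson_peak_to_trough(counts):
    """Fit log mu_t = b0 + b1*cos(2*pi*t/12) + b2*sin(2*pi*t/12) by IRLS
    and return the peak-to-trough ratio exp(2*sqrt(b1^2 + b2^2))."""
    t = np.arange(len(counts))
    X = np.column_stack([np.ones(len(counts)),
                         np.cos(2 * np.pi * t / 12),
                         np.sin(2 * np.pi * t / 12)])
    beta = np.array([np.log(counts.mean()), 0.0, 0.0])  # sensible start
    for _ in range(25):                                  # IRLS iterations
        mu = np.exp(X @ beta)
        z = X @ beta + (counts - mu) / mu                # working response
        W = mu                                           # working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return float(np.exp(2 * np.hypot(beta[1], beta[2])))

# Ten years of simulated monthly counts with a true ratio of exp(0.6) ~ 1.82
rng = np.random.default_rng(1)
months = np.arange(120)
mu_true = np.exp(3.0 + 0.3 * np.cos(2 * np.pi * months / 12))
counts = rng.poisson(mu_true).astype(float)
ratio = poisson_peak_to_trough(counts)
```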
Absolute calibration of a hydrogen discharge lamp in the vacuum ultraviolet
NASA Technical Reports Server (NTRS)
Nealy, J. E.
1975-01-01
A low-pressure hydrogen discharge lamp was calibrated for radiant intensity in the vacuum ultraviolet spectral region on an absolute basis and was employed as a laboratory standard source in spectrograph calibrations. This calibration was accomplished through the use of a standard photodiode detector obtained from the National Bureau of Standards together with onsite measurements of spectral properties of optical components used. The stability of the light source for use in the calibration of vacuum ultraviolet spectrographs and optical systems was investigated and found to be amenable to laboratory applications. The lamp was studied for a range of operating parameters; the results indicate that with appropriate peripheral instrumentation, the light source can be used as a secondary laboratory standard source when operated under preset controlled conditions. Absolute intensity measurements were recorded for the wavelengths 127.7, 158.0, 177.5, and 195.0 nm for a time period of over 1 month, and the measurements were found to be repeatable to within 11 percent.
Hopper, John L.
2015-01-01
How can the “strengths” of risk factors, in the sense of how well they discriminate cases from controls, be compared when they are measured on different scales such as continuous, binary, and integer? Given that risk estimates take into account other fitted and design-related factors, and that is how risk gradients are interpreted, so should the presentation of risk gradients. Therefore, for each risk factor X0, I propose using appropriate regression techniques to derive from appropriate population data the best fitting relationship between the mean of X0 and all the other covariates fitted in the model or adjusted for by design (X1, X2, … , Xn). The odds per adjusted standard deviation (OPERA) presents the risk association for X0 in terms of the change in risk per adjusted standard deviation of X0, that is, the standard deviation of X0 adjusted for X1, X2, … , Xn, rather than the unadjusted standard deviation of X0 itself. If the increased risk is relative risk (RR)-fold over A adjusted standard deviations, then OPERA = exp[ln(RR)/A] = RR^(1/A). This unifying approach is illustrated by considering breast cancer and published risk estimates. OPERA estimates are by definition independent and can be used to compare the predictive strengths of risk factors across diseases and populations. PMID:26520360
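The OPERA conversion is a one-liner: a relative risk RR observed across A adjusted standard deviations corresponds to RR^(1/A) per adjusted standard deviation:

```python
import math

def opera(rr, a_sd):
    """Per-adjusted-SD risk gradient: OPERA = exp(ln(RR)/A) = RR**(1/A)."""
    return math.exp(math.log(rr) / a_sd)

per_sd = opera(4.0, 2.0)  # a 4-fold risk spread over 2 adjusted SDs
```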
NASA Astrophysics Data System (ADS)
Muji Susantoro, Tri; Wikantika, Ketut; Saepuloh, Asep; Handoyo Harsolumakso, Agus
2018-05-01
Selection of vegetation indices in plant mapping is needed to provide the best information on plant conditions. The methods used in this research are the standard deviation and linear regression. This research sought to determine the vegetation indices best suited to mapping sugarcane conditions around oil and gas fields. The data used in this study are Landsat 8 OLI/TIRS imagery. The standard deviation analysis of the 23 vegetation indices with 27 samples yielded the six highest-standard-deviation vegetation indices, namely GRVI, SR, NLI, SIPI, GEMI, and LAI, with standard deviation values of 0.47, 0.43, 0.30, 0.17, 0.16, and 0.13. Regression correlation analysis of the 23 vegetation indices with 280 samples yielded six vegetation indices, namely NDVI, ENDVI, GDVI, VARI, LAI, and SIPI, selected against an R² threshold of 0.8. The combined analysis of the standard deviation and the regression correlation yielded five vegetation indices, namely NDVI, ENDVI, GDVI, LAI, and SIPI. The results of both methods show that combining the two is needed to produce a good analysis of sugarcane conditions. This was verified through field surveys, which showed good results for the prediction of microseepages.
Intensity stabilisation of optical pulse sequences for coherent control of laser-driven qubits
NASA Astrophysics Data System (ADS)
Thom, Joseph; Yuen, Ben; Wilpers, Guido; Riis, Erling; Sinclair, Alastair G.
2018-05-01
We demonstrate a system for intensity stabilisation of optical pulse sequences used in laser-driven quantum control of trapped ions. Intensity instability is minimised by active stabilisation of the power (over a dynamic range of >10⁴) and position of the focused beam at the ion. The fractional Allan deviations in power were found to be <2.2 × 10⁻⁴ for averaging times from 1 to 16,384 s. Over similar times, the absolute Allan deviation of the beam position is <0.1 μm for a 45 μm beam diameter. Using these residual power and position instabilities, we estimate the associated contributions to infidelity in example qubit logic gates to be below 10⁻⁶ per gate.
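The fractional Allan deviation quoted above can be computed from a record of power samples; a minimal non-overlapping form on synthetic white-noise data (not the experiment's record):

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation at an averaging factor of m samples."""
    n = len(y) // m
    means = y[: n * m].reshape(n, m).mean(axis=1)   # block averages
    return float(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))

# Synthetic fractional-power record: white noise at the 1e-4 level
rng = np.random.default_rng(2)
power = 1.0 + 1e-4 * rng.normal(size=4096)
adev_1 = allan_deviation(power, 1)
adev_16 = allan_deviation(power, 16)   # white noise averages down ~ 1/sqrt(m)
```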
Remote auditing of radiotherapy facilities using optically stimulated luminescence dosimeters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lye, Jessica, E-mail: jessica.lye@arpansa.gov.au; Dunn, Leon; Kenny, John
Purpose: On 1 July 2012, the Australian Clinical Dosimetry Service (ACDS) released its Optically Stimulated Luminescent Dosimeter (OSLD) Level I audit, replacing the previous TLD based audit. The aim of this work is to present the results from this new service and the complete uncertainty analysis on which the audit tolerances are based. Methods: The audit release was preceded by a rigorous evaluation of the InLight® nanoDot OSLD system from Landauer (Landauer, Inc., Glenwood, IL). Energy dependence, signal fading from multiple irradiations, batch variation, reader variation, and dose response factors were identified and quantified for each individual OSLD. The detectors are mailed to the facility in small PMMA blocks, based on the design of the existing Radiological Physics Centre audit. Modeling and measurement were used to determine a factor that could convert the dose measured in the PMMA block to dose in water for the facility's reference conditions. This factor is dependent on the beam spectrum; the TPR(sub 20,10) was used as the beam quality index to determine the specific block factor for a beam being audited. The audit tolerance was defined using a rigorous uncertainty calculation, and the audit outcome is determined using a scientifically based two-tiered action-level approach. Audit outcomes within two standard deviations were defined as Pass (Optimal Level), within three standard deviations as Pass (Action Level), and outside three standard deviations as Fail (Out of Tolerance). Results: To date the ACDS has audited 108 photon beams with TLD and 162 photon beams with OSLD. The TLD audit results had an average deviation from the ACDS of 0.0% and a standard deviation of 1.8%. The OSLD audit results had an average deviation of −0.2% and a standard deviation of 1.4%. The relative combined standard uncertainty was calculated to be 1.3% (1σ).
Pass (Optimal Level) was reduced to ≤2.6% (2σ) and Fail (Out of Tolerance) to >3.9% (3σ) for the new OSLD audit. Previously, with the TLD audit, Pass (Optimal Level) and Fail (Out of Tolerance) were set at ≤4.0% (2σ) and >6.0% (3σ). Conclusions: The calculated standard uncertainty of 1.3% at one standard deviation is consistent with the measured standard deviation of 1.4% from the audits, confirming the suitability of the uncertainty-budget-derived audit tolerances. The OSLD audit shows greater accuracy than the previous TLD audit, justifying the reduction in audit tolerances. In the TLD audit, all outcomes were Pass (Optimal Level), suggesting that the tolerances were too conservative. In the OSLD audit, 94% of audits have resulted in Pass (Optimal Level) and 6% in Pass (Action Level). All Pass (Action Level) results have been resolved with a repeat OSLD audit or an on-site ion chamber measurement.
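The two-tiered action levels described above amount to a small classifier; the thresholds below are the OSLD values quoted in the text (2.6% at two standard deviations, 3.9% at three):

```python
# Two-tiered audit classification with the OSLD tolerances quoted above.
def audit_outcome(deviation_pct, two_sigma=2.6, three_sigma=3.9):
    d = abs(deviation_pct)
    if d <= two_sigma:
        return "Pass (Optimal Level)"
    if d <= three_sigma:
        return "Pass (Action Level)"
    return "Fail (Out of Tolerance)"
```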
Validation of the CrIS fast physical NH3 retrieval with ground-based FTIR
NASA Astrophysics Data System (ADS)
Dammers, Enrico; Shephard, Mark W.; Palm, Mathias; Cady-Pereira, Karen; Capps, Shannon; Lutsch, Erik; Strong, Kim; Hannigan, James W.; Ortega, Ivan; Toon, Geoffrey C.; Stremme, Wolfgang; Grutter, Michel; Jones, Nicholas; Smale, Dan; Siemons, Jacob; Hrpcek, Kevin; Tremblay, Denis; Schaap, Martijn; Notholt, Justus; Erisman, Jan Willem
2017-07-01
Presented here is the validation of the CrIS (Cross-track Infrared Sounder) fast physical NH3 retrieval (CFPR) column and profile measurements using ground-based Fourier transform infrared (FTIR) observations. We use the total columns and profiles from seven FTIR sites in the Network for the Detection of Atmospheric Composition Change (NDACC) to validate the satellite data products. The overall FTIR and CrIS total columns have a positive correlation of r = 0.77 (N = 218) with very little bias (a slope of 1.02). Binning the comparisons by total column amount, for concentrations larger than 1.0 × 10¹⁶ molecules cm⁻², i.e. ranging from moderate to polluted conditions, the relative difference is on average ~0-5% with a standard deviation of 25-50%, which is comparable to the estimated retrieval uncertainties in both CrIS and the FTIR. For the smallest total column range (<1.0 × 10¹⁶ molecules cm⁻²), where there are a large number of observations at or near the CrIS noise level (detection limit), the absolute differences between CrIS and the FTIR total columns show a slight positive column bias. The CrIS and FTIR profile comparison differences are mostly within the range of the single-level retrieved profile values from estimated retrieval uncertainties, showing average differences in the range of ~20 to 40%. The CrIS retrievals typically show good vertical sensitivity down into the boundary layer, which typically peaks at ~850 hPa (~1.5 km). At this level the median absolute difference is 0.87 (std = ±0.08) ppb, corresponding to a median relative difference of 39% (std = ±2%). Most of the absolute and relative profile comparison differences are within the range of the estimated retrieval uncertainties. At the surface, where CrIS typically has lower sensitivity, it tends to overestimate in low-concentration conditions and underestimate in higher-concentration conditions.
Collinearity in Least-Squares Analysis
ERIC Educational Resources Information Center
de Levie, Robert
2012-01-01
How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…
Robust Confidence Interval for a Ratio of Standard Deviations
ERIC Educational Resources Information Center
Bonett, Douglas G.
2006-01-01
Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…
Standard Deviation for Small Samples
ERIC Educational Resources Information Center
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
Estimating maize water stress by standard deviation of canopy temperature in thermal imagery
USDA-ARS?s Scientific Manuscript database
A new crop water stress index using standard deviation of canopy temperature as an input was developed to monitor crop water status. In this study, thermal imagery was taken from maize under various levels of deficit irrigation treatments in different crop growing stages. The Expectation-Maximizatio...
MSTAR: an absolute metrology sensor with sub-micron accuracy for space-based applications
NASA Technical Reports Server (NTRS)
Peters, Robert D.; Lay, Oliver P.; Dubovitsky, Serge; Burger, Johan P.; Jeganathan, Muthu
2004-01-01
The MSTAR sensor is a new system for measuring absolute distance, capable of resolving the integer cycle ambiguity of standard interferometers, and making it possible to measure distance with subnanometer accuracy.
Determining absolute protein numbers by quantitative fluorescence microscopy.
Verdaasdonk, Jolien Suzanne; Lawrimore, Josh; Bloom, Kerry
2014-01-01
Biological questions are increasingly being addressed using a wide range of quantitative analytical tools to examine protein complex composition. Knowledge of the absolute number of proteins present provides insights into organization, function, and maintenance and is used in mathematical modeling of complex cellular dynamics. In this chapter, we outline and describe three microscopy-based methods for determining absolute protein numbers--fluorescence correlation spectroscopy, stepwise photobleaching, and ratiometric comparison of fluorescence intensity to known standards. In addition, we discuss the various fluorescently labeled proteins that have been used as standards for both stepwise photobleaching and ratiometric comparison analysis. A detailed procedure for determining absolute protein number by ratiometric comparison is outlined in the second half of this chapter. Counting proteins by quantitative microscopy is a relatively simple yet very powerful analytical tool that will increase our understanding of protein complex composition. © 2014 Elsevier Inc. All rights reserved.
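The ratiometric comparison method reduces to scaling a known standard's copy number by a measured intensity ratio; a minimal sketch with invented values:

```python
# Ratiometric counting: scale a standard's known copy number by the
# measured fluorescence intensity ratio. Numbers are invented for
# illustration, not taken from the chapter.
def protein_count(cluster_intensity, standard_intensity, standard_copies):
    return standard_copies * cluster_intensity / standard_intensity

copies = protein_count(cluster_intensity=3200.0,
                       standard_intensity=400.0,
                       standard_copies=32)
```

In practice both intensities must be background-subtracted and acquired under identical imaging conditions for the ratio to be meaningful.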
Chen, Shaoqiang; Zhu, Lin; Yoshita, Masahiro; Mochizuki, Toshimitsu; Kim, Changsu; Akiyama, Hidefumi; Imaizumi, Mitsuru; Kanemitsu, Yoshihiko
2015-01-01
World-wide studies on multi-junction (tandem) solar cells have led to record-breaking improvements in conversion efficiencies year after year. To obtain detailed and proper feedback for solar-cell design and fabrication, it is necessary to establish standard methods for diagnosing subcells in fabricated tandem devices. Here, we propose a potential standard method to quantify the detailed subcell properties of multi-junction solar cells based on absolute measurements of electroluminescence (EL) external quantum efficiency in addition to the conventional solar-cell external-quantum-efficiency measurements. We demonstrate that the absolute-EL-quantum-efficiency measurements provide I–V relations of individual subcells without the need for referencing measured I–V data, which is in stark contrast to previous works. Moreover, our measurements quantify the absolute rates of junction loss, non-radiative loss, radiative loss, and luminescence coupling in the subcells, which constitute the “balance sheets” of tandem solar cells. PMID:25592484
Cozzi, Bruno; De Giorgio, Andrea; Peruffo, A; Montelli, S; Panin, M; Bombardi, C; Grandis, A; Pirone, A; Zambenedetti, P; Corain, L; Granato, Alberto
2017-08-01
The architecture of the neocortex classically consists of six layers, based on cytological criteria and on the layout of intra/interlaminar connections. Yet the comparison of cortical cytoarchitectonic features across different species proves overwhelmingly difficult, due to the lack of a reliable model to analyze the connection patterns of the neuronal ensembles forming the different layers. We first defined a set of suitable morphometric cell features, obtained in digitized Nissl-stained sections of the motor cortex of the horse, chimpanzee, and crab-eating macaque. We then modeled them using a quite general non-parametric data representation model, showing that the assessment of neuronal cell complexity (i.e., how a given cell differs from its neighbors) can be performed using a suitable measure of statistical dispersion such as the mean absolute deviation (MAD). Along with the non-parametric combination and permutation methodology, application of the MAD allowed us not only to estimate, but also to compare and rank, motor cortical complexity across different species. For the instances presented in this paper, we show that the pyramidal layers of the motor cortex of the horse are far more irregular than those of primates. This feature could be related to the different organization of the motor system in monodactylous mammals.
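The MAD used here (and discussed as an effect-size basis elsewhere in this section) is simply the average absolute distance from the mean:

```python
import statistics

def mean_absolute_deviation(xs):
    """Average absolute distance from the mean."""
    m = statistics.mean(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

mad = mean_absolute_deviation([2, 4, 6, 8])  # mean 5; |deviations| 3,1,1,3
```

Unlike the standard deviation, the MAD does not square deviations, which makes it easier to interpret in the data's own units and less sensitive to extreme values.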
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
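One plausible form of such a kernel-based estimator, illustrative only and not necessarily the paper's exact tau estimator, keeps covariance terms only for pairs of sites within the kernel size tau; with tau = 0 it reduces to the classical estimator:

```python
import numpy as np

def kernel_sd_of_mean(values, coords, tau):
    """Kernel-weighted estimate of sample-mean SD: cross terms are kept
    only for pairs of 1-D sites within distance tau (illustrative form)."""
    x = np.asarray(values, dtype=float)
    c = np.asarray(coords, dtype=float)
    w = (np.abs(c[:, None] - c[None, :]) <= tau).astype(float)
    r = x - x.mean()
    n = len(x)
    var_mean = (r[:, None] * r[None, :] * w).sum() / n ** 2
    return float(np.sqrt(max(var_mean, 0.0)))

rng = np.random.default_rng(3)
x = rng.normal(size=200)
sites = np.arange(200)
classical = x.std() / np.sqrt(200)       # population-variance version
tau0 = kernel_sd_of_mean(x, sites, 0)    # diagonal only: classical estimate
```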
Pernik, Meribeth
1987-01-01
The sensitivity of a multilayer finite-difference regional flow model was tested by changing the calibrated values of five parameters in the steady-state model and one in the transient-state model. The parameters varied under steady-state conditions were those that had been routinely adjusted during calibration to match pre-development potentiometric surfaces and elements of the water budget. The tested steady-state parameters were recharge, riverbed conductance, transmissivity, confining unit leakance, and boundary location. In the transient-state model, the storage coefficient was adjusted. The sensitivity of the model to changes in the calibrated values of these parameters was evaluated with respect to the simulated response of net base flow to the rivers and the mean value of the absolute head residual. To provide a standard measure of sensitivity from one parameter to another, the standard deviation of the absolute head residual was calculated. The steady-state model was shown to be most sensitive to changes in rates of recharge. When the recharge rate was held constant, the model was more sensitive to variations in transmissivity. Near the rivers, riverbed conductance becomes the dominant parameter controlling heads. Changes in confining unit leakance had little effect on simulated base flow but greatly affected head residuals. The model was relatively insensitive to changes in the location of no-flow boundaries and to moderate changes in the altitude of constant-head boundaries. The storage coefficient was adjusted under transient conditions to illustrate the model's sensitivity to changes in storativity. The model is less sensitive to an increase in the storage coefficient than to a decrease. As the storage coefficient decreased, aquifer drawdown increased and base flow decreased; the opposite response occurred when the storage coefficient was increased.
(Author's abstract)
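The sensitivity metrics named in the abstract are simple residual statistics; a minimal sketch with illustrative head values (not model output — the function name is ours):

```python
import statistics

def head_residual_stats(observed, simulated):
    """Mean and standard deviation of the absolute head residuals,
    the two sensitivity measures described in the abstract."""
    abs_res = [abs(o - s) for o, s in zip(observed, simulated)]
    return statistics.mean(abs_res), statistics.stdev(abs_res)
```

Comparing these two numbers across parameter perturbations gives a common yardstick for ranking parameter sensitivity.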
Responsiveness of a Brief Measure of Lung Cancer Screening Knowledge.
Housten, Ashley J; Lowenstein, Lisa M; Leal, Viola B; Volk, Robert J
2016-12-14
Our aim was to examine the responsiveness of a brief lung cancer screening knowledge measure (LCS-12). Eligible participants were aged 55-80 years, were current smokers or had quit within the past 15 years, and were English speaking. They completed a baseline pretest survey, viewed a lung cancer screening video-based patient decision aid, and then filled out a follow-up posttest survey. We performed a paired samples t-test, calculated the effect size, and calculated the absolute and relative percent improvement for each item. Participants (n = 30) were primarily White (63%) with less than a college degree (63%), and half were female (50%). Mean age was 61.5 years (standard deviation [SD] = 4.67) and average smoking history was 30.4 pack-years (range = 4.6-90.0). The mean score on the 12-item measure increased from 47.3% correct on the pretest to 80.3% correct on the posttest (mean pretest score = 5.67 vs. mean posttest score = 9.63; mean score difference = 3.97, SD = 2.87, 95% CI = 2.90, 5.04). Total knowledge scores improved significantly and were responsive to the decision aid intervention (paired samples t-test = 7.57, p < .001; Cohen's effect size = 1.59; standardized response mean [SRM] = 1.38). All individual items were responsive, but two items had lower absolute responsiveness than the others (item 8: "Without screening, is lung cancer often found at a later stage when cure is less likely?" pretest correct = 83.3% vs. posttest = 96.7%, responsiveness = 13.4%; and item 10: "Can a CT scan find lung disease that is not cancer?" pretest correct = 80.0% vs. posttest = 93.3%, responsiveness = 13.3%). The LCS-12 knowledge measure may be a useful outcome measure of shared decision making for lung cancer screening.
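The standardized response mean reported above is the mean of the paired change scores divided by their standard deviation (the reported 3.97 / 2.87 ≈ 1.38); a minimal sketch (the function name is ours, and note that the abstract's Cohen's effect size uses a different denominator):

```python
import statistics

def standardized_response_mean(pre, post):
    """SRM = mean of paired change scores / SD of the change scores."""
    changes = [b - a for a, b in zip(pre, post)]
    return statistics.mean(changes) / statistics.stdev(changes)
```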
NASA Astrophysics Data System (ADS)
Baynham, Charles F. A.; Godun, Rachel M.; Jones, Jonathan M.; King, Steven A.; Nisbet-Jones, Peter B. R.; Baynes, Fred; Rolland, Antoine; Baird, Patrick E. G.; Bongs, Kai; Gill, Patrick; Margolis, Helen S.
2018-03-01
The highly forbidden ? electric octupole transition in ? is a potential candidate for a redefinition of the SI second. We present a measurement of the absolute frequency of this optical transition, performed using a frequency link to International Atomic Time to provide traceability to the SI second. The ? optical frequency standard was operated for 76% of a 25-day period, with the absolute frequency measured to be 642 121 496 772 645.14(26) Hz. The fractional uncertainty of ? is comparable to that of the best previously reported measurement, which was made by a direct comparison to local caesium primary frequency standards.
YALE NATURAL RADIOCARBON MEASUREMENTS. PART VI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuiver, M.; Deevey, E.S.
1961-01-01
Most of the measurements made since publication of Yale V are included; some measurements, such as a series collected in Greenland, are withheld pending additional information or field work that will make better interpretations possible. In addition to radiocarbon dates of geologic and/or archaeologic interest, recent assays are given of C-14 in lake waters and other lacustrine materials, now normalized for C-13 content. The newly accepted convention is followed in expressing normalized C-14 values as DELTA = delta C-14 - (2 delta C-13 + 50)(1 + delta C-14/1000), where DELTA is the per mil deviation of the C-14 of the sample from any contemporary standard (whether organic or a carbonate) after correction of sample and/or standard for real age, for the Suess effect, for normal isotopic fractionation, and for deviations of the C-14 content of the age- and pollution-corrected 19th-century wood standard from that of 95% of the NBS oxalic acid standard; delta C-14 is the measured deviation from 95% of the NBS standard, and delta C-13 is the deviation from the NBS limestone standard, both in per mil. These assays are variously affected by artificial C-14 resulting from nuclear tests. (auth)
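The normalization convention can be coded directly from the formula as quoted (values below are illustrative, not from the Yale series):

```python
def delta_c14(d14c, d13c):
    """Normalized DELTA (per mil), following the convention quoted above:
    DELTA = delta_C14 - (2*delta_C13 + 50) * (1 + delta_C14/1000)."""
    return d14c - (2.0 * d13c + 50.0) * (1.0 + d14c / 1000.0)
```

Note that at delta C-13 = -25 per mil the correction term vanishes, which is the fractionation value the convention normalizes toward.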
NASA Astrophysics Data System (ADS)
Wu, Xiaoru; Gao, Yingyu; Ban, Chunlan; Huang, Qiang
2016-09-01
In this paper the results of a vapor-liquid equilibria study at 100 kPa are presented for two binary systems: α-phenylethylamine(1) + toluene(2) and α-phenylethylamine(1) + cyclohexane(2). The binary VLE data of the two systems were correlated with the Wilson, NRTL, and UNIQUAC models. For each binary system the deviations between the results of the correlations and the experimental data were calculated. For both binary systems the average relative deviations in temperature for the three models were lower than 0.99%. The average absolute deviations in vapor-phase composition (mole fractions) and in temperature T were lower than 0.0271 and 1.93 K, respectively. Thermodynamic consistency was tested for all vapor-liquid equilibrium data by the Herington method. The values calculated with the Wilson and NRTL equations satisfied the thermodynamic consistency test for both systems, while the values calculated with the UNIQUAC equation did not.
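The model comparison above rests on a simple deviation statistic; a sketch of the average absolute deviation (AAD) with an illustrative data set (the function name is ours):

```python
def average_absolute_deviation(experimental, calculated):
    """Mean |calculated - experimental|, the AAD used to compare how well
    each activity-coefficient model reproduces the measured VLE data."""
    return sum(abs(c - e) for e, c in zip(experimental, calculated)) / len(experimental)
```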
Yanagihara, Nobuyuki; Seki, Meikan; Nakano, Masahiro; Hachisuga, Toru; Goto, Yukio
2014-06-01
Disturbance of autonomic nervous activity has been thought to play a role in the climacteric symptoms of postmenopausal women. This study was therefore designed to investigate the relationship between autonomic nervous activity and climacteric symptoms in postmenopausal Japanese women. The autonomic nervous activity of 40 Japanese women with climacteric symptoms and 40 Japanese women without climacteric symptoms was measured by power spectral analysis of heart rate variability using a standard hexagonal radar chart. The scores for climacteric symptoms were determined using the simplified menopausal index. Sympathetic excitability and irritability, as well as the standard deviation of mean R-R intervals in supine position, were significantly (P < 0.01, 0.05, and 0.001, respectively) decreased in women with climacteric symptoms. There was a negative correlation between the standard deviation of mean R-R intervals in supine position and the simplified menopausal index score. The lack of control for potential confounding variables was a limitation of this study. In climacteric women, the standard deviation of mean R-R intervals in supine position is negatively correlated with the simplified menopausal index score.
On the Photometric Calibration of FORS2 and the Sloan Digital Sky Survey
NASA Astrophysics Data System (ADS)
Bramich, D.; Moehler, S.; Coccato, L.; Freudling, W.; Garcia-Dabó, C. E.; Müller, P.; Saviane, I.
2012-09-01
An accurate absolute calibration of photometric data to place them on a standard magnitude scale is very important for many science goals. Absolute calibration requires the observation of photometric standard stars and analysis of the observations with an appropriate photometric model including all relevant effects. In the FORS Absolute Photometry (FAP) project, we have developed a standard star observing strategy and modelling procedure that enables calibration of science target photometry to better than 3% accuracy on photometrically stable nights, given sufficient signal-to-noise. In applying this photometric modelling to large photometric databases, we have investigated the Sloan Digital Sky Survey (SDSS) and found systematic trends in the published photometric data. The amplitudes of these trends are similar to the reported typical precision (~1% and ~2%) of the SDSS photometry in the griz- and u-bands, respectively.
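A conventional single-band photometric model of the kind used for absolute calibration combines a zero point, atmospheric extinction, and a color term; a minimal sketch (the coefficients below are illustrative, not FAP values):

```python
def calibrated_mag(m_inst, zero_point, k_ext, airmass, color_term, color):
    """Standard-system magnitude from an instrumental magnitude:
    m_std = m_inst + ZP - k * X + c * (color index).
    All coefficient values passed in are assumptions for illustration."""
    return m_inst + zero_point - k_ext * airmass + color_term * color
```

The coefficients (ZP, k, c) are what observations of standard stars constrain on a given night.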
Kito, Keiji; Okada, Mitsuhiro; Ishibashi, Yuko; Okada, Satoshi; Ito, Takashi
2016-05-01
The accurate and precise absolute abundance of proteins can be determined using mass spectrometry by spiking the sample with stable isotope-labeled standards. In this study, we developed a strategy of hierarchical use of peptide-concatenated standards (PCSs) to quantify more proteins over a wider dynamic range. Multiple primary PCSs were used for quantification of many target proteins. Unique "ID-tag peptides" were introduced into individual primary PCSs, allowing us to monitor the exact amounts of individual PCSs using a "secondary PCS" in which all "ID-tag peptides" were concatenated. Furthermore, we varied the copy number of the "ID-tag peptide" in each PCS according to a range of expression levels of target proteins. This strategy accomplished absolute quantification over a wider range than that of the measured ratios. The quantified abundance of budding yeast proteins showed a high reproducibility for replicate analyses and similar copy numbers per cell for ribosomal proteins, demonstrating the accuracy and precision of this strategy. A comparison with the absolute abundance of transcripts clearly indicated different post-transcriptional regulation of expression for specific functional groups. Thus, the approach presented here is a faithful method for the absolute quantification of proteomes and provides insights into biological mechanisms, including the regulation of expressed protein abundance. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
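The hierarchical bookkeeping described above can be sketched as two ratio-to-amount steps: the secondary PCS fixes the amount of each primary PCS via its ID-tag peptide, and the primary PCS in turn fixes each target protein (function names and numbers are illustrative, not from the study):

```python
def pcs_amount_from_id_tag(idtag_to_secondary_ratio, secondary_amount):
    """Amount of a primary PCS, monitored via its ID-tag peptide
    measured against the spiked secondary PCS."""
    return idtag_to_secondary_ratio * secondary_amount

def absolute_amount(target_to_pcs_ratio, pcs_amount):
    """Absolute amount of a target protein from its measured
    target/primary-PCS peptide ratio."""
    return target_to_pcs_ratio * pcs_amount
```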
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sleiman, Mohamad; Chen, Sharon; Gilbert, Haley E.
A laboratory method to simulate natural exposure of roofing materials has been reported in a companion article. In the current article, we describe the results of an international, nine-participant interlaboratory study (ILS) conducted in accordance with ASTM Standard E691-09 to establish the precision and reproducibility of this protocol. The accelerated soiling and weathering method was applied four times by each laboratory to replicate coupons of 12 products representing a wide variety of roofing categories (single-ply membrane, factory-applied coating (on metal), bare metal, field-applied coating, asphalt shingle, modified-bitumen cap sheet, clay tile, and concrete tile). Participants reported initial and laboratory-aged values of solar reflectance and thermal emittance. Measured solar reflectances were consistent within and across eight of the nine participating laboratories. Measured thermal emittances reported by six participants exhibited comparable consistency. For solar reflectance, the accelerated aging method is both repeatable and reproducible within an acceptable range of standard deviations: the repeatability standard deviation sr ranged from 0.008 to 0.015 (relative standard deviation of 1.2–2.1%) and the reproducibility standard deviation sR ranged from 0.022 to 0.036 (relative standard deviation of 3.2–5.8%). The ILS confirmed that the accelerated aging method can be reproduced by multiple independent laboratories with acceptable precision. In conclusion, this study supports the adoption of the accelerated aging practice to speed the evaluation and performance rating of new cool roofing materials.
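The repeatability and reproducibility statistics quoted above can be sketched from per-laboratory replicates, loosely following the ASTM E691 precision formulas (this is our simplification assuming equal replicate counts, not the standard's full consistency procedure):

```python
import statistics

def e691_precision(lab_results):
    """Repeatability (sr) and reproducibility (sR) standard deviations.
    lab_results: list of per-laboratory replicate lists, equal lengths assumed.
    sr^2 = mean within-lab variance; sR^2 = var(lab means) + sr^2 * (n-1)/n."""
    n = len(lab_results[0])
    cell_means = [statistics.mean(lab) for lab in lab_results]
    sr2 = statistics.mean([statistics.variance(lab) for lab in lab_results])
    sR2 = max(statistics.variance(cell_means) + sr2 * (n - 1) / n, sr2)
    return sr2 ** 0.5, sR2 ** 0.5
```

By construction sR is never reported smaller than sr, matching the convention that reproducibility includes within-lab scatter.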
Zhang, Xiangrong; Zhang, Dan; Xu, Jinghua; Gu, Jingkai; Zhao, Yuqing
2007-10-15
A sensitive and specific liquid chromatography/tandem mass spectrometry (LC/MS/MS) method was developed for the investigation of the pharmacokinetics of 20(R)-dammarane-3beta,12beta,20,25-tetrol (25-OH-PPD) in rat. Ginsenoside Rh(2) was employed as an internal standard. The plasma samples were pretreated by liquid-liquid extraction and analyzed using LC/MS/MS with an electrospray ionization interface. The mobile phase consisted of methanol-acetonitrile-10 mmol/l aqueous ammonium acetate (42.5:42.5:15, v:v:v), pumped at 0.4 ml/min. The analytical column (50 mm x 2.1 mm i.d.) was packed with Venusil XBP C8 material (3.5 microm). The standard curve was linear from 10 to 3000 ng/ml. The assay was specific, accurate (accuracy between -1.19 and 2.57% for all quality control samples), precise, and reproducible (within- and between-day precisions, measured as relative standard deviation, were <5% and <7%, respectively). 25-OH-PPD in rat plasma was stable over three freeze-thaw cycles and at ambient temperature for 6 h. The method had a lower limit of quantitation of 10 ng/ml, which offered satisfactory sensitivity for the determination of 25-OH-PPD in plasma. This quantitation method was successfully applied to pharmacokinetic studies of 25-OH-PPD after both oral and intravenous administration to rats; the absolute bioavailability was 64.8+/-14.3%.
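Absolute bioavailability is conventionally the dose-normalized ratio of oral to intravenous exposure; a minimal sketch (the AUC and dose numbers below are illustrative, chosen only to reproduce the quoted 64.8%, not taken from the study):

```python
def absolute_bioavailability(auc_oral, dose_oral, auc_iv, dose_iv):
    """F (%) = (AUC_oral / Dose_oral) / (AUC_iv / Dose_iv) * 100,
    the standard dose-normalized definition."""
    return (auc_oral / dose_oral) / (auc_iv / dose_iv) * 100.0
```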
Simulation and Analysis of Topographic Effect on Land Surface Albedo over Mountainous Areas
NASA Astrophysics Data System (ADS)
Hao, D.; Wen, J.; Xiao, Q.
2017-12-01
Land surface albedo is one of the significant geophysical variables affecting the Earth's climate and controlling the surface radiation budget. Topography leads to the formation of shadows and the redistribution of incident radiation, which complicates the modeling and estimation of land surface albedo. Some studies show that neglecting the topographic effect may lead to significant bias in estimating the land surface albedo of sloping terrain. However, for composite sloping terrain, the topographic effects on the albedo remain unclear. Accurately estimating the sub-topographic effect on the land surface albedo over composite sloping terrain presents a challenge for remote sensing modeling and applications. In our study, we focus on the development of a simplified estimation method for land surface albedo, including black-sky albedo (BSA) and white-sky albedo (WSA), of composite sloping terrain at a kilometer scale based on a fine-scale DEM (30 m), and we quantitatively investigate the topographic effects on the albedo. The albedo is affected by various factors such as solar zenith angle (SZA), solar azimuth angle (SAA), shadows, terrain occlusion, and the slope and aspect distribution of the micro-slopes. When SZA is 30°, the absolute and relative deviations between the BSA of flat terrain and that of rugged terrain reach 0.12 and 50%, respectively. When the mean slope of the terrain is 30.63° and SZA = 30°, the absolute deviation of BSA caused by SAA can reach 0.04. The maximal absolute and relative deviations between the WSA of flat terrain and that of rugged terrain reach 0.08 and 50%, respectively. These results demonstrate that the topographic effect has to be taken into account in albedo estimation.
Gödel, Tarski, Turing and the Conundrum of Free Will
NASA Astrophysics Data System (ADS)
Nayakar, Chetan S. Mandayam; Srikanth, R.
2014-07-01
The problem of defining and locating free will (FW) in physics is studied. On the basis of logical paradoxes, we argue that FW has a metatheoretic character, like the concept of truth in Tarski's undefinability theorem. Free will exists relative to a base theory if there is freedom to deviate from the deterministic or indeterministic dynamics in the theory, with the deviations caused by parameters (representing will) in the meta-theory. By contrast, determinism and indeterminism do not require meta-theoretic considerations in their formalization, making FW a fundamentally new causal primitive. FW exists relative to the meta-theory if there is freedom for deviation, due to higher-order causes. Absolute free will, which corresponds to our intuitive introspective notion of free will, exists if this meta-theoretic hierarchy is infinite. We argue that this hierarchy corresponds to higher levels of uncomputability. In other words, at any finitely high order in the hierarchy, there are uncomputable deviations from the law at that order. Applied to the human condition, the hierarchy corresponds to deeper levels of the subconscious or unconscious mind. Possible ramifications of our model for physics, neuroscience and artificial intelligence (AI) are briefly considered.
Selection and Classification Using a Forecast Applicant Pool.
ERIC Educational Resources Information Center
Hendrix, William H.
The document presents a forecast model of the future Air Force applicant pool. By forecasting applicants' quality (means and standard deviations of aptitude scores) and quantity (total number of applicants), a potential enlistee could be compared to the forecasted pool. The data used to develop the model consisted of means, standard deviation, and…
NASA Technical Reports Server (NTRS)
Herrman, B. D.; Uman, M. A.; Brantley, R. D.; Krider, E. P.
1976-01-01
The principle of operation of a wideband crossed-loop magnetic-field direction finder is studied by comparing the bearing determined from the NS and EW magnetic fields at various times up to 155 microsec after return stroke initiation with the TV-determined lightning channel base direction. For 40 lightning strokes in the 3 to 12 km range, the difference between the bearings found from magnetic fields sampled at times between 1 and 10 microsec and the TV channel-base data has a standard deviation of 3-4 deg. Included in this standard deviation is a 2-3 deg measurement error. For fields sampled at progressively later times, both the mean and the standard deviation of the difference between the direction-finder bearing and the TV bearing increase. Near 150 microsec, means are about 35 deg and standard deviations about 60 deg. The physical reasons for the late-time inaccuracies in the wideband direction finder and the occurrence of these effects in narrow-band VLF direction finders are considered.
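The crossed-loop principle reduces to an arctangent of the two orthogonal magnetic-field samples; a minimal sketch (sign and calibration conventions vary by instrument, and the function name is ours):

```python
import math

def bearing_deg(b_ns, b_ew):
    """Bearing in degrees east of north from simultaneous NS and EW
    magnetic-field samples, the relationship a wideband crossed-loop
    direction finder exploits."""
    return math.degrees(math.atan2(b_ew, b_ns)) % 360.0
```

Sampling the fields early (1-10 microseconds after return stroke initiation) gives the small bearing errors quoted above; later samples are contaminated by channel geometry and propagation effects.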
Wavelength selection method with standard deviation: application to pulse oximetry.
Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija
2011-07-01
Near-infrared spectroscopy provides useful biological information after the radiation has penetrated the tissue, within the therapeutic window. One significant shortcoming of current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to his health status, which, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that allows low sensitivity to noise. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of standard deviation that we propose as a figure-of-merit in the presence of the noise introduced by the living subject. Even in the presence of diverse sources of noise, we identify four wavelength domains whose standard deviation is minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.
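The standard-deviation map reduces to a per-wavelength spread across repeated spectra, from which the least noise-sensitive wavelengths are picked; a minimal sketch (function names are ours):

```python
import statistics

def std_map(spectra):
    """Per-wavelength standard deviation across repeated spectra
    (rows = scans in time, columns = wavelength samples)."""
    return [statistics.stdev(col) for col in zip(*spectra)]

def least_noisy_wavelengths(spectra, k=2):
    """Indices of the k wavelength samples with the smallest temporal variation."""
    m = std_map(spectra)
    return sorted(range(len(m)), key=m.__getitem__)[:k]
```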
How random is a random vector?
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2015-12-01
Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation" (the square root of the generalized variance) is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index" (a derivative of the Wilks standard deviation) is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams": tangible planar visualizations that answer the question: How random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.
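Taking the abstract's definitions at face value, the Wilks standard deviation is the square root of the determinant of the covariance matrix; a minimal sketch (the function name is ours, and the paper's full framework is richer than this one-liner):

```python
import numpy as np

def wilks_std(samples):
    """Square root of the generalized variance (determinant of the sample
    covariance matrix) for row-wise observations of a random vector."""
    cov = np.cov(np.asarray(samples), rowvar=False)
    return float(np.sqrt(np.linalg.det(cov)))
```

For uncorrelated components this is just the product of the per-component standard deviations, which is what makes it a natural multivariate analogue.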
Estimation of Tooth Size Discrepancies among Different Malocclusion Groups.
Hasija, Narender; Bala, Madhu; Goyal, Virender
2014-05-01
Regards and Tribute: The late Dr Narender Hasija was a mentor and visionary in the light of knowledge and experience. We pay our regards with deepest gratitude; may the departed soul rest in peace. Bolton's ratios help in estimating overbite and overjet relationships, the effects of contemplated extractions on posterior occlusion, incisor relationships, and the identification of occlusal misfit produced by tooth size discrepancies. The aim was to determine any difference in tooth size discrepancy in the anterior as well as the overall ratio in different malocclusions and to compare them with Bolton's study. After measuring the teeth of all 100 patients, Bolton's analysis was performed. Results were compared with Bolton's means and standard deviations and were also subjected to statistical analysis. The results show that the means and standard deviations of ideal occlusion cases are comparable with those of Bolton; however, when the means and standard deviations of the malocclusion groups are compared with those of Bolton, the standard deviations are higher, though the means are comparable. How to cite this article: Hasija N, Bala M, Goyal V. Estimation of Tooth Size Discrepancies among Different Malocclusion Groups. Int J Clin Pediatr Dent 2014;7(2):82-85.
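Bolton's ratios are simple sums-of-widths percentages; a sketch using Bolton's commonly quoted means (approximately 77.2% anterior and 91.3% overall) only as reference points, with illustrative tooth-width sums:

```python
def bolton_ratios(mand_sum_6, max_sum_6, mand_sum_12, max_sum_12):
    """Bolton anterior ratio (mandibular 6 / maxillary 6 * 100) and
    overall ratio (mandibular 12 / maxillary 12 * 100), in percent."""
    anterior = mand_sum_6 / max_sum_6 * 100.0
    overall = mand_sum_12 / max_sum_12 * 100.0
    return anterior, overall
```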
Association of auricular pressing and heart rate variability in pre-exam anxiety students.
Wu, Wocao; Chen, Junqi; Zhen, Erchuan; Huang, Huanlin; Zhang, Pei; Wang, Jiao; Ou, Yingyi; Huang, Yong
2013-03-25
A total of 30 students scoring between 12 and 20 on the Test Anxiety Scale who had been exhibiting an anxious state > 24 hours, and 30 normal control students were recruited. Indices of heart rate variability were recorded using an Actiheart electrocardiogram recorder at 10 minutes before auricular pressing, in the first half of stimulation and in the second half of stimulation. The results revealed that the standard deviation of all normal to normal intervals and the root mean square of standard deviation of normal to normal intervals were significantly increased after stimulation. The heart rate variability triangular index, very-low-frequency power, low-frequency power, and the ratio of low-frequency to high-frequency power were increased to different degrees after stimulation. Compared with normal controls, the root mean square of standard deviation of normal to normal intervals was significantly increased in anxious students following auricular pressing. These results indicated that auricular pressing can elevate heart rate variability, especially the root mean square of standard deviation of normal to normal intervals in students with pre-exam anxiety.
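The two time-domain indices above can be computed with the conventional definitions, assuming the abstract's "root mean square" index corresponds to the standard RMSSD (root mean square of successive NN-interval differences):

```python
import statistics

def sdnn(nn_ms):
    """SDNN: standard deviation of all NN (normal-to-normal) intervals, ms."""
    return statistics.stdev(nn_ms)

def rmssd(nn_ms):
    """RMSSD: root mean square of successive NN-interval differences, ms."""
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5
```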
Offshore fatigue design turbulence
NASA Astrophysics Data System (ADS)
Larsen, Gunner C.
2001-07-01
Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
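Fitting a logarithmic Gaussian (lognormal) distribution to turbulence standard deviations conditioned on mean wind speed amounts to estimating the mean and spread of ln(sigma); a minimal sketch (function name ours, and the article's full procedure additionally folds these parameters through a fatigue load model):

```python
import math
import statistics

def lognormal_fit(sigmas):
    """Parameters (mu, s) of a logarithmic Gaussian distribution of
    turbulence standard deviations: mean and std of ln(sigma)."""
    logs = [math.log(s) for s in sigmas]
    return statistics.mean(logs), statistics.stdev(logs)
```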
Estimating extreme stream temperatures by the standard deviate method
NASA Astrophysics Data System (ADS)
Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz
2006-02-01
It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (the standard deviate). Various KE values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE = 1.0 (range 0.5-1.5) and an error in projected high air temperature dTa = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs = 0.8 °C.
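The standard deviate method and the quoted error budget can be written down directly; the per-unit sensitivities (about 0.5 °C per unit of KE and 0.16 °C per °C of air temperature) are taken from the abstract, and linear addition reproduces the quoted total of roughly 0.8 °C:

```python
def extreme_stream_temp(mean_t, std_t, ke):
    """Standard deviate method: T_extreme = mean + KE * std of the
    partial maximum stream-temperature series."""
    return mean_t + ke * std_t

def projected_error(d_ke, d_ta, temp_per_ke=0.5, temp_per_deg_air=0.16):
    """Linear error propagation with the abstract's typical sensitivities."""
    return temp_per_ke * d_ke + temp_per_deg_air * d_ta
```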
NASA Technical Reports Server (NTRS)
Rhoads, James E.; Rigby, Jane Rebecca; Malhotra, Sangeeta; Allam, Sahar; Carilli, Chris; Combes, Francoise; Finkelstein, Keely; Finkelstein, Steven; Frye, Brenda; Gerin, Maryvonne;
2014-01-01
We report on two regularly rotating galaxies at redshift z ≈ 2, using high-resolution spectra of the bright [C II] 158 μm emission line from the HIFI instrument on the Herschel Space Observatory. Both SDSS J090122.37+181432.3 ("S0901") and SDSS J120602.09+514229.5 ("the Clone") are strongly lensed and show the double-horned line profile that is typical of rotating gas disks. Using a parametric disk model to fit the emission line profiles, we find that S0901 has a rotation speed of v sin i ≈ 120 +/- 7 km s^-1 and a gas velocity dispersion of σ_g < 23 km s^-1 (1σ). The best-fitting model for the Clone is a rotationally supported disk having v sin i ≈ 79 +/- 11 km s^-1 and σ_g < 4 km s^-1 (1σ). However, the Clone is also consistent with a family of dispersion-dominated models having σ_g = 92 +/- 20 km s^-1. Our results showcase the potential of the [C II] line as a kinematic probe of high-redshift galaxy dynamics: [C II] is bright, accessible to heterodyne receivers with exquisite velocity resolution, and traces dense star-forming interstellar gas. Future [C II] line observations with ALMA would offer the further advantage of spatial resolution, allowing a clearer separation between rotation and velocity dispersion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, D; Meier, J; Mawlawi, O
Purpose: Use a NEMA-IEC PET phantom to assess the robustness of FDG-PET-based radiomics features to changes in reconstruction parameters across different scanners. Methods: We scanned a NEMA-IEC PET phantom on 3 different scanners (GE Discovery VCT, GE Discovery 710, and Siemens mCT) using an FDG source-to-background ratio of 10:1. Images were retrospectively reconstructed using different iterations (2–3), subsets (21–24), Gaussian filter widths (2, 4, 6 mm), and matrix sizes (128, 192, 256). The 710 and mCT used time-of-flight and point-spread-functions in reconstruction. The axial image through the center of the 6 active spheres was used for analysis. A region-of-interest containing all spheres was able to simulate a heterogeneous lesion due to partial volume effects. The maximum voxel deviations from all retrospectively reconstructed images (18 per scanner) were compared to our standard clinical protocol. PET images from 195 non-small cell lung cancer patients were used to compare feature variation. The ratio of a feature's standard deviation from the patient cohort versus the phantom images was calculated to assess feature robustness. Results: Across all images, the percentage of voxels differing by <1 SUV and <2 SUV ranged from 61–92% and 88–99%, respectively. Voxel-to-voxel similarity decreased when using higher resolution image matrices (192/256 versus 128) and was comparable across scanners. Taking the ratio of patient and phantom feature standard deviations was able to identify features that were not robust to changes in reconstruction parameters (e.g., co-occurrence correlation). Metrics found to be reasonably robust (standard deviation ratios > 3) included routinely used SUV metrics (e.g., SUVmean and SUVmax) as well as some radiomics features (e.g., co-occurrence contrast, co-occurrence energy, standard deviation, and uniformity). Similar standard deviation ratios were observed across scanners.
Conclusions: Our method enabled a comparison of feature variability across scanners and was able to identify features that were not robust to changes in reconstruction parameters.
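The screening statistic is a simple ratio of standard deviations; a minimal sketch (the >3 robustness threshold is the authors'; the function name and data are ours):

```python
import statistics

def robustness_ratio(patient_values, phantom_values):
    """Ratio of patient-cohort to phantom feature standard deviation;
    large ratios mean biological variation dominates reconstruction noise."""
    return statistics.stdev(patient_values) / statistics.stdev(phantom_values)
```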
NASA Astrophysics Data System (ADS)
Stier, P.; Schutgens, N. A. J.; Bian, H.; Boucher, O.; Chin, M.; Ghan, S.; Huneeus, N.; Kinne, S.; Lin, G.; Myhre, G.; Penner, J. E.; Randles, C.; Samset, B.; Schulz, M.; Yu, H.; Zhou, C.
2012-09-01
Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in model components relevant for forcing calculations, and the associated "host-model uncertainties" are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties in aerosol forcing experiments through prescription of identical aerosol radiative properties in nine participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is -4.51 W m-2 and the inter-model standard deviation is 0.70 W m-2, corresponding to a relative standard deviation of 15%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.26 W m-2, and the standard deviation increases to 1.21 W m-2, corresponding to a significant relative standard deviation of 96%. However, the top-of-atmosphere forcing variability owing to absorption is low, with relative standard deviations of 9% (clear-sky) and 12% (all-sky). Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host model uncertainties could explain about half of the overall sulfate forcing diversity of 0.13 W m-2 in the AeroCom Direct Radiative Effect experiment. Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks or areas with poorly constrained surface albedos, such as sea ice.
Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that require further attention.
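The relative standard deviations quoted in this abstract are simply the inter-model standard deviation divided by the absolute value of the multi-model mean. A minimal sketch using the reported values:

```python
def relative_sd(sd, mean):
    """Relative standard deviation (coefficient of variation), as a fraction."""
    return sd / abs(mean)

# Global-mean all-sky TOA forcing values reported in the abstract:
scattering = relative_sd(0.70, -4.51)  # purely scattering case, ~15%
absorbing = relative_sd(1.21, 1.26)    # partially absorbing case, ~96%
```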
40 CFR 63.7751 - What reports must I submit and when?
Code of Federal Regulations, 2010 CFR
2010-07-01
... deviations from any emissions limitations (including operating limit), work practice standards, or operation and maintenance requirements, a statement that there were no deviations from the emissions limitations...-of-control during the reporting period. (7) For each deviation from an emissions limitation...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merchant, Thomas E., E-mail: thomas.merchant@stjude.or; Chitti, Ramana M.; Li Chenghong
Purpose: To identify risk factors associated with incomplete neurological recovery in pediatric patients with infratentorial ependymoma treated with postoperative conformal radiation therapy (CRT). Methods: The study included 68 patients (median age ± standard deviation of 2.6 ± 3.8 years) who were followed for 5 years after receiving CRT (54-59.4 Gy) and were assessed for function of cranial nerves V to VII and IX to XII, motor weakness, and dysmetria. The mean (± standard deviation) brainstem dose was 5,487 (±464) cGy. Patients were divided into four groups representing those with normal baseline and follow-up, those with abnormal baseline and full recovery, those with abnormal baseline and partial or no recovery, and those with progressive deficits at 12 (n = 62 patients), 24 (n = 57 patients), and 60 (n = 50 patients) months. Grouping was correlated with clinical and treatment factors. Results: Risk factors (overall risk [OR], p value) associated with incomplete recovery included gender (male vs. female, OR = 3.97, p = 0.036) and gross tumor volume (GTV) (OR/ml = 1.23, p = 0.005) at 12 months, the number of resections (>1 vs. 1; OR = 23.7, p = 0.003) and patient age (OR/year = 0.77, p = 0.029) at 24 months, and cerebrospinal fluid (CSF) shunting (Yes vs. No; OR = 21.9, p = 0.001) and GTV volume (OR/ml = 1.18, p = 0.008) at 60 months. An increase in GTV correlated with an increase in the number of resections (p = 0.001) and CSF shunting (p = 0.035); the number of resections correlated with CSF shunting (p < 0.0001), and male patients were more likely to undergo multiple tumor resections (p = 0.003). Age correlated with brainstem volume (p < 0.0001). There were no differences in outcome based on the absolute or relative volume of the brainstem that received more than 54 Gy. Conclusions: Incomplete recovery of brainstem function after CRT for infratentorial ependymoma is related to surgical morbidity and the volume and the extent of tumor.
Online pretreatment verification of high-dose rate brachytherapy using an imaging panel
NASA Astrophysics Data System (ADS)
Fonseca, Gabriel P.; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R.; Lutgens, Ludy; Vanneste, Ben G. L.; Voncken, Robert; Van Limbergen, Evert J.; Reniers, Brigitte; Verhaegen, Frank
2017-07-01
Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. A detailed modelling of the IP was only possible with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) ¹⁹²Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector so that 3D Cartesian coordinates of the dwell positions can be obtained using a single panel. The IP acquires images at 7.14 frames per second to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to verify the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distance between the source and the panel (z-coordinate) has a standard deviation of up to 0.16 cm and a maximum absolute error of ≈0.6 cm if the signal is close to the sensitivity limit of the panel. The average response of the panel is very linear with Sk. Therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s, which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy; however, they provide considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacFadden, Derek; Zhang Beibei; Brock, Kristy K.
Purpose: Increasing the magnetic resonance imaging (MRI) field strength can improve image resolution and quality, but concerns remain regarding the influence on geometric fidelity. The objectives of the present study were to spatially investigate the effect of 3-Tesla (3T) MRI on clinical target localization for stereotactic radiosurgery. Methods and Materials: A total of 39 patients were enrolled in a research ethics board-approved prospective clinical trial. Imaging (1.5T and 3T MRI and computed tomography) was performed after stereotactic frame placement. Stereotactic target localization at 1.5T vs. 3T was retrospectively analyzed in a representative cohort of patients with tumor (n = 4) and functional (n = 5) radiosurgical targets. The spatial congruency of the tumor gross target volumes was determined by the mean discrepancy between the average gross target volume surfaces at 1.5T and 3T. Reproducibility was assessed by the displacement from an averaged surface and volume congruency. Spatial congruency and the reproducibility of functional radiosurgical targets was determined by comparing the mean and standard deviation of the isocenter coordinates. Results: Overall, the mean absolute discrepancy across all patients was 0.67 mm (95% confidence interval, 0.51-0.83), significantly <1 mm (p < .010). No differences were found in the overall interuser target volume congruence (mean, 84% for 1.5T vs. 84% for 3T, p > .4), and the gross target volume surface mean displacements were similar within and between users. The overall average isocenter coordinate discrepancy for the functional targets at 1.5T and 3T was 0.33 mm (95% confidence interval, 0.20-0.48), with no patient-specific differences between the mean values (p > .2) or standard deviations (p > .1). Conclusion: Our results have provided clinically relevant evidence supporting the spatial validity of 3T MRI for use in stereotactic radiosurgery under the imaging conditions used.
Atherton, Rachel R.; Williams, Jane E.; Wells, Jonathan C. K.; Fewtrell, Mary S.
2013-01-01
Background Clinical application of body composition (BC) measurements for individual children has been limited by lack of appropriate reference data. Objectives (1) To compare fat mass (FM) and fat free mass (FFM) standard deviation scores (SDS) generated using new body composition reference data and obtained using simple measurement methods in healthy children and patients with those obtained using the reference 4-component (4-C) model; (2) To determine the extent to which scores from simple methods agree with those from the 4-C model in identification of abnormal body composition. Design FM SDS were calculated for 4-C model, dual-energy X-ray absorptiometry (DXA; GE Lunar Prodigy), BMI and skinfold thicknesses (SFT); and FFM SDS for 4CM, DXA and bioelectrical impedance analysis (BIA; height²/Z) in 927 subjects aged 3.8–22.0 y (211 healthy, 716 patients). Results DXA was the most accurate method for both FM and FFM SDS in healthy subjects and patients (mean bias (limits of agreement) FM SDS 0.03 (±0.62); FFM SDS −0.04 (±0.72)), and provided best agreement with the 4-C model in identifying abnormal BC (SDS ≤−2 or ≥2). BMI and SFTs were reasonable predictors of abnormal FM SDS, but poor in providing an absolute value. BIA was comparable to DXA for FFM SDS and in identifying abnormal subjects. Conclusions DXA may be used both for research and clinically to determine FM and FFM SDS. BIA may be used to assess FFM SDS in place of DXA. BMI and SFTs can be used to measure adiposity for groups but not individuals. The performance of simpler techniques in monitoring longitudinal BC changes requires investigation. Ultimately, the most appropriate method should be determined by its predictive value for clinical outcome. PMID:23690932
Lunar terrain mapping and relative-roughness analysis
Rowan, Lawrence C.; McCauley, John F.; Holm, Esther A.
1971-01-01
Terrain maps of the equatorial zone (long 70° E.-70° W. and lat 10° N-10° S.) were prepared at scales of 1:2,000,000 and 1:1,000,000 to classify lunar terrain with respect to roughness and to provide a basis for selecting sites for Surveyor and Apollo landings as well as for Ranger and Lunar Orbiter photographs. The techniques that were developed as a result of this effort can be applied to future planetary exploration. By using the best available earth-based observational data and photographs, 1:1,000,000-scale U.S. Geological Survey lunar geologic maps, and U.S. Air Force Aeronautical Chart and Information Center LAC charts, lunar terrain was described by qualitative and quantitative methods and divided into four fundamental classes: maria, terrae, craters, and linear features. Some 35 subdivisions were defined and mapped throughout the equatorial zone, and, in addition, most of the map units were illustrated by photographs. The terrain types were analyzed quantitatively to characterize and order their relative-roughness characteristics. Approximately 150,000 east-west slope measurements made by a photometric technique (photoclinometry) in 51 sample areas indicate that algebraic slope-frequency distributions are Gaussian, and so arithmetic means and standard deviations accurately describe the distribution functions. The algebraic slope-component frequency distributions are particularly useful for rapidly determining relative roughness of terrain. The statistical parameters that best describe relative roughness are the absolute arithmetic mean, the algebraic standard deviation, and the percentage of slope reversal. Statistically derived relative-relief parameters are desirable supplementary measures of relative roughness in the terrae. Extrapolation of relative roughness for the maria was demonstrated using Ranger VII slope-component data and regional maria slope data, as well as the data reported here.
It appears that, for some morphologically homogeneous mare areas, relative roughness can be extrapolated to the large scales from measurements at small scales.
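The three roughness parameters named in this abstract can be computed directly from a signed slope-component sample; a minimal Python sketch (the slope values below are hypothetical, for illustration only):

```python
import statistics

def roughness_stats(slopes):
    """Relative-roughness parameters for a sequence of signed
    east-west slope components (degrees)."""
    abs_mean = statistics.fmean(abs(s) for s in slopes)   # absolute arithmetic mean
    alg_sd = statistics.stdev(slopes)                     # algebraic standard deviation
    # fraction of adjacent samples whose slope sign reverses
    reversals = sum(1 for a, b in zip(slopes, slopes[1:]) if a * b < 0)
    pct_reversal = 100 * reversals / (len(slopes) - 1)
    return abs_mean, alg_sd, pct_reversal

slopes = [1.2, -0.8, 2.1, -1.5, 0.9]  # hypothetical slope components (degrees)
abs_mean, alg_sd, pct_rev = roughness_stats(slopes)
```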
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carver, R; Popple, R; Benhabib, S
Purpose: To evaluate the accuracy of electron dose distribution calculated by the Varian Eclipse electron Monte Carlo (eMC) algorithm for use with recent commercially available bolus electron conformal therapy (ECT). Methods: eMC-calculated electron dose distributions for bolus ECT have been compared to those previously measured for cylindrical phantoms (retromolar trigone and nose), whose axial cross sections were based on the mid-PTV CT anatomy for each site. The phantoms consisted of SR4 muscle substitute, SR4 bone substitute, and air. The bolus ECT treatment plans were imported into the Eclipse treatment planning system and calculated using the maximum allowable histories (2×10⁹), resulting in a statistical error of <0.2%. Smoothing was not used for these calculations. Differences between eMC-calculated and measured dose distributions were evaluated in terms of absolute dose difference as well as distance to agreement (DTA). Results: Results from the eMC for the retromolar trigone phantom showed 89% (41/46) of dose points within 3% dose difference or 3 mm DTA. There was an average dose difference of −0.12% with a standard deviation of 2.56%. Results for the nose phantom showed 95% (54/57) of dose points within 3% dose difference or 3 mm DTA. There was an average dose difference of 1.12% with a standard deviation of 3.03%. Dose calculation times for the retromolar trigone and nose treatment plans were 15 min and 22 min, respectively, using 16 processors (Intel Xeon E5-2690, 2.9 GHz) on a Varian Eclipse framework agent server (FAS). Results of this study were consistent with those previously reported for accuracy of the eMC electron dose algorithm and for the .decimal, Inc. pencil beam redefinition algorithm used to plan the bolus. Conclusion: These results show that the accuracy of the Eclipse eMC algorithm is suitable for clinical implementation of bolus ECT.
NASA Astrophysics Data System (ADS)
Alexakis, Dimitrios; Seiradakis, Kostas; Tsanis, Ioannis
2016-04-01
This article presents a remote sensing approach for spatio-temporal monitoring of both soil erosion and roughness using an Unmanned Aerial Vehicle (UAV). Soil erosion by water is commonly known as one of the main reasons for land degradation. Gully erosion causes considerable soil loss and soil degradation. Furthermore, quantification of soil roughness (irregularities of the soil surface due to soil texture) is important and affects surface storage and infiltration. Soil roughness is one of the soil characteristics most susceptible to variation in time and space, and depends on different parameters such as cultivation practices and soil aggregation. A UAV equipped with a digital camera was employed to monitor soil in terms of erosion and roughness in two different study areas in Chania, Crete, Greece. The UAV followed predicted flight paths computed by the relevant flight planning software. The photogrammetric image processing enabled the development of sophisticated Digital Terrain Models (DTMs) and ortho-image mosaics with very high resolution on a sub-decimeter level. The DTMs were developed using photogrammetric processing of more than 500 images acquired with the UAV from different heights above the ground level. As the geomorphic formations can be observed from above using UAVs, shadowing effects do not generally occur and the generated point clouds have very homogeneous and high point densities. The DTMs generated from UAV were compared in terms of vertical absolute accuracies with a Global Navigation Satellite System (GNSS) survey. The developed data products were used for quantifying gully erosion and soil roughness in 3D as well as for the analysis of the surrounding areas. The significant elevation changes from multi-temporal UAV elevation data were used for diachronically estimating soil loss and sediment delivery without installing sediment traps.
Concerning roughness, statistical indicators of surface elevation point measurements were estimated, and various parameters such as the standard deviation of the DTM, the deviation of residuals and the standard deviation of prominence were calculated directly from the extracted DTM. Sophisticated statistical filters and elevation indices were developed to quantify both soil erosion and roughness. The applied methodology for monitoring both soil erosion and roughness provides an optimum way of reducing the existing gap between field scale and satellite scale. Keywords: UAV, soil, erosion, roughness, DTM
Amézquita, A; Weller, C L; Wang, L; Thippareddi, H; Burson, D E
2005-05-25
Numerous small meat processors in the United States have difficulties complying with the stabilization performance standards for preventing growth of Clostridium perfringens by 1 log10 cycle during cooling of ready-to-eat (RTE) products. These standards were established by the Food Safety and Inspection Service (FSIS) of the US Department of Agriculture in 1999. In recent years, several attempts have been made to develop predictive models for growth of C. perfringens within the range of cooling temperatures included in the FSIS standards. Those studies mainly focused on microbiological aspects, using hypothesized cooling rates. Conversely, studies dealing with heat transfer models to predict cooling rates in meat products do not address microbial growth. Integration of heat transfer relationships with C. perfringens growth relationships during cooling of meat products has been very limited. Therefore, a computer simulation scheme was developed to analyze heat transfer phenomena and temperature-dependent C. perfringens growth during cooling of cooked boneless cured ham. The temperature history of ham was predicted using a finite element heat diffusion model. Validation of heat transfer predictions used experimental data collected in commercial meat-processing facilities. For C. perfringens growth, a dynamic model was developed using Baranyi's nonautonomous differential equation. The bacterium's growth model was integrated into the computer program using predicted temperature histories as input values. For cooling cooked hams from 66.6 °C to 4.4 °C using forced air, the maximum deviation between predicted and experimental core temperature data was 2.54 °C. Predicted C. perfringens growth curves obtained from dynamic modeling showed good agreement with validated results for three different cooling scenarios.
Mean absolute values of relative errors were below 6%, and deviations between predicted and experimental cell counts were within 0.37 log10 CFU/g. For a cooling process which was in exact compliance with the FSIS stabilization performance standards, a mean net growth of 1.37 log10 CFU/g was predicted. This study introduced the combination of engineering modeling and microbiological modeling as a useful quantitative tool for general food safety applications, such as risk assessment and hazard analysis and critical control points (HACCP) plans.
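The validation measures used above (mean absolute relative error of the temperature predictions, and deviations of predicted versus experimental cell counts on the log10 CFU/g scale) can be sketched as follows; the example values are hypothetical:

```python
import math

def mean_abs_relative_error(predicted, observed):
    """Mean absolute relative error, as a percentage."""
    errs = [abs((p - o) / o) for p, o in zip(predicted, observed)]
    return 100 * sum(errs) / len(errs)

def max_log10_deviation(pred_counts, obs_counts):
    """Largest |log10(predicted) - log10(observed)| deviation,
    for cell counts in CFU/g."""
    return max(abs(math.log10(p) - math.log10(o))
               for p, o in zip(pred_counts, obs_counts))
```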
Effect of helicity on the correlation time of large scales in turbulent flows
NASA Astrophysics Data System (ADS)
Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne
2017-11-01
Solutions of the forced Navier-Stokes equation have been conjectured to thermalize at scales larger than the forcing scale, similar to an absolute equilibrium obtained for the spectrally truncated Euler equation. Using direct numerical simulations of Taylor-Green flows and general-periodic helical flows, we present results on the probability density function, energy spectrum, autocorrelation function, and correlation time that compare the two systems. In the case of highly helical flows, we derive an analytic expression describing the correlation time for the absolute equilibrium of helical flows that is different from the E^(-1/2) k^(-1) scaling law of weakly helical flows. This model predicts a new helicity-based scaling law for the correlation time, τ(k) ∼ H^(-1/2) k^(-1/2). This scaling law is verified in simulations of the truncated Euler equation. In simulations of the Navier-Stokes equations, the large-scale modes of forced Taylor-Green symmetric flows (with zero total helicity and large separation of scales) follow the same properties as absolute equilibrium, including a τ(k) ∼ E^(-1/2) k^(-1) scaling for the correlation time. General-periodic helical flows also show similarities between the two systems; however, the largest scales of the forced flows deviate from the absolute equilibrium solutions.
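In compact notation, the two correlation-time scalings contrasted in this abstract are:

```latex
% correlation time of a large-scale mode with wavenumber k
\tau(k) \sim E^{-1/2}\,k^{-1}     % weakly helical flows (energy-dominated)
\tau(k) \sim H^{-1/2}\,k^{-1/2}   % strongly helical flows (helicity-dominated)
```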
ERIC Educational Resources Information Center
Nienhusser, H. Kenny; Oshio, Toko
2017-01-01
High school students' accuracy in estimating the cost of college (AECC) was examined by utilizing a new methodological approach, the absolute-deviation-continuous construct. This study used the High School Longitudinal Study of 2009 (HSLS:09) data and examined 10,530 11th grade students in order to measure their AECC for 4-year public and private…
Evaluation of radiochromic gel dosimetry and polymer gel dosimetry in a clinical dose verification
NASA Astrophysics Data System (ADS)
Vandecasteele, Jan; De Deene, Yves
2013-09-01
A quantitative comparison of two full three-dimensional (3D) gel dosimetry techniques was assessed in a clinical setting: radiochromic gel dosimetry with an in-house developed optical laser CT scanner and polymer gel dosimetry with magnetic resonance imaging (MRI). To benchmark both gel dosimeters, they were exposed to a 6 MV photon beam and the depth dose was compared against a diamond detector measurement that served as the gold standard. Both gel dosimeters were found to be accurate within 4%. In the 3D dose matrix of the radiochromic gel, hotspot dose deviations up to 8% were observed, which are attributed to the fabrication procedure. The polymer gel readout was shown to be sensitive to B0 field and B1 field non-uniformities as well as temperature variations during scanning. The performance of the two gel dosimeters was also evaluated for a brain tumour IMRT treatment. Both gel-measured dose distributions were compared against treatment planning system predicted dose maps which were validated independently with ion chamber measurements and portal dosimetry. In the radiochromic gel measurement, two sources of deviations could be identified. Firstly, the dose in a cluster of voxels near the edge of the phantom deviated from the planned dose. Secondly, the presence of dose hotspots in the order of 10% related to inhomogeneities in the gel limit the clinical acceptance of this dosimetry technique. Based on the results of the micelle gel dosimeter prototype presented here, chemical optimization will be the subject of future work. Polymer gel dosimetry is capable of measuring the absolute dose in the whole 3D volume within 5% accuracy. A temperature stabilization technique is incorporated to increase the accuracy during short measurements; however, keeping the temperature stable during long measurement times in both calibration phantoms and the volumetric phantom is more challenging.
The sensitivity of MRI readout to minimal temperature fluctuations is demonstrated, which proves the need for adequate compensation strategies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, R; Bai, W
Purpose: Because of statistical noise in Monte Carlo dose calculations, effective point doses may not be accurate. Volume spheres are useful for evaluating dose in Monte Carlo plans, which have an inherent statistical uncertainty. We use a user-defined sphere volume instead of a point, sampling a sphere around the effective point and averaging the dose statistics to reduce the stochastic errors. Methods: Direct dose measurements were made using a 0.125 cc Semiflex ion chamber (IC) 31010 isocentrically placed in the center of a homogeneous cylindrical sliced RW3 phantom (PTW, Germany). In the scanned CT phantom series the sensitive volume length of the IC (6.5 mm) was delineated and the isocenter defined as the simulation effective point. All beams were simulated in Monaco in accordance with the measured model. The simulations used a 2 mm voxel calculation grid spacing, calculated dose to medium, and requested a relative standard deviation ≤0.5%. Three different assigned IC override densities (air electron density (ED) of 0.01 g/cm³, the default CT-scanned ED, and esophageal lumen ED of 0.21 g/cm³) were tested at different sampling sphere radii (2.5, 2, 1.5 and 1 mm), and the resulting dose statistics were compared with the measured dose. Results: The results show that in the Monaco TPS, for the IC using esophageal lumen ED 0.21 g/cm³ and a sampling sphere radius of 1.5 mm, the statistical value agrees best with the measured value; the absolute average percentage deviation is 0.49%. When the IC uses an air ED of 0.01 g/cm³ or the default CT-scanned ED, the recommended statistical sampling sphere radius is 2.5 mm, with percentage deviations of 0.61% and 0.70%, respectively. Conclusion: In the Monaco treatment planning system, for the ionization chamber 31010 we recommend assigning the air cavity an ED of 0.21 g/cm³ and sampling a 1.5 mm sphere volume instead of a point dose to decrease the stochastic errors. Funding Support No. C201505006.
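The sphere-sampling idea above, averaging dose over all voxels within a user-defined radius of the effective point instead of reading a single noisy voxel, can be sketched as follows (the function and data layout are illustrative, not Monaco's API):

```python
import math

def sphere_mean_dose(doses, centers, point, radius_mm):
    """Mean dose over all voxels whose centers lie within radius_mm
    of the effective point; averaging over a sphere reduces Monte
    Carlo statistical noise compared with a single-voxel readout."""
    selected = [d for d, c in zip(doses, centers)
                if math.dist(c, point) <= radius_mm]
    return sum(selected) / len(selected)

# Hypothetical voxel doses (Gy) and center coordinates (mm):
dose = sphere_mean_dose([1.0, 2.0, 3.0],
                        [(0, 0, 0), (1, 0, 0), (5, 0, 0)],
                        (0, 0, 0), 1.5)
```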
Mediterranean diet score and total and cardiovascular mortality in Eastern Europe: the HAPIEE study.
Stefler, Denes; Malyutina, Sofia; Kubinova, Ruzena; Pajak, Andrzej; Peasey, Anne; Pikhart, Hynek; Brunner, Eric J; Bobak, Martin
2017-02-01
Mediterranean-type dietary pattern has been associated with lower risk of cardiovascular (CVD) and other chronic diseases, primarily in Southern European populations. We examined whether Mediterranean diet score (MDS) is associated with total, CVD, coronary heart disease (CHD) and stroke mortality in a prospective cohort study in three Eastern European populations. A total of 19,333 male and female participants of the Health Alcohol and Psychosocial factors in Eastern Europe (HAPIEE) study in the Czech Republic, Poland and the Russian Federation were included in the analysis. Diet was assessed by food frequency questionnaire, and MDS was derived from consumption of nine groups of food using absolute cut-offs. Mortality was ascertained by linkage with death registers. Over the median follow-up time of 7 years, 1314 participants died. The proportion of participants with high adherence to Mediterranean diet was low (25%). One standard deviation (SD) increase in the MDS (equivalent to a 2.2-point increase in the score) was found to be inversely associated with death from all causes (HR, 95% CI 0.93, 0.88-0.98) and CVD (0.90, 0.81-0.99) even after multivariable adjustment. An inverse but statistically non-significant association was found for CHD (0.90, 0.78-1.03) and stroke (0.87, 0.71-1.07). The MDS effects were similar in each country cohort. Higher adherence to the Mediterranean diet was associated with reduced risk of total and CVD deaths in these large Eastern European urban populations. The application of MDS with absolute cut-offs appears suitable for non-Mediterranean populations.
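The hazard ratios above are reported per 1 SD of the MDS (2.2 points). Under the proportional-hazards assumption, the corresponding per-point hazard ratio can be obtained by rescaling the exponent; this conversion is a sketch for interpretation, not part of the study's analysis:

```python
def per_point_hr(hr_per_sd, sd_points):
    """Rescale a hazard ratio reported per one SD of a score
    to a hazard ratio per single score point."""
    return hr_per_sd ** (1.0 / sd_points)

# All-cause mortality: HR 0.93 per 2.2-point SD of the MDS
hr_all_cause = per_point_hr(0.93, 2.2)  # roughly 0.97 per MDS point
```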
NASA Astrophysics Data System (ADS)
Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez
2014-03-01
Soft computing techniques are recently becoming very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function network, generalized regression neural network, functional networks, support vector regression and adaptive network fuzzy inference system. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro and macro scale information represented mainly by Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analysis of the results including permeability cross-plots and detailed error measures were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. Adaptive network fuzzy inference system also showed very good results.
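The error measures used in the comparison, evaluated on the log10-transformed permeability as described above, can be sketched as follows (the permeability values are hypothetical):

```python
import math

def rmse(pred, obs):
    """Root mean square error."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))

# Permeability spans orders of magnitude, so errors are evaluated
# on the log10 scale (hypothetical values, millidarcy):
k_obs = [0.5, 12.0, 150.0, 900.0]
k_pred = [0.6, 10.0, 170.0, 850.0]
log_rmse = rmse([math.log10(k) for k in k_pred],
                [math.log10(k) for k in k_obs])
```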
A Model of Self-Monitoring Blood Glucose Measurement Error.
Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio
2017-07-01
A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, such as testing insulin therapies in silico. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity make it unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error, and zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows realistic models of the SMBG error PDF to be derived. These models can be used in several investigations of present interest to the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
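The zone split described above can be sketched as a normalization step applied before PDF fitting: below a breakpoint the error is kept in absolute units, above it the error is expressed relative to the reference. The 75 mg/dL breakpoint below is an illustrative assumption, not the paper's value.

```python
ZONE_BREAK = 75.0  # mg/dL, hypothetical boundary between zones 1 and 2

def normalized_error(reference, measured):
    """Zone 1: absolute error in mg/dL (constant SD in absolute units).
    Zone 2: relative error, dimensionless (constant SD relative to reference)."""
    err = measured - reference
    if reference < ZONE_BREAK:
        return ("absolute", err)
    return ("relative", err / reference)
```

A skew-normal PDF would then be fitted by maximum likelihood to the pooled normalized errors within each zone, with an exponential tail for outliers.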
Petrowski, Katja; Kliem, Sören; Sadler, Michael; Meuret, Alicia E; Ritz, Thomas; Brähler, Elmar
2018-02-06
Demands placed on individuals in occupational and social settings, as well as imbalances in personal traits and resources, can lead to chronic stress. The Trier Inventory for Chronic Stress (TICS) measures chronic stress while incorporating domain-specific aspects, and has been found to be a highly reliable and valid research tool. The aims of the present study were to confirm the factorial structure of the German-version TICS in an English translation of the instrument (TICS-E) and to report its psychometric properties. A random route sample of healthy participants (N = 483) aged 18-30 years completed the TICS-E. Robust maximum likelihood estimation with a mean-adjusted chi-square test statistic was applied due to the sample's significant deviation from the multivariate normal distribution. Goodness of fit, absolute model fit, and relative model fit were assessed by means of the root mean square error of approximation (RMSEA), the standardized root mean square residual (SRMR), the Comparative Fit Index (CFI) and the Tucker-Lewis Index (TLI). Reliability estimates (Cronbach's α and adjusted split-half reliability) ranged from .84 to .92. Item-scale correlations ranged from .50 to .85. Measures of fit showed values of .052 for the RMSEA (CI = .050-.054) and .067 for the SRMR for absolute model fit, and values of .846 (TLI) and .855 (CFI) for relative model fit. Factor loadings ranged from .55 to .91. The psychometric properties and factor structure of the TICS-E are comparable to the German version of the TICS. The instrument therefore meets quality standards for an adequate measurement of chronic stress.
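The Cronbach's α reliability estimates reported above follow a simple formula: α = k/(k-1) · (1 - Σ item variances / variance of total scores). A minimal sketch, with made-up item data rather than TICS-E responses:

```python
def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one inner list of scores per item, aligned across respondents."""
    k = len(items)
    item_var = sum(variance(it) for it in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_var / variance(totals))
```

When all items covary perfectly, α reaches 1; values of .84 to .92, as in the abstract, indicate high internal consistency.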
In Vivo Measurement of Pediatric Vocal Fold Motion Using Structured Light Laser Projection
Patel, Rita R.; Donohue, Kevin D.; Lau, Daniel; Unnikrishnan, Harikrishnan
2013-01-01
Summary Objective: The aim of the study was to present the development of a miniature structured light laser projection endoscope and to quantify vocal fold length and vibratory features related to impact stress of the pediatric glottis using high-speed imaging. Study Design: The custom-developed laser projection system consists of a green laser with a 4-mm diameter optics module at the tip of the endoscope, projecting 20 vertical laser lines onto the glottis. Measurements of absolute phonatory vocal fold length, membranous vocal fold length, peak amplitude, amplitude-to-length ratio, average closing velocity, and impact velocity were obtained in five children (6–9 years), two adult male and three adult female participants without voice disorders, and one child (10 years) with bilateral vocal fold nodules during modal phonation. Results: Independent measurements made on the glottal length of a vocal fold phantom demonstrated a 0.13 mm bias error with a standard deviation of 0.23 mm, indicating adequate precision and accuracy for measuring vocal fold structures and displacement. The first in vivo measurements of amplitude-to-length ratio, peak closing velocity, and impact velocity during phonation in a pediatric population and in a child with vocal fold nodules are reported. Conclusion: The proposed laser projection system can be used to obtain in vivo measurements of absolute length and vibratory features in children and adults. Children have a larger amplitude-to-length ratio compared with typically developing adults, whereas nodules result in larger peak amplitude, amplitude-to-length ratio, average closing velocity, and impact velocity compared with typically developing children. PMID:23809569
Absolute auditory threshold: testing the absolute.
Heil, Peter; Matysiak, Artur
2017-11-02
The mechanisms underlying the detection of sounds in quiet, one of the simplest tasks for auditory systems, are debated. Several models proposed to explain the threshold for sounds in quiet and its dependence on sound parameters include a minimum sound intensity ('hard threshold') below which sound has no effect on the ear. Many models are also based on the assumption that threshold is mediated by integration of a neural response proportional to sound intensity. Here, we test these ideas. Using an adaptive forced-choice procedure, we obtained thresholds of 95 normal-hearing human ears for 18 tones (3.125 kHz carrier) in quiet, each with a different temporal amplitude envelope. Grand-mean thresholds and standard deviations were well described by a probabilistic model according to which sensory events are generated by a Poisson point process with a low rate in the absence of stimulation and higher, time-varying rates in its presence. The subject actively evaluates the process and bases the decision on the number of events observed. The sound-driven rate of events is proportional to the temporal amplitude envelope of the bandpass-filtered sound raised to an exponent. We find no evidence for a hard threshold: when the model is extended to include such a threshold, the fit does not improve. Furthermore, we find an exponent of 3, consistent with our previous studies and further challenging models based on the assumption of integration of a neural response that, at threshold sound levels, is directly proportional to sound amplitude or intensity. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
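The rate structure of the model above can be sketched directly: a spontaneous rate plus a driven rate proportional to the amplitude envelope raised to an exponent (estimated at 3 in the paper). The rate constants below are illustrative assumptions, not the paper's fitted values.

```python
SPONT_RATE = 5.0  # events/s in quiet (hypothetical)
GAIN = 50.0       # scale of the sound-driven rate (hypothetical)
EXPONENT = 3.0    # exponent on the envelope, as estimated in the paper

def expected_events(envelope, dt):
    """Expected Poisson event count: integrate
    rate(t) = SPONT_RATE + GAIN * env(t)**EXPONENT over the stimulus,
    sampled at time step dt (seconds)."""
    return sum((SPONT_RATE + GAIN * e ** EXPONENT) * dt for e in envelope)
```

Because the driven rate grows with the cube of the envelope, the model predicts threshold dependence on envelope shape rather than on integrated intensity, which is the abstract's central point.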
NASA Astrophysics Data System (ADS)
Song, Yifei; Kujofsa, Tedi; Ayers, John E.
2018-07-01
In order to evaluate various buffer layers for metamorphic devices, threading dislocation densities have been calculated for uniform-composition InxGa1-xAs device layers deposited on GaAs (001) substrates with an intermediate graded buffer layer using the LMD model, where LMD is the average length of misfit dislocations. On this basis, we compare the relative effectiveness of buffer layers with linear, exponential, and S-graded compositional profiles. In the case of a 2 μm thick buffer layer, linear grading results in higher threading dislocation densities in the device layer than either exponential or S-grading. When exponential grading is used, lower threading dislocation densities are obtained with a smaller length constant. In the S-graded case, lower threading dislocation densities result when a smaller standard deviation parameter is used. As the buffer layer thickness is decreased from 2 μm to 0.1 μm, all of the above effects are diminished, and the absolute threading dislocation densities increase.
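The three grading shapes compared above can be sketched as composition profiles x(z) across a buffer of thickness h. The functional forms below are common choices for such profiles (an error-function shape for the S-grade); they and their parameters are assumptions for illustration, not necessarily the exact forms used in the paper.

```python
import math

def linear_grade(z, h, x_top):
    """Composition rises linearly from 0 at the substrate to x_top at z = h."""
    return x_top * z / h

def exponential_grade(z, h, x_top, length_const):
    """Saturating exponential profile; a smaller length constant makes the
    composition saturate closer to the substrate."""
    return x_top * (1 - math.exp(-z / length_const)) / (1 - math.exp(-h / length_const))

def s_grade(z, h, x_top, sigma):
    """Error-function ('S') profile centred mid-layer; a smaller standard
    deviation parameter sigma gives a sharper compositional step."""
    return x_top * 0.5 * (1 + math.erf((z - h / 2) / (sigma * math.sqrt(2))))
```

The abstract's finding is that, for a 2 μm buffer, the exponential and S-shapes outperform the linear ramp, with the advantage growing as the length constant or sigma shrinks.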
Sebastián-Ruiz, María José; Guerra-Sáenz, Elda Karina; Vargas-Yamanaka, Anna Karen; Barboza-Quintana, Oralia; Ríos-Zambudio, Antonio; García-Cabello, Ricardo; Palacios-Saucedo, Gerardo Del Carmen
2017-01-01
To evaluate the knowledge of, and attitude towards, organ donation among medical students of a public university in Northwestern Mexico. A prolective, descriptive, observational, and cross-sectional study: a 34-item survey evaluating knowledge and attitude towards organ donation was administered to 3,056 medical students during 2013-2015. Descriptive statistics were used (absolute frequencies, percentages, mean and standard deviation), as well as the Chi-square test. A p < 0.05 was considered significant. 74% of students would donate their own organs, mainly because of reciprocity (41%). 26% of students would not donate, 48% of them because of fear that their organs could be taken before death. 86% would donate the organs of a relative. 64% have spoken about organ donation and transplantation with their family and 67% with friends. 50% said they had received no information on the topic. 68% understand the concept of brain death. Students received little information about organ donation during college. Despite that, most of them showed a positive attitude and are willing to donate. Copyright: © 2017 Secretaría de Salud
Application of a Line Laser Scanner for Bed Form Tracking in a Laboratory Flume
NASA Astrophysics Data System (ADS)
de Ruijsscher, T. V.; Hoitink, A. J. F.; Dinnissen, S.; Vermeulen, B.; Hazenberg, P.
2018-03-01
A new measurement method for continuous detection of bed forms in movable-bed laboratory experiments is presented and tested. The device consists of a line laser coupled to a 3-D camera, which makes use of triangulation. This makes it possible to measure bed forms during morphodynamic experiments without removing the water from the flume. A correction is applied for the effect of laser refraction at the air-water interface. We conclude that the absolute measurement error increases with increasing flow velocity, its standard deviation increases with water depth and flow velocity, and the percentage of missing values increases with water depth. Although 71% of the data is lost in a pilot moving-bed experiment with sand, high agreement between flowing-water and dry-bed measurements is still found when a robust LOcally weighted regrESSion (LOESS) procedure is applied. This is promising for bed form tracking applications in laboratory experiments, especially when lightweight sediments like polystyrene are used, which require smaller flow velocities to achieve dynamic similarity to the prototype. This is confirmed in a moving-bed experiment with polystyrene.
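The refraction correction mentioned above rests on Snell's law at the air-water interface. A minimal sketch, assuming a single refraction at a flat interface and a simple triangulation geometry (both simplifications; the paper's full correction may differ):

```python
import math

N_AIR, N_WATER = 1.000, 1.333  # refractive indices

def refracted_angle(theta_air):
    """Ray angle under water from Snell's law (angles in radians from vertical)."""
    return math.asin(N_AIR * math.sin(theta_air) / N_WATER)

def true_depth(apparent_depth, theta_air):
    """Correct an apparent depth seen by the camera: the horizontal offset
    tan(theta_air) * d_apparent must equal tan(theta_water) * d_true."""
    return apparent_depth * math.tan(theta_air) / math.tan(refracted_angle(theta_air))
```

For near-vertical rays the correction factor approaches N_WATER / N_AIR ≈ 1.333, i.e., the bed appears roughly a third shallower than it is.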
Measurements of the earth radiation budget from satellites during the first GARP global experiment
NASA Technical Reports Server (NTRS)
Vonder Haar, T. H.; Campbell, G. G.; Smith, E. A.; Arking, A.; Coulson, K.; Hickey, J.; House, F.; Ingersoll, A.; Jacobowitz, H.; Smith, L.
1981-01-01
Radiation budget data (which will aid in climate model development) and solar constant measurements (both to be used for the study of long-term climate change and interannual seasonal weather variability) obtained during Nimbus-6 and Nimbus-7 satellite flights, using wide-field-of-view, scanner, and black cavity detectors, are presented. Data on the solar constant, described as a function of the date of measurement, are given. The unweighted mean amounts to 1377 ± 20 W/m², with a standard deviation of 8 W/m². The new solar data are combined with earlier measurements, and it is suggested that the total absolute energy output of the sun is a minimum at 'solar maximum' and vice versa. Attention is given to the measurements of the net radiation budget, the planetary albedo, and the infrared radiant exitance. The annual and semiannual cycles of normal variability explain most of the variance of energy exchange between the earth and space. Examination of separate ocean and atmospheric energy budgets implies a net continent-ocean region energy exchange.
Buhr, H; Büermann, L; Gerlach, M; Krumrey, M; Rabus, H
2012-12-21
For the first time, the absolute photon mass energy-absorption coefficient of air in the energy range of 10 keV to 60 keV has been measured with relative standard uncertainties below 1%, considerably smaller than those of up to 2% assumed for calculated data. For monochromatized synchrotron radiation from the electron storage ring BESSY II, both the radiant power and the fraction of power deposited in dry air were measured, using a cryogenic electrical substitution radiometer and a free-air ionization chamber, respectively. The measured absorption coefficients were compared with state-of-the-art calculations and showed an average deviation of 2% from calculations by Seltzer. However, they agree within 1% with data calculated earlier by Hubbell. In the course of this work, an improvement of the data analysis of a previous experimental determination of the mass energy-absorption coefficient of air in the range of 3 keV to 10 keV was found to be possible, and corrected values of that preceding study are given.
The Effect of Microphone Type on Acoustical Measures of Synthesized Vowels.
Kisenwether, Jessica Sofranko; Sataloff, Robert T
2015-09-01
The purpose of this study was to compare microphones of different directionality, transducer type, and cost, with attention to their effects on acoustical measurements of period perturbation, amplitude perturbation, and noise, using synthesized sustained vowel samples. This was a repeated-measures design. Synthesized sustained vowel stimuli (with known acoustic characteristics and systematic changes in jitter, shimmer, and noise-to-harmonics ratio) were recorded by a variety of dynamic and condenser microphones. Files were then analyzed for mean fundamental frequency (fo), fo standard deviation, absolute jitter, shimmer in dB, peak-to-peak amplitude variation, and noise-to-harmonics ratio. Acoustical measures following recording were compared with the synthesized, known acoustical measures before recording. Although informal analyses showed some differences among microphones, and analyses of variance showed that microphone type is a significant predictor, t-tests revealed that none of the microphones generated means different from the generated acoustical measures. In this sample, microphone type, directionality, and cost did not have a significant effect on the validity of acoustic measures. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
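Two of the measures analyzed above have compact classical definitions: absolute jitter is the mean absolute difference between consecutive cycle periods, and shimmer in dB is the mean absolute dB ratio between consecutive cycle peak amplitudes. Minimal sketches of both, computed from per-cycle periods and amplitudes already extracted from the signal (the test data are made up, not from the study's stimuli):

```python
import math

def absolute_jitter(periods):
    """Mean absolute difference between consecutive cycle periods (seconds)."""
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    return sum(diffs) / len(diffs)

def shimmer_db(amplitudes):
    """Mean absolute dB ratio between consecutive cycle peak amplitudes."""
    ratios = [abs(20 * math.log10(b / a)) for a, b in zip(amplitudes, amplitudes[1:])]
    return sum(ratios) / len(ratios)
```

Comparing these values computed before and after recording, as the study does, isolates the distortion contributed by the microphone chain.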