Sample records for absolute percentage errors

  1. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    NASA Astrophysics Data System (ADS)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities for an organization. Selecting an appropriate forecasting method matters, but the percentage error of that method matters even more if decision makers are to act on the forecasts with confidence. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to quantify the error of the least squares method gave a percentage error of 9.77%, and the least squares method was judged suitable for time series and trend data.
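
    The two measures used in this record have standard definitions; below is a minimal sketch of how MAD and MAPE are typically computed (the sample series is invented for illustration and is unrelated to the study's data).

    ```python
    import numpy as np

    def mean_absolute_deviation(actual, forecast):
        """MAD: average magnitude of forecast errors, in the units of the data."""
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return np.mean(np.abs(actual - forecast))

    def mean_absolute_percentage_error(actual, forecast):
        """MAPE: average magnitude of forecast errors as a percentage of the actuals."""
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    # Illustrative data only (not the study's series).
    actual = [112, 118, 132, 129, 121]
    forecast = [110, 120, 128, 131, 125]
    print(mean_absolute_deviation(actual, forecast))         # 2.8
    print(mean_absolute_percentage_error(actual, forecast))  # ~2.3%
    ```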

  2. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven Karl

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
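
    For reference, the two recommended metrics are simple functions of the log accuracy ratio ln(predicted/observed); the sketch below assumes strictly positive predictions and observations (function names and sample values are illustrative).

    ```python
    import numpy as np

    def median_log_accuracy_ratio(pred, obs):
        """Bias measure: median of ln(pred/obs); 0 means no bias, sign gives direction."""
        pred, obs = np.asarray(pred, float), np.asarray(obs, float)
        return np.median(np.log(pred / obs))

    def median_symmetric_accuracy(pred, obs):
        """Accuracy measure: 100*(exp(median(|ln(pred/obs)|)) - 1), a symmetric
        percentage-like error that treats over- and under-prediction alike."""
        pred, obs = np.asarray(pred, float), np.asarray(obs, float)
        return 100.0 * (np.exp(np.median(np.abs(np.log(pred / obs)))) - 1.0)

    # Illustrative flux-like values spanning orders of magnitude (all positive).
    obs  = [1e3, 5e3, 2e4, 8e4]
    pred = [2e3, 4e3, 1e4, 9e4]
    print(median_log_accuracy_ratio(pred, obs))  # sign indicates direction of bias
    print(median_symmetric_accuracy(pred, obs))  # in percent
    ```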

  3. Astigmatism error modification for absolute shape reconstruction using Fourier transform method

    NASA Astrophysics Data System (ADS)

    He, Yuhang; Li, Qiang; Gao, Bo; Liu, Ang; Xu, Kaiyuan; Wei, Xiaohong; Chai, Liqun

    2014-12-01

    A method is proposed to correct astigmatism errors in the absolute shape reconstruction of an optical plane using the Fourier transform method. When a transmission flat and a reflection flat are used in an absolute test, two translation measurements allow the absolute shapes to be obtained by exploiting the characteristic relationship between the differential and original shapes in the spatial frequency domain. However, because the translation device cannot keep the test and reference flats rigidly parallel after the translations, a tilt error is present in the obtained differential data, which causes power and astigmatism errors in the reconstructed shapes. To correct the astigmatism errors, a rotation measurement is added. Based on the rotational invariance of the form of the Zernike polynomials over a circular domain, the astigmatism terms are calculated by solving polynomial-coefficient equations related to the rotational differential data, and the erroneous astigmatism terms are then corrected. Computer simulation confirms the validity of the proposed method.

  4. Absolute and Relative Reliability of Percentage of Syllables Stuttered and Severity Rating Scales

    ERIC Educational Resources Information Center

    Karimi, Hamid; O'Brian, Sue; Onslow, Mark; Jones, Mark

    2014-01-01

    Purpose: Percentage of syllables stuttered (%SS) and severity rating (SR) scales are measures in common use to quantify stuttering severity and its changes during basic and clinical research conditions. However, their reliability has not been assessed with indices measuring both relative and absolute reliability. This study was designed to provide…

  5. Absolute color scale for improved diagnostics with wavefront error mapping.

    PubMed

    Smolek, Michael K; Klyce, Stephen D

    2007-11-01

    Wavefront data are expressed in micrometers and referenced to the pupil plane, but current methods to map wavefront error lack standardization. Many use normalized or floating scales that may confuse the user by generating ambiguous, noisy, or varying information. An absolute scale that combines consistent clinical information with statistical relevance is needed for wavefront error mapping. The color contours should correspond better to current corneal topography standards to improve clinical interpretation. A retrospective analysis of wavefront error data from historic ophthalmic medical records was performed. Topographic Modeling System examinations of 120 corneas across 12 categories were used. Corneal wavefront error data in micrometers from each topography map were extracted at 8 Zernike polynomial orders and for 3 pupil diameters (3, 5, and 7 mm). Both total aberrations (orders 2 through 8) and higher-order aberrations (orders 3 through 8) were expressed in the form of frequency histograms to determine the working range of the scale across all categories. The standard deviation of the mean error of normal corneas determined the map contour resolution. Map colors were based on corneal topography color standards and on the ability to distinguish adjacent color contours through contrast. The main outcomes were higher-order and total wavefront error contour maps for different corneal conditions. An absolute color scale was produced that encompassed a range of ±6.5 μm with a contour interval of 0.5 μm. All aberrations in the categorical database were plotted with no loss of the clinical information necessary for classification. In the few instances where mapped information was beyond the range of the scale, the type and severity of aberration remained legible. When wavefront data are expressed in micrometers, this absolute scale facilitates the determination of the severity of aberrations present compared with a floating scale, particularly for distinguishing

  6. Sub-nanometer periodic nonlinearity error in absolute distance interferometers

    NASA Astrophysics Data System (ADS)

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can introduce errors at the nanometer scale, has become a main problem limiting the accuracy of absolute distance measurement. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed, which removes frequency and/or polarization mixing and greatly relaxes the requirement on the polarization of the laser source. By combining a retro-reflector and an angle prism, the reference and measuring beams are spatially separated so that their optical paths do not overlap. The main causes of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, are thereby eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  7. Students' Mathematical Work on Absolute Value: Focusing on Conceptions, Errors and Obstacles

    ERIC Educational Resources Information Center

    Elia, Iliada; Özel, Serkan; Gagatsis, Athanasios; Panaoura, Areti; Özel, Zeynep Ebrar Yetkiner

    2016-01-01

    This study investigates students' conceptions of absolute value (AV), their performance in various items on AV, their errors in these items and the relationships between students' conceptions and their performance and errors. The Mathematical Working Space (MWS) is used as a framework for studying students' mathematical work on AV and the…

  8. Assessing Suturing Skills in a Self-Guided Learning Setting: Absolute Symmetry Error

    ERIC Educational Resources Information Center

    Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam

    2009-01-01

    Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be…

  9. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
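
    A minimal sketch of the two ECDF-based statistics advocated here, applied to a vector of signed model errors (the error distribution, threshold, and confidence level below are illustrative choices, not values from the paper).

    ```python
    import numpy as np

    def prob_abs_error_below(errors, threshold):
        """P(|error| < threshold): empirical probability that a new calculation's
        absolute error falls below a user-chosen threshold."""
        return np.mean(np.abs(errors) < threshold)

    def error_amplitude_at_confidence(errors, confidence=0.95):
        """Amplitude of |error| exceeded only with probability (1 - confidence),
        i.e. the 'confidence'-quantile of the ECDF of unsigned errors."""
        return np.quantile(np.abs(errors), confidence)

    # Illustrative errors (e.g. kcal/mol); neither normal nor zero-centered.
    rng = np.random.default_rng(0)
    errors = rng.gamma(shape=2.0, scale=1.5, size=500) - 1.0
    print(prob_abs_error_below(errors, threshold=2.0))
    print(error_amplitude_at_confidence(errors, confidence=0.95))
    ```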

  10. Optimal quantum error correcting codes from absolutely maximally entangled states

    NASA Astrophysics Data System (ADS)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension…

  11. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…

  12. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is to improve the accuracy of SI-traceable absolute calibration across the infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach the on-orbit absolute accuracy that would allow climate change observations to survive data gaps exist at NIST in the laboratory, but it remains to be demonstrated that these advances can move successfully from NIST to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and the application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, together with methods for laboratory-based absolute calibration suitable for climate-quality data collections, is given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise. Methods for demonstrating this error budget are also presented.

  13. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...

  14. Mapping the absolute magnetic field and evaluating the quadratic Zeeman-effect-induced systematic error in an atom interferometer gravimeter

    NASA Astrophysics Data System (ADS)

    Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim

    2017-09-01

    Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10^-8 m/s^2 ≈ 10^-9 g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss the dependence of the magnetic field measurement uncertainty on the Raman duration and frequency step size, present the vector- and tensor-light-shift-induced magnetic field measurement offsets, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and for reducing the quadratic Zeeman-effect-induced systematic error in Raman-transition-based precision measurements, such as atom interferometer gravimeters.

  15. A new accuracy measure based on bounded relative error for time series forecasting

    PubMed Central

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M.

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made of the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation of the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with a user-selectable benchmark, performs as well as or better than other measures on the selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice for evaluating forecasting methods, especially for cases where measures based on the geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480

  16. A new accuracy measure based on bounded relative error for time series forecasting.

    PubMed

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made of the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation of the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with a user-selectable benchmark, performs as well as or better than other measures on the selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice for evaluating forecasting methods, especially for cases where measures based on the geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
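
    A sketch of how UMBRAE can be computed, under the assumption that each forecast error is bounded against a benchmark forecast error as BRAE_t = |e_t| / (|e_t| + |e*_t|) and the mean BRAE is then unscaled; the naive previous-value forecast is used below purely as an illustrative benchmark.

    ```python
    import numpy as np

    def umbrae(actual, forecast, benchmark):
        """Unscaled Mean Bounded Relative Absolute Error (sketch).

        Each error is bounded via BRAE_t = |e_t| / (|e_t| + |e*_t|), where e_t is the
        forecast error and e*_t the benchmark forecast error; the mean BRAE is then
        'unscaled' back to an interpretable relative error.
        """
        actual = np.asarray(actual, float)
        e = np.abs(actual - np.asarray(forecast, float))
        e_star = np.abs(actual - np.asarray(benchmark, float))
        brae = e / (e + e_star)          # assumes e + e_star != 0 for all t
        mbrae = np.mean(brae)
        return mbrae / (1.0 - mbrae)     # values < 1 indicate better than the benchmark

    # Illustrative series; benchmark = naive forecast (previous observation).
    actual   = np.array([10.0, 12.0, 13.0, 12.5, 14.0])
    forecast = np.array([10.5, 11.5, 13.5, 12.0, 13.5])
    naive    = np.array([ 9.5, 10.0, 12.0, 13.0, 12.5])
    print(umbrae(actual, forecast, naive))  # ~0.55, i.e. better than the naive benchmark
    ```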

  17. Effective connectivity associated with auditory error detection in musicians with absolute pitch

    PubMed Central

    Parkinson, Amy L.; Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Larson, Charles R.; Robin, Donald A.

    2014-01-01

    It is advantageous to study a wide range of vocal abilities in order to fully understand how vocal control measures vary across the full spectrum. Individuals with absolute pitch (AP) are able to assign a verbal label to musical notes and have enhanced abilities in pitch identification without reliance on an external referent. In this study we used dynamic causal modeling (DCM) to model effective connectivity of ERP responses to pitch perturbation in voice auditory feedback in musicians with relative pitch (RP), AP, and non-musician controls. We identified a network comprising left and right hemisphere superior temporal gyrus (STG), primary motor cortex (M1), and premotor cortex (PM). We specified nine models and compared two main factors examining various combinations of STG involvement in the feedback pitch error detection/correction process. Our results suggest that modulation of left-to-right STG connections is important in the identification of self-voice error and sensory motor integration in AP musicians. We also identify reduced connectivity in left hemisphere PM-to-STG connections in the AP and RP groups during the error detection and correction process relative to non-musicians. We suggest that this suppression may allow for enhanced connectivity relating to pitch identification in the right hemisphere in those with more precise pitch matching abilities. Musicians with enhanced pitch identification abilities likely have an improved auditory error detection and correction system involving connectivity of STG regions. Our findings also suggest that individuals with AP are more adept at using feedback related to pitch from the right hemisphere. PMID:24634644

  18. Generalized approach for using unbiased symmetric metrics with negative values: normalized mean bias factor and normalized mean absolute error factor

    EPA Science Inventory

    Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations...

  19. Total lymphocyte count as a predictor of absolute CD4+ count and CD4+ percentage in HIV-infected persons.

    PubMed

    Blatt, S P; Lucey, C R; Butzin, C A; Hendrix, C W; Lucey, D R

    1993-02-03

    To determine whether the total lymphocyte count (TLC) accurately predicts a low absolute CD4+ T-cell count and CD4+ percentage in persons infected with human immunodeficiency virus (HIV). Retrospective analysis of data collected in the US Air Force HIV Natural History Study. Military medical center that performs annual medical evaluation of all HIV-infected US Air Force personnel. A total of 828 consecutive patients with no prior history of zidovudine use, evaluated from January 1985 through July 1991. For patients with multiple observations over time, a single data point within each 6-month interval was included in the analysis (N = 2866). The sensitivity, specificity, and likelihood ratio (LR) of the TLC, in the range of 1.00 × 10^9/L to 2.00 × 10^9/L, in predicting an absolute CD4+ T-cell count less than 0.20 × 10^9/L or a CD4+ percentage less than 20% were calculated. In addition, the LR and pretest probability of significant immunosuppression were used to calculate posttest probabilities of a low CD4+ count for a given TLC value. The LR of the TLC in predicting an absolute CD4+ count < 0.20 × 10^9/L increased from 2.4 (95% confidence interval, 2.2 to 2.5) for all TLCs less than 2.00 × 10^9/L, to 33.2 (95% confidence interval, 24.1 to 45.7) for all TLCs less than 1.00 × 10^9/L. The specificity for this prediction increased from 57% to 97% over this range. The LR also increased from 1.4 (95% confidence interval, 1.3 to 1.6) for all TLCs less than 2.00 × 10^9/L to 9.7 (95% confidence interval, 7.1 to 13.1) for all TLCs less than 1.00 × 10^9/L in predicting a CD4+ percentage less than 20%. The TLC, between 1.00 × 10^9/L and 2.00 × 10^9/L, appears to be a useful predictor of significant immunosuppression as measured by a CD4+ T-cell count less than 0.20 × 10^9/L in HIV-infected persons. The LR for a given TLC value and the pretest probability of immunosuppression can be used to determine the posttest probability of significant immunosuppression in
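
    The posttest probabilities referred to above follow from Bayes' theorem in odds form; a minimal sketch using one likelihood ratio quoted in the record and an assumed (illustrative) pretest probability.

    ```python
    def posttest_probability(pretest_prob, likelihood_ratio):
        """Convert a pretest probability to a posttest probability via odds:
        posttest_odds = pretest_odds * LR."""
        pretest_odds = pretest_prob / (1.0 - pretest_prob)
        posttest_odds = pretest_odds * likelihood_ratio
        return posttest_odds / (1.0 + posttest_odds)

    # LR = 33.2 for TLC < 1.00 x 10^9/L predicting CD4+ < 0.20 x 10^9/L (from this record);
    # the 20% pretest probability is an illustrative assumption, not a value from the study.
    print(posttest_probability(0.20, 33.2))  # ~0.89
    ```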

  20. Demonstrating the Error Budget for the Climate Absolute Radiance and Refractivity Observatory Through Solar Irradiance Measurements

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2016-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate change projections. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach the on-orbit accuracies required to allow climate change observations to survive data gaps and to observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and the application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models, demonstrating accuracy values that give confidence in the error budget for the CLARREO reflectance retrieval.

  1. Reliable absolute analog code retrieval approach for 3D measurement

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun

    2017-11-01

    The wrapped phase obtained with the phase-shifting approach can be unwrapped using Gray code, but both wrapped-phase errors and Gray-code decoding errors can produce period jump errors, which lead to gross measurement errors. This paper therefore presents a reliable absolute analog code retrieval approach. A combination of unequal-period Gray code and phase-shifting patterns at high frequencies is used to obtain a high-frequency absolute analog code, and at low frequencies the same unequal-period combination patterns are used to obtain the low-frequency absolute analog code. The difference between the two absolute analog codes is then employed to eliminate period jump errors, so that a reliable unwrapped result can be obtained. Error analysis is used to determine the applicable conditions, and the approach is verified through theoretical analysis. The proposed approach was further verified experimentally. Theoretical analysis and experimental results demonstrate that the proposed approach can perform reliable analog code unwrapping.

  2. Absolute calibration of optical flats

    DOEpatents

    Sommargren, Gary E.

    2005-04-05

    The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.

  3. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems, such as minimizing the maximum lateness on one or many machines and minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, the instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.

  4. Improving the Glucose Meter Error Grid With the Taguchi Loss Function.

    PubMed

    Krouwer, Jan S

    2016-07-01

    Glucose meters often have similar performance when compared by error grid analysis. This is one reason that other statistics, such as the mean absolute relative deviation (MARD), are used to further differentiate performance. The problem with MARD is that too much information is lost. Additional information is, however, available within the A zone of an error grid by using the Taguchi loss function. Applying the Taguchi loss function gives each glucose meter difference from reference a value ranging from 0 (no error) to 1 (error reaches the A-zone limit). These values are averaged over all data, which provides an indication of the risk of an incorrect medical decision. This allows one to differentiate glucose meter performance in the common case where meters have a high percentage of values in the A zone and no values beyond the B zone. Examples are provided using simulated data. © 2015 Diabetes Technology Society.
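
    A sketch of the idea, assuming each reading's loss is the squared ratio of its deviation from reference to the A-zone half-width at that glucose level, capped at 1; the exact loss form and the ±15 mg/dL / ±15% A-zone limits used here are assumptions for illustration, not the paper's specification.

    ```python
    import numpy as np

    def taguchi_a_zone_loss(meter, reference, abs_limit=15.0, rel_limit=0.15):
        """Average Taguchi-style loss inside the error-grid A zone (sketch).

        Each reading contributes ((meter - reference) / a_zone_limit)**2, clipped to 1,
        so 0 means perfect agreement and 1 means the error reaches the A-zone boundary.
        """
        meter = np.asarray(meter, float)
        reference = np.asarray(reference, float)
        # Assumed A-zone half-width: absolute at low glucose, percentage at high glucose.
        limit = np.where(reference < abs_limit / rel_limit, abs_limit, rel_limit * reference)
        loss = np.clip(((meter - reference) / limit) ** 2, 0.0, 1.0)
        return loss.mean()

    # Simulated data: two meters with the same zone classification but different risk.
    reference = np.array([80.0, 120.0, 150.0, 200.0, 250.0])
    meter_a   = reference + np.array([ 2.0,  -3.0,  4.0,  -5.0,  6.0])   # small deviations
    meter_b   = reference + np.array([10.0, -14.0, 18.0, -25.0, 30.0])   # near the A-zone edge
    print(taguchi_a_zone_loss(meter_a, reference))  # close to 0
    print(taguchi_a_zone_loss(meter_b, reference))  # much larger, approaching 1
    ```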

  5. Hydraulic head estimation at unobserved locations: Approximating the distribution of the absolute error based on geologic interpretations

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini

    2017-04-01

    Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial and error, by solving the groundwater flow for a properly selected set of alternative but physically plausible geologic structures. In this work, we use (a) dimensional analysis and (b) a pulse-based stochastic model for the simulation of synthetic aquifer structures to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are shown to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the direction of establishing design criteria based on large-scale geologic maps.

  6. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.

  7. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM

    PubMed Central

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei

    2018-01-01

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942

  8. Absolute vs. relative error characterization of electromagnetic tracking accuracy

    NASA Astrophysics Data System (ADS)

    Matinfar, Mohammad; Narayanasamy, Ganesh; Gutierrez, Luis; Chan, Raymond; Jain, Ameet

    2010-02-01

    Electromagnetic (EM) tracking systems are often used for real time navigation of medical tools in an Image Guided Therapy (IGT) system. They are specifically advantageous when the medical device requires tracking within the body of a patient where line of sight constraints prevent the use of conventional optical tracking. EM tracking systems are however very sensitive to electromagnetic field distortions. These distortions, arising from changes in the electromagnetic environment due to the presence of conductive ferromagnetic surgical tools or other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data unusable. We present a mapping method for the operating region over which EM tracking sensors are used, allowing for characterization of measurement errors, in turn providing physicians with visual feedback about measurement confidence or reliability of localization estimates. In this instance, we employ a calibration phantom to assess distortion within the operating field of the EM tracker and to display in real time the distribution of measurement errors, as well as the location and extent of the field associated with minimal spatial distortion. The accuracy is assessed relative to successive measurements. Error is computed for a reference point and consecutive measurement errors are displayed relative to the reference in order to characterize the accuracy in near-real-time. In an initial set-up phase, the phantom geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean") EM environment. The registration results in the locations of sensors with respect to each other and defines the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement and orientation) are computed. Based on error thresholds provided by the

  9. The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates

    PubMed Central

    Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin

    2011-01-01

    A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). Using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors in GOES SST products in the tropical Pacific. The accuracy of the SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is reduced from 0.66 K to 0.44 K and the MAPE is 1.3%. PMID:22164030

  10. The PMA Catalogue: 420 million positions and absolute proper motions

    NASA Astrophysics Data System (ADS)

    Akhmetov, V. S.; Fedorov, P. N.; Velichko, A. B.; Shulga, V. M.

    2017-07-01

    We present a catalogue that contains about 420 million absolute proper motions of stars. It was derived from the combination of positions from Gaia DR1 and 2MASS, with a mean difference of epochs of about 15 yr. Most of the systematic zonal errors inherent in the 2MASS Catalogue were eliminated before deriving the absolute proper motions. The absolute calibration procedure (zero-pointing of the proper motions) was carried out using about 1.6 million positions of extragalactic sources. The mean formal error of the absolute calibration is less than 0.35 mas yr^-1. The derived proper motions cover the whole celestial sphere without gaps for a range of stellar magnitudes from 8 to 21 mag. In the sky areas where the extragalactic sources are invisible (the avoidance zone), a dedicated procedure was used that transforms the relative proper motions into absolute ones. The rms error of proper motions depends on stellar magnitude and ranges from 2-5 mas yr^-1 for stars with 10 mag < G < 17 mag to 5-10 mas yr^-1 for faint ones. The present catalogue contains the Gaia DR1 positions of stars for the J2015 epoch. The system of the PMA proper motions does not depend on the systematic errors of the 2MASS positions, and in the range from 14 to 21 mag represents an independent realization of a quasi-inertial reference frame in the optical and near-infrared wavelength range. The Catalogue also contains stellar magnitudes taken from the Gaia DR1 and 2MASS catalogues. A comparison of the PMA proper motions of stars with similar data from certain recent catalogues has been undertaken.

  11. Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors.

    PubMed

    Thipphavong, David P

    2016-09-01

    The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%.
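
    A minimal sketch of the TOC-matching selection step; predict_trajectory is a hypothetical stand-in for the trajectory predictor, assumed to accept a weight parameter and return (time, altitude) points.

    ```python
    def select_weight_by_toc(predict_trajectory, candidate_weights,
                             observed_toc_time, cruise_altitude_ft):
        """Pick the candidate weight whose predicted top-of-climb time best matches
        the observed TOC time.

        predict_trajectory(weight) -> list of (time_s, altitude_ft) points (hypothetical API).
        """
        best_weight, best_toc_error = None, float("inf")
        for weight in candidate_weights:
            trajectory = predict_trajectory(weight)
            # Predicted TOC: first point at (or above) the filed cruise altitude.
            toc_time = next((t for t, alt in trajectory if alt >= cruise_altitude_ft), None)
            if toc_time is None:
                continue  # this candidate never reaches cruise within the prediction horizon
            toc_error = abs(toc_time - observed_toc_time)
            if toc_error < best_toc_error:
                best_weight, best_toc_error = weight, toc_error
        return best_weight
    ```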

  12. Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors

    PubMed Central

    Thipphavong, David P.

    2017-01-01

    The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%. PMID:28684883

  13. Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors

    NASA Technical Reports Server (NTRS)

    Thipphavong, David P.

    2016-01-01

    The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%.

  14. Absolute measurement of the extreme UV solar flux

    NASA Technical Reports Server (NTRS)

    Carlson, R. W.; Ogawa, H. S.; Judge, D. L.; Phillips, E.

    1984-01-01

    A windowless rare-gas ionization chamber has been developed to measure the absolute value of the solar extreme UV flux in the 50-575 Å region. Successful results were obtained on a solar-pointing sounding rocket. The ionization chamber, operated in total absorption, is an inherently stable absolute detector of ionizing UV radiation and was designed to be independent of effects from secondary ionization and gas effusion. The net error of the measurement is ±7.3 percent, which is primarily due to residual outgassing in the instrument; other errors, such as multiple ionization, photoelectron collection, and extrapolation to zero atmospheric optical depth, are small in comparison. For the day of the flight, Aug. 10, 1982, the solar irradiance (50-575 Å), normalized to unit solar distance, was found to be (5.71 ± 0.42) × 10^10 photons cm^-2 s^-1.

  15. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... rates, which is defined as the percentage of cases with an error (expressed as the total number of cases with an error compared to the total number of cases); the percentage of cases with an improper payment...

  16. Determination and error analysis of emittance and spectral emittance measurements by remote sensing

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Kumar, R.

    1977-01-01

    The author has identified the following significant results. From the theory of remote sensing of surface temperatures, an equation for the upper bound of the absolute error of emittance was determined. It showed that the absolute error decreased with an increase in contact temperature, whereas it increased with an increase in environmental integrated radiant flux density. A change in emittance had little effect on the absolute error. A plot of the difference between temperature and band radiance temperature vs. emittance was provided for the wavelength intervals 4.5 to 5.5 microns, 8 to 13.5 microns, and 10.2 to 12.5 microns.

  17. The absolute radiometric calibration of the advanced very high resolution radiometer

    NASA Technical Reports Server (NTRS)

    Slater, P. N.; Teillet, P. M.; Ding, Y.

    1988-01-01

    The need for independent, redundant absolute radiometric calibration methods is discussed with reference to the Thematic Mapper. Uncertainty requirements for absolute calibration of between 0.5 and 4 percent are defined based on the accuracy of reflectance retrievals at an agricultural site. It is shown that even very approximate atmospheric corrections can reduce the error in reflectance retrieval to 0.02 over the reflectance range 0 to 0.4.

  18. Motion-induced error reduction by combining Fourier transform profilometry with phase-shifting profilometry.

    PubMed

    Li, Beiwen; Liu, Ziping; Zhang, Song

    2016-10-03

    We propose a hybrid computational framework to reduce motion-induced measurement error by combining Fourier transform profilometry (FTP) with phase-shifting profilometry (PSP). The proposed method is composed of three major steps: Step 1 extracts continuous relative phase maps for each isolated object with the single-shot FTP method and spatial phase unwrapping; Step 2 obtains an absolute phase map of the entire scene using the PSP method, albeit with motion-induced errors in the extracted absolute phase map; and Step 3 shifts the continuous relative phase maps from Step 1 to generate final absolute phase maps for each isolated object by referring to the absolute phase map with error from Step 2. Experiments demonstrate the success of the proposed computational framework in measuring multiple isolated, rapidly moving objects.

  19. The effect of modeled absolute timing variability and relative timing variability on observational learning.

    PubMed

    Grierson, Lawrence E M; Roberts, James W; Welsher, Arthur M

    2017-05-01

    There is much evidence to suggest that skill learning is enhanced by skill observation. Recent research on this phenomenon indicates a benefit of observing variable/erred demonstrations. In this study, we explore whether it is variability within the relative organization or the absolute parameterization of a movement that facilitates skill learning through observation. To do so, participants were randomly allocated into groups that observed a model with no variability, absolute timing variability, relative timing variability, or variability in both absolute and relative timing. All participants performed a four-segment movement pattern with specific absolute and relative timing goals prior to and following the observational intervention, as well as in a 24 h retention test and transfer tests that featured new relative and absolute timing goals. Absolute timing error indicated that all groups initially acquired the absolute timing, maintained their performance at 24 h retention, and exhibited performance deterioration in both transfer tests. Relative timing error revealed that the observation of no variability and of relative timing variability produced greater performance at the post-test, 24 h retention test, and relative timing transfer test, but performance for the no-variability group deteriorated at the absolute timing transfer test. The results suggest that the learning of absolute timing following observation unfolds irrespective of model variability. However, the learning of relative timing benefits from holding the absolute features constant, while the observation of no variability partially fails in transfer. We suggest that learning by observing no-variability and variable/erred models unfolds via similar neural mechanisms, although the latter benefits from the additional coding of information pertaining to movements that require a correction. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model

    NASA Astrophysics Data System (ADS)

    Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.

    2015-12-01

    Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently

  1. The Absolute Magnitude of the Sun in Several Filters

    NASA Astrophysics Data System (ADS)

    Willmer, Christopher N. A.

    2018-06-01

    This paper presents a table with estimates of the absolute magnitude of the Sun and the conversions from vegamag to the AB and ST systems for several wide-band filters used in ground-based and space-based observatories. These estimates use the dustless spectral energy distribution (SED) of Vega, calibrated absolutely using the SED of Sirius, to set the vegamag zero-points and a composite spectrum of the Sun that coadds space-based observations from the ultraviolet to the near-infrared with models of the Solar atmosphere. The uncertainty of the absolute magnitudes is estimated by comparing the synthetic colors with photometric measurements of solar analogs and is found to be ∼0.02 mag. Combined with the uncertainty of ∼2% in the calibration of the Vega SED, the errors of these absolute magnitudes are ∼3%–4%. Using these SEDs, for three of the most utilized filters in extragalactic work the estimated absolute magnitudes of the Sun are M_B = 5.44, M_V = 4.81, and M_K = 3.27 mag in the vegamag system and M_B = 5.31, M_V = 4.80, and M_K = 5.08 mag in AB.

  2. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Funds and State Matching and Maintenance-of-Effort (MOE Funds): (1) Percentage of cases with an error... cases in the sample with an error compared to the total number of cases in the sample; (2) Percentage of cases with an improper payment (both over and under payments), expressed as the total number of cases in...

  3. Fundamental principles of absolute radiometry and the philosophy of this NBS program (1968 to 1971)

    NASA Technical Reports Server (NTRS)

    Geist, J.

    1972-01-01

    A description is given of work performed on a program to develop an electrically calibrated detector (also called an absolute radiometer, absolute detector, or electrically calibrated radiometer) that could be used to realize, maintain, and transfer a scale of total irradiance. The program includes a comprehensive investigation of the theoretical basis of absolute detector radiometry, as well as the design and construction of a number of detectors. A theoretical analysis of the sources of error is also included.

  4. Sensitivity of feedforward neural networks to weight errors

    NASA Technical Reports Server (NTRS)

    Stevenson, Maryhelen; Widrow, Bernard; Winter, Rodney

    1990-01-01

    An analysis is made of the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. The probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).

  5. Absolute Parameters for the F-type Eclipsing Binary BW Aquarii

    NASA Astrophysics Data System (ADS)

    Maxted, P. F. L.

    2018-05-01

    BW Aqr is a bright eclipsing binary star containing a pair of F7V stars. The absolute parameters of this binary (masses, radii, etc.) are known to good precision so they are often used to test stellar models, particularly in studies of convective overshooting. ... Maxted & Hutcheon (2018) analysed the Kepler K2 data for BW Aqr and noted that it shows variability between the eclipses that may be caused by tidally induced pulsations. ... Table 1 shows the absolute parameters for BW Aqr derived from an improved analysis of the Kepler K2 light curve plus the RV measurements from both Imbert (1979) and Lester & Gies (2018). ... The values in Table 1 with their robust error estimates from the standard deviation of the mean are consistent with the values and errors from Maxted & Hutcheon (2018) based on the PPD calculated using emcee for a fit to the entire K2 light curve.

  6. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values, and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at the 95% confidence limits, and the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which provides a very good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural, or industrial use.
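
    The validation statistics listed in this record are straightforward to compute once model predictions are in hand; a minimal sketch (the sample values are invented, not Yamuna River data).

    ```python
    import numpy as np

    def validation_metrics(observed, predicted):
        """RMSE, MAE, maximum absolute error, MAPE, and maximum APE, as commonly used
        to validate time-series prediction models."""
        observed = np.asarray(observed, float)
        predicted = np.asarray(predicted, float)
        abs_err = np.abs(observed - predicted)
        ape = 100.0 * abs_err / np.abs(observed)
        return {
            "RMSE": float(np.sqrt(np.mean(abs_err ** 2))),
            "MAE": float(np.mean(abs_err)),
            "MaxAE": float(np.max(abs_err)),
            "MAPE": float(np.mean(ape)),
            "MaxAPE": float(np.max(ape)),
        }

    # Illustrative monthly dissolved-oxygen-like values (mg/L) and model predictions.
    observed  = [7.8, 7.2, 6.9, 6.5, 6.8, 7.1]
    predicted = [7.6, 7.4, 6.7, 6.6, 7.0, 6.9]
    print(validation_metrics(observed, predicted))
    ```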

  7. Absolute GPS Positioning Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ramillien, G.

    A new inverse approach is proposed for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges. This stochastic method is based on simulations of natural evolution called genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, the GA results are compared to those obtained using the classical linearized least-squares scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e., here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10^-4 m^2, corresponding to ~300-500-m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10^-5 m^2), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. The tuned GA parameters used in the different simulations are N=1000 starting individuals, with Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests of the ability of GA to recover acceptable coordinates in the presence of significant levels of noise are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement error are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.

  8. Preliminary Error Budget for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; Gubbels, Timothy; Barnes, Robert

    2011-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) plans to observe climate change trends over decadal time scales to determine the accuracy of climate projections. The project relies on spaceborne earth observations of SI-traceable variables sensitive to key decadal change parameters. The mission includes a reflected solar instrument retrieving at-sensor reflectance over the 320 to 2300 nm spectral range with 500-m spatial resolution and 100-km swath. Reflectance is obtained from the ratio of measurements of the earth's surface to those made while viewing the sun, relying on a calibration approach that retrieves reflectance with uncertainties less than 0.3%. The calibration is predicated on heritage hardware, reduction of sensor complexity, adherence to detector-based calibration standards, and an ability to simulate in the laboratory on-orbit sources in both size and brightness to provide the basis of a transfer to orbit of the laboratory calibration including a link to absolute solar irradiance measurements. The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change projections such as those in the IPCC Report. A rigorously known accuracy of both decadal change observations as well as climate projections is critical in order to enable sound policy decisions. The CLARREO Project will implement a spaceborne earth observation mission designed to provide rigorous SI traceable observations (i.e., radiance, reflectance, and refractivity) that are sensitive to a wide range of key decadal change variables, including: 1) Surface temperature and atmospheric temperature profile 2) Atmospheric water vapor profile 3) Far infrared water vapor greenhouse 4) Aerosol properties and anthropogenic aerosol direct radiative forcing 5) Total and spectral solar

  9. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    PubMed

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  10. Landsat-7 ETM+ radiometric stability and absolute calibration

    USGS Publications Warehouse

    Markham, B.L.; Barker, J.L.; Barsi, J.A.; Kaita, E.; Thome, K.J.; Helder, D.L.; Palluconi, Frank Don; Schott, J.R.; Scaramuzza, Pat; ,

    2002-01-01

    Launched in April 1999, the Landsat-7 ETM+ instrument is in its fourth year of operation. The quality of the acquired calibrated imagery continues to be high, especially with respect to its three most important radiometric performance parameters: reflective band instrument stability to better than ±1%, reflective band absolute calibration to better than ±5%, and thermal band absolute calibration to better than ±0.6 K. The ETM+ instrument has been the most stable of any of the Landsat instruments, in both the reflective and thermal channels. To date, the best on-board calibration source for the reflective bands has been the Full Aperture Solar Calibrator, which has indicated changes of at most -1.8% to -2.0% (95% C.I.) per year in the ETM+ gain (band 4). However, this change is believed to be caused by changes in the solar diffuser panel, as opposed to a change in the instrument's gain. This belief is based partially on ground observations, which bound the changes in gain in band 4 at -0.7% to +1.5%. Also, ETM+ stability is indicated by the monitoring of desert targets. These image-based results for four Saharan and Arabian sites, for a collection of 35 scenes over the three years since launch, bound the gain change at -0.7% to +0.5% in band 4. Thermal calibration from ground observations revealed an offset error of +0.31 W/(m2 sr µm) soon after launch. This offset was corrected within the U. S. ground processing system at EROS Data Center on 21-Dec-00, and since then, the band 6 on-board calibration has indicated changes of at most +0.02% to +0.04% (95% C.I.) per year. The latest ground observations have detected no remaining offset error with an RMS error of ±0.6 K. The stability and absolute calibration of the Landsat-7 ETM+ sensor make it an ideal candidate to be used as a reference source for radiometric cross-calibration with other land remote sensing satellite systems.

  11. Online absolute pose compensation and steering control of industrial robot based on six degrees of freedom laser measurement

    NASA Astrophysics Data System (ADS)

    Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu

    2017-03-01

    In-situ intelligent manufacturing for large-volume equipment requires industrial robots with absolute high-accuracy positioning and orientation steering control. Conventional robots mainly employ an offline calibration technology to identify and compensate for key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly. It is not possible to acquire a robot's actual parameters and control the absolute pose of the robot with high accuracy within a large workspace by offline calibration in real-time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six degrees of freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately measuring the position and orientation of the robot end-tool, mapping the pose error through the computed Jacobian matrix of the joint variables and correcting the joint variables, the real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible, and the online absolute accuracy of a robot is sufficiently enhanced.
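
    The core of such a scheme is a differential correction step: the pose error measured by the laser tracker is mapped through the pseudo-inverse of the kinematic Jacobian into a joint-angle update. The sketch below shows that step on a toy planar arm; it illustrates the principle only and is not the kinematic error model used in the paper.

```python
import numpy as np

def numeric_jacobian(fk, q, eps=1e-6):
    """Finite-difference Jacobian of a forward-kinematics map fk(q) -> 6-vector pose."""
    q = np.asarray(q, dtype=float)
    J = np.zeros((6, q.size))
    base = fk(q)
    for i in range(q.size):
        dq = q.copy()
        dq[i] += eps
        J[:, i] = (fk(dq) - base) / eps
    return J

def pose_correction_step(fk, q, measured_pose):
    """One differential correction: joint update that drives the modelled pose
    toward the pose measured by the laser tracker."""
    pose_error = measured_pose - fk(q)        # (dx, dy, dz, drx, dry, drz)
    J = numeric_jacobian(fk, q)
    dq = np.linalg.pinv(J) @ pose_error       # least-squares joint correction
    return q + dq

def toy_fk(q):
    """Toy 2-DOF planar arm, used only to make the sketch runnable."""
    l1, l2 = 0.5, 0.4
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y, 0.0, 0.0, 0.0, q[0] + q[1]])

q = np.array([0.3, 0.6])
target = toy_fk(np.array([0.32, 0.58]))       # pose "measured" by the tracker
for _ in range(5):
    q = pose_correction_step(toy_fk, q, target)
print("corrected joint angles:", q)
```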

  12. Spinal intra-operative three-dimensional navigation with infra-red tool tracking: correlation between clinical and absolute engineering accuracy

    NASA Astrophysics Data System (ADS)

    Guha, Daipayan; Jakubovic, Raphael; Gupta, Shaurya; Yang, Victor X. D.

    2017-02-01

    Computer-assisted navigation (CAN) may guide spinal surgeries, reliably reducing screw breach rates. Definitions of screw breach, if reported, vary widely across studies. Absolute quantitative error is theoretically a more precise and generalizable metric of navigation accuracy, but has been computed variably and reported in fewer than 25% of clinical studies of CAN-guided pedicle screw accuracy. We reviewed a prospectively-collected series of 209 pedicle screws placed with CAN guidance to characterize the correlation between clinical pedicle screw accuracy, based on postoperative imaging, and absolute quantitative navigation accuracy. We found that acceptable screw accuracy was achieved for significantly fewer screws based on the 2mm grade vs. the Heary grade, particularly in the lumbar spine. Inter-rater agreement was good for the Heary classification and moderate for the 2mm grade, significantly greater among radiologists than surgeon raters. Mean absolute translational/angular accuracies were 1.75mm/3.13° and 1.20mm/3.64° in the axial and sagittal planes, respectively. There was no correlation between clinical and absolute navigation accuracy, in part because surgeons appear to compensate for perceived translational navigation error by adjusting screw medialization angle. Future studies of navigation accuracy should therefore report absolute translational and angular errors. Clinical screw grades based on post-operative imaging, if reported, may be more reliable if assigned by multiple radiologist raters.

  13. Absolute Plate Velocities from Seismic Anisotropy: Importance of Correlated Errors

    NASA Astrophysics Data System (ADS)

    Gordon, R. G.; Zheng, L.; Kreemer, C.

    2014-12-01

    The orientation of seismic anisotropy inferred beneath the interiors of plates may provide a means to estimate the motions of the plate relative to the deeper mantle. Here we analyze a global set of shear-wave splitting data to estimate plate motions and to better understand the dispersion of the data, correlations in the errors, and their relation to plate speed. The errors in plate motion azimuths inferred from shear-wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25±0.11° Ma-1 (95% confidence limits) right-handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ=19.2°) differs insignificantly from that for continental lithosphere (σ=21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ=7.4°) than for continental lithosphere (σ=14.7°). Two of the slowest-moving plates, Antarctica (vRMS=4 mm a-1, σ=29°) and Eurasia (vRMS=3 mm a-1, σ=33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈5 mm a-1 to result in seismic anisotropy useful for estimating plate motion.

  14. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    PubMed

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with the existing methods, we do not need to estimate the threshold associated with absolute phase values to determine the fringe order error, which makes the approach more reliable and avoids the search procedure otherwise needed to detect and correct successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.
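
    For context, the sketch below shows the baseline two-frequency fringe-order calculation that such methods build on (not the correction proposed in the paper), assuming two wrapped phase maps whose fringe counts across the field differ by exactly one so that their beat phase is unambiguous.

```python
import numpy as np

def absolute_phase_two_freq(phi_hi, phi_lo, f_hi, f_lo):
    """Baseline two-frequency phase unwrapping.

    phi_hi, phi_lo : wrapped phase maps (radians) captured with fringe patterns
                     of f_hi and f_lo periods across the field, where
                     f_hi - f_lo == 1 so their beat phase is unambiguous.
    Returns the absolute phase of the high-frequency map.
    """
    # The beat (equivalent) phase covers the field exactly once, so it is absolute.
    phi_beat = np.mod(phi_hi - phi_lo, 2 * np.pi)
    # Fringe order of the high-frequency pattern; errors here are what the
    # paper's constraints are designed to detect and correct.
    k = np.round((f_hi * phi_beat - phi_hi) / (2 * np.pi))
    return phi_hi + 2 * np.pi * k

# Synthetic 1-D example
x = np.linspace(0.0, 1.0, 500, endpoint=False)
f_hi, f_lo = 16, 15
true_phase = 2 * np.pi * f_hi * x
phi_hi = np.angle(np.exp(1j * true_phase))            # wrapped high-frequency phase
phi_lo = np.angle(np.exp(1j * 2 * np.pi * f_lo * x))  # wrapped low-frequency phase
recovered = absolute_phase_two_freq(phi_hi, phi_lo, f_hi, f_lo)
print("max abs error (rad):", np.max(np.abs(recovered - true_phase)))
```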

  15. Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Casertano, Stefano

    1991-01-01

    A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.

  16. Absolute method of measuring magnetic susceptibility

    USGS Publications Warehouse

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.

  17. Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction

    PubMed Central

    Dai, Hongjun; Zhao, Shulin; Jia, Zhiping; Chen, Tianzhou

    2013-01-01

    Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is a significant issue in industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented for the compensation of erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes as quickly as possible. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighbor sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation. PMID:24013491

  18. The input ambiguity hypothesis and case blindness: an account of cross-linguistic and intra-linguistic differences in case errors.

    PubMed

    Pelham, Sabra D

    2011-03-01

    English-acquiring children frequently make pronoun case errors, while German-acquiring children rarely do. Nonetheless, German-acquiring children frequently make article case errors. It is proposed that when child-directed speech contains a high percentage of case-ambiguous forms, case errors are common in child language; when percentages are low, case errors are rare. Input to English and German children was analyzed for percentage of case-ambiguous personal pronouns on adult tiers of corpora from 24 English-acquiring and 24 German-acquiring children. Also analyzed for German was the percentage of case-ambiguous articles. Case-ambiguous pronouns averaged 63·3% in English, compared with 7·6% in German. The percentage of case-ambiguous articles in German was 77·0%. These percentages align with the children's errors reported in the literature. It appears children may be sensitive to levels of ambiguity such that low ambiguity may aid error-free acquisition, while high ambiguity may blind children to case distinctions, resulting in errors.

  19. Assessment of errors in static electrical impedance tomography with adjacent and trigonometric current patterns.

    PubMed

    Kolehmainen, V; Vauhkonen, M; Karjalainen, P A; Kaipio, J P

    1997-11-01

    In electrical impedance tomography (EIT), difference imaging is often preferred over static imaging. This is because of the many unknowns in the forward modelling which make it difficult to obtain reliable absolute resistivity estimates. However, static imaging and absolute resistivity values are needed in some potential applications of EIT. In this paper we demonstrate by simulation the effects of different error components that are included in the reconstruction of static EIT images. All simulations are carried out in two dimensions with the so-called complete electrode model. Errors that are considered are the modelling error in the boundary shape of an object, errors in the electrode sizes and localizations and errors in the contact impedances under the electrodes. Results using both adjacent and trigonometric current patterns are given.

  20. Corsica: A Multi-Mission Absolute Calibration Site

    NASA Astrophysics Data System (ADS)

    Bonnefond, P.; Exertier, P.; Laurain, O.; Guinle, T.; Femenias, P.

    2013-09-01

    In collaboration with the CNES and NASA oceanographic projects (TOPEX/Poseidon and Jason), the OCA (Observatoire de la Côte d'Azur) began developing a verification site in Corsica in 1996; it has been operational since 1998. CALibration/VALidation embraces a wide variety of activities, ranging from the interpretation of information from internal-calibration modes of the sensors to validation of the fully corrected estimates of the reflector heights using in situ data. Now, Corsica is, like the Harvest platform (NASA side) [14], an operating calibration site able to support continuous monitoring with a high level of accuracy: a 'point calibration' which yields instantaneous bias estimates with a 10-day repeatability of 30 mm (standard deviation) and mean errors of 4 mm (standard error). For a 35-day repeatability (ERS, Envisat), due to a smaller time series, the standard error is about double (~7 mm). In this paper, we will present updated results of the absolute Sea Surface Height (SSH) biases for TOPEX/Poseidon (T/P), Jason-1, Jason-2, ERS-2 and Envisat.

  1. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors.

    PubMed

    Kwon, Heon-Ju; Kim, Kyoung Won; Kim, Bohyun; Kim, So Yeon; Lee, Chul Seung; Lee, Jeongjin; Song, Gi Won; Lee, Sung Gyu

    2018-03-01

    Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), errors in percentage (%) VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. % plane-dependent error was defined as follows: |VP-VR|/W∙100. Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean and % errors in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane.
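
    The error definitions above reduce to a few lines of arithmetic. The sketch below applies them to a single hypothetical donor using values close to the reported means; because the study averages absolute errors over donors, these illustrative numbers do not reproduce the reported mean percentages exactly.

```python
def volumetry_errors(v_p, v_r, w):
    """Per-donor CT volumetry errors, following the definitions used in the study.

    v_p : prospective volume from the assumptive hepatectomy plane (mL)
    v_r : retrospective volume from the actual resection plane (mL)
    w   : intraoperatively measured graft weight (g), used as the reference
    """
    return {
        "% error in VP": abs(v_p - w) / w * 100.0,
        "% error in VR": abs(v_r - w) / w * 100.0,
        "plane-dependent error (mL)": abs(v_p - v_r),
        "% plane-dependent error": abs(v_p - v_r) / w * 100.0,
    }

# Illustrative single donor, using values close to the reported means
# (761.9 mL, 755.0 mL, 696.9 g)
print(volumetry_errors(761.9, 755.0, 696.9))
```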

  2. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors

    PubMed Central

    Kwon, Heon-Ju; Kim, Bohyun; Kim, So Yeon; Lee, Chul Seung; Lee, Jeongjin; Song, Gi Won; Lee, Sung Gyu

    2018-01-01

    Background/Aims Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Methods Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), errors in percentage (%) VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. % plane-dependent error was defined as follows: |VP–VR|/W∙100. Results Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean and % errors in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. Conclusions There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane. PMID:28759989

  3. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch

    PubMed Central

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.

    2014-01-01

    The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis whether the neural mechanisms of the left-hemisphere enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right-hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs and also for the AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left-hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in the AP compared with RP musicians. These findings indicate that the left hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545

  4. Absolute angular encoder based on optical diffraction

    NASA Astrophysics Data System (ADS)

    Wu, Jian; Zhou, Tingting; Yuan, Bo; Wang, Liqiang

    2015-08-01

    A new encoding method for an absolute angular encoder based on optical diffraction was proposed in the present study. In this method, an encoder disc is specially designed so that a series of elements is uniformly spaced around one circle and each element consists of four diffraction gratings, which are tilted in the directions of 30°, 60°, -60° and -30°, respectively. The disc is illuminated by coherent light and the diffracted signals are received. The positions of the diffracted spots are used for absolute encoding and their intensities for subdivision, which differs from the traditional optical encoder based on the transparent/opaque binary principle. Since the track width on the disc is not limited in this diffraction-based scheme, the method provides a new way to resolve the trade-off between size and resolution, which helps miniaturize the encoder. According to the proposed principle, a diffraction pattern disc with a diameter of 40 mm was made by lithography on a glass substrate. A prototype absolute angular encoder with a resolution of 20" was built. Its maximum error was measured as 78" by comparison with a small-angle measuring system based on laser beam deflection.

  5. Globular Clusters: Absolute Proper Motions and Galactic Orbits

    NASA Astrophysics Data System (ADS)

    Chemel, A. A.; Glushkova, E. V.; Dambis, A. K.; Rastorguev, A. S.; Yalyalieva, L. N.; Klinichev, A. D.

    2018-04-01

    We cross-match objects from several different astronomical catalogs to determine the absolute proper motions of stars within the 30-arcmin radius fields of 115 Milky-Way globular clusters with an accuracy of 1-2 mas yr-1. The proper motions are based on positional data recovered from the USNO-B1, 2MASS, URAT1, ALLWISE, UCAC5, and Gaia DR1 surveys with up to ten positions spanning an epoch difference of up to about 65 years, and reduced to the Gaia DR1 TGAS frame using UCAC5 as the reference catalog. Cluster members are photometrically identified by selecting horizontal- and red-giant branch stars on color-magnitude diagrams, and the mean absolute proper motions of the clusters with a typical formal error of about 0.4 mas yr-1 are computed by averaging the proper motions of selected members. The inferred absolute proper motions of clusters are combined with available radial-velocity data and heliocentric distance estimates to compute the cluster orbits in terms of the Galactic potential models based on a Miyamoto-Nagai disk, a Hernquist spheroid, and a modified isothermal dark-matter halo (axisymmetric model without a bar) and the same model plus a rotating Ferrers bar (non-axisymmetric). Five distant clusters have higher-than-escape velocities, most likely due to large errors in the computed transversal velocities, whereas the computed orbits of all other clusters remain bound to the Galaxy. Unlike previously published results, we find that the bar substantially affects the orbits of most of the clusters, even those at large Galactocentric distances, bringing appreciable chaotization, especially in the portions of the orbits close to the Galactic center, and stretching out the orbits of some of the thick-disk clusters.

  6. Error Analysis of Wind Measurements for the University of Illinois Sodium Doppler Temperature System

    NASA Technical Reports Server (NTRS)

    Pfenninger, W. Matthew; Papen, George C.

    1992-01-01

    Four-frequency lidar measurements of temperature and wind velocity require accurate frequency tuning to an absolute reference and long-term frequency stability. We quantify frequency tuning errors for the Illinois sodium system, which uses a sodium vapor cell to measure absolute frequencies and a reference interferometer to measure relative frequencies. To determine laser tuning errors, we monitor the vapor cell and interferometer during lidar data acquisition and analyze the two signals for variations as functions of time. Both the sodium cell and the interferometer are the same as those used to frequency-tune the laser. By quantifying the frequency variations of the laser during data acquisition, an error analysis of the temperature and wind measurements can be performed. These error bounds determine the confidence in the calculated temperatures and wind velocities.

  7. A review on Black-Scholes model in pricing warrants in Bursa Malaysia

    NASA Astrophysics Data System (ADS)

    Gunawan, Nur Izzaty Ilmiah Indra; Ibrahim, Siti Nur Iqmal; Rahim, Norhuda Abdul

    2017-01-01

    This paper studies the accuracy of the Black-Scholes (BS) model and the dilution-adjusted Black-Scholes (DABS) model in pricing selected warrants traded on the Malaysian market. Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) are used to compare the two models. Results show that the DABS model is more accurate than the BS model for the selected data.
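
    As a sketch of how such a comparison can be set up, the code below prices plain European calls with the standard Black-Scholes formula and scores them against hypothetical market quotes using MAE and MAPE; the dilution adjustment of the DABS model and the actual Bursa Malaysia data are omitted, and all inputs are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (warrant priced without dilution)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def mae_mape(market, model):
    market, model = np.asarray(market, float), np.asarray(model, float)
    abs_err = np.abs(market - model)
    return abs_err.mean(), (abs_err / np.abs(market)).mean() * 100.0

# Hypothetical warrant quotes and pricing inputs
market_prices = np.array([0.45, 0.62, 0.30])
model_prices = bs_call(S=np.array([1.50, 1.80, 1.20]),
                       K=np.array([1.20, 1.30, 1.00]),
                       T=np.array([0.5, 1.0, 0.75]),
                       r=0.03,
                       sigma=np.array([0.40, 0.35, 0.50]))
mae, mape = mae_mape(market_prices, model_prices)
print(f"MAE = {mae:.4f}, MAPE = {mape:.2f}%")
```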

  8. Error Analysis of non-TLD HDR Brachytherapy Dosimetric Techniques

    NASA Astrophysics Data System (ADS)

    Amoush, Ahmad

    The American Association of Physicists in Medicine Task Group Report 43 (AAPM TG-43) and its updated version TG-43U1 rely on the LiF TLD detector to determine the experimental absolute dose rate for brachytherapy. The recommended uncertainty estimates associated with TLD experimental dosimetry include 5% for statistical errors (Type A) and 7% for systematic errors (Type B). The TG-43U1 protocol does not include recommendations for other experimental dosimetric techniques to calculate the absolute dose for brachytherapy. This research used two independent experimental methods and Monte Carlo simulations to investigate and analyze uncertainties and errors associated with absolute dosimetry of HDR brachytherapy for a Tandem applicator. An A16 MicroChamber and one-dose MOSFET detectors were selected to meet the TG-43U1 recommendations for experimental dosimetry. Statistical and systematic uncertainty analyses associated with each experimental technique were analyzed quantitatively using MCNPX 2.6 to evaluate source positional error, Tandem positional error, the source spectrum, phantom size effect, reproducibility, temperature and pressure effects, volume averaging, stem and wall effects, and Tandem effect. Absolute dose calculations for clinical use are based on the Treatment Planning System (TPS) with no corrections for the above uncertainties. Absolute dose and uncertainties along the transverse plane were predicted for the A16 microchamber. The generated overall uncertainties are 22%, 17%, 15%, 15%, 16%, 17%, and 19% at 1cm, 2cm, 3cm, 4cm, and 5cm, respectively. Predicting the dose beyond 5cm is complicated due to low signal-to-noise ratio, cable effect, and stem effect for the A16 microchamber. Since dose beyond 5cm adds no clinical information, it has been ignored in this study. The absolute dose was predicted for the MOSFET detector from 1cm to 7cm along the transverse plane. The generated overall uncertainties are 23%, 11%, 8%, 7%, 7%, 9%, and 8% at 1cm, 2cm, 3cm

  9. A Model of Self-Monitoring Blood Glucose Measurement Error.

    PubMed

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of SMBG error PDF. The blood glucose range is divided into zones where error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum-likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant SD absolute error; zone 2 with constant SD relative error. Goodness-of-fit tests confirmed that identified PDF models are valid and superior to Gaussian models used so far in the literature. The proposed methodology allows to derive realistic models of SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
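
    A minimal sketch of the zone-wise fitting idea, assuming synthetic error data for a single constant-SD zone (the OTU2/BCN datasets from the study are not used): a skew-normal PDF is fitted by maximum likelihood with SciPy and checked with a Kolmogorov-Smirnov test, which is only approximate here because the parameters are estimated from the same data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic SMBG errors standing in for one constant-SD zone
# (e.g., relative errors in the higher glucose range).
errors = stats.skewnorm.rvs(a=2.0, loc=-1.5, scale=5.0, size=2000, random_state=rng)

# Maximum-likelihood fit of a skew-normal PDF to the zone's errors
a_hat, loc_hat, scale_hat = stats.skewnorm.fit(errors)

# Goodness of fit: Kolmogorov-Smirnov test against the fitted model
ks_stat, p_value = stats.kstest(errors, "skewnorm", args=(a_hat, loc_hat, scale_hat))
print(f"shape={a_hat:.2f}, loc={loc_hat:.2f}, scale={scale_hat:.2f}, KS p={p_value:.3f}")
```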

  10. Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.

    PubMed

    Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae

    2016-01-01

    Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported. However, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when open cutting guide was used and clarify whether cutting the tibia horizontally twice using the same cutting guide reduced the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly (P<0.05). Our study demonstrated that in UKA, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Automated absolute phase retrieval in across-track interferometry

    NASA Technical Reports Server (NTRS)

    Madsen, Soren N.; Zebker, Howard A.

    1992-01-01

    Discussed is a key element in the processing of topographic radar maps acquired by the NASA/JPL airborne synthetic aperture radar configured as an across-track interferometer (TOPSAR). TOPSAR utilizes a single transmit and two receive antennas; the three-dimensional target location is determined by triangulation based on a known baseline and two measured slant ranges. The slant range difference is determined very accurately from the phase difference between the signals received by the two antennas. This phase is measured modulo 2pi, whereas it is the absolute phase which relates directly to the difference in slant range. It is shown that splitting the range bandwidth into two subbands in the processor and processing each individually allows the absolute phase to be determined. The underlying principles and system errors which must be considered are discussed, together with the implementation and results from processing data acquired during the summer of 1991.

  12. An absolute cavity pyrgeometer to measure the absolute outdoor longwave irradiance with traceability to international system of units, SI

    NASA Astrophysics Data System (ADS)

    Reda, Ibrahim; Zeng, Jinan; Scheuch, Jonathan; Hanssen, Leonard; Wilthan, Boris; Myers, Daryl; Stoffel, Tom

    2012-03-01

    This article describes a method of measuring the absolute outdoor longwave irradiance using an absolute cavity pyrgeometer (ACP), U.S. Patent application no. 13/049,275. The ACP consists of a domeless thermopile pyrgeometer, a gold-plated concentrator, a temperature controller, and a data acquisition system. The dome was removed from the pyrgeometer to eliminate errors associated with dome transmittance and the dome correction factor. To avoid thermal convection and wind effect errors resulting from using a domeless thermopile, the gold-plated concentrator was placed above the thermopile. The concentrator is a dual compound parabolic concentrator (CPC) with a 180° view angle to measure the outdoor incoming longwave irradiance from the atmosphere. The incoming irradiance is reflected from the specular gold surface of the CPC and concentrated on the 11 mm diameter of the pyrgeometer's blackened thermopile. The CPC's interior surface design and the resulting cavitation result in a throughput value that was characterized by the National Institute of Standards and Technology. The ACP was installed horizontally outdoors on an aluminum plate connected to the temperature controller to control the pyrgeometer's case temperature. The responsivity of the pyrgeometer's thermopile detector was determined by lowering the case temperature and calculating the rate of change of the thermopile output voltage versus the changing net irradiance. The responsivity is then used to calculate the absolute atmospheric longwave irradiance with an uncertainty estimate (U95) of ±3.96 W m-2 with traceability to the International System of Units, SI. The measured irradiance was compared with the irradiance measured by two pyrgeometers calibrated by the World Radiation Center with traceability to the Interim World Infrared Standard Group, WISG. A total of 408 readings were collected over three different nights. The calculated irradiance measured by the ACP was 1.5 W/m2 lower than that measured by the two

  13. Guidance for deriving and presenting percentage study weights in meta-analysis of test accuracy studies.

    PubMed

    Burke, Danielle L; Ensor, Joie; Snell, Kym I E; van der Windt, Danielle; Riley, Richard D

    2018-06-01

    Percentage study weights in meta-analysis reveal the contribution of each study toward the overall summary results and are especially important when some studies are considered outliers or at high risk of bias. In meta-analyses of test accuracy reviews, such as a bivariate meta-analysis of sensitivity and specificity, the percentage study weights are not currently derived. Rather, the focus is on representing the precision of study estimates on receiver operating characteristic plots by scaling the points relative to the study sample size or to their standard error. In this article, we recommend that researchers should also provide the percentage study weights directly, and we propose a method to derive them based on a decomposition of Fisher information matrix. This method also generalises to a bivariate meta-regression so that percentage study weights can also be derived for estimates of study-level modifiers of test accuracy. Application is made to two meta-analyses examining test accuracy: one of ear temperature for diagnosis of fever in children and the other of positron emission tomography for diagnosis of Alzheimer's disease. These highlight that the percentage study weights provide important information that is otherwise hidden if the presentation only focuses on precision based on sample size or standard errors. Software code is provided for Stata, and we suggest that our proposed percentage weights should be routinely added on forest and receiver operating characteristic plots for sensitivity and specificity, to provide transparency of the contribution of each study toward the results. This has implications for the PRISMA-diagnostic test accuracy guidelines that are currently being produced. Copyright © 2017 John Wiley & Sons, Ltd.
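
    The bivariate Fisher-information decomposition itself is beyond a short sketch, but the familiar univariate analogue below conveys what a percentage study weight is: each study's inverse-variance weight divided by the total, expressed as a percentage. The standard errors and heterogeneity variance used here are invented for illustration and do not come from the two meta-analyses in the paper.

```python
import numpy as np

def percentage_weights(se, tau2=0.0):
    """Inverse-variance percentage weights, w_i / sum(w) * 100,
    with w_i = 1 / (se_i^2 + tau^2) for a random-effects model."""
    w = 1.0 / (np.asarray(se, dtype=float) ** 2 + tau2)
    return w / w.sum() * 100.0

# Hypothetical standard errors of logit-sensitivity from four studies
se = np.array([0.30, 0.45, 0.25, 0.60])
print(percentage_weights(se, tau2=0.05).round(1))   # -> [31.8 17.7 39.6 10.9]
```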

  14. Increased Regulatory T-Cell Percentage Contributes to Poor CD4(+) Lymphocytes Recovery: A 2-Year Prospective Study After Introduction of Antiretroviral Therapy.

    PubMed

    Saison, Julien; Maucort Boulch, Delphine; Chidiac, Christian; Demaret, Julie; Malcus, Christophe; Cotte, Laurent; Poitevin-Later, Francoise; Miailhes, Patrick; Venet, Fabienne; Trabaud, Mary Anne; Monneret, Guillaume; Ferry, Tristan

    2015-04-01

    Background. The primary aim of this study was to determine the impact of regulatory T-cell (Treg) percentage on immune recovery in human immunodeficiency virus (HIV)-infected patients after antiretroviral therapy introduction. Methods. A 2-year prospective study was conducted in HIV-1 chronically infected naive patients with CD4 count <500 cells/mm(3). Regulatory T cells were identified as CD4(+)CD25(high)CD127(low) cells among CD4(+) lymphocytes. The effect of Treg percentage at inclusion on CD4 evolution over time was analyzed using a mixed-effect Poisson regression for count data. Results. Fifty-eight patients were included (median CD4 = 293/mm(3), median Treg percentage = 6.1%). Percentage of Treg at baseline and CD4 nadir were independently related to the evolution of CD4 absolute value according to time: (1) at any given nadir CD4 count, a 1% increase of initial Treg was associated with a 1.9% lower CD4 absolute value at month 24; (2) at any given Treg percentage at baseline, a 10 cell/mm(3) increase of CD4 nadir was associated with a 2.4% increase of CD4 at month 24; and (3) both effects did not attenuate with time. The higher the CD4 nadir, the smaller the effect of baseline Treg on CD4 evolution. Conclusions. Regulatory T-cell percentage at baseline is a strong independent prognostic factor of immune recovery, particularly among patients with a low CD4 nadir.

  15. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch.

    PubMed

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A; Larson, Charles R

    2014-02-01

    The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis whether the neural mechanisms of the left-hemisphere enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right-hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs and also for the AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left-hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in the AP compared with RP musicians. These findings indicate that the left hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Spatial and temporal variability of the overall error of National Atmospheric Deposition Program measurements determined by the USGS collocated-sampler program, water years 1989-2001

    USGS Publications Warehouse

    Wetherbee, G.A.; Latysh, N.E.; Gordon, J.D.

    2005-01-01

    Data from the U.S. Geological Survey (USGS) collocated-sampler program for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) are used to estimate the overall error of NADP/NTN measurements. Absolute errors are estimated by comparison of paired measurements from collocated instruments. Spatial and temporal differences in absolute error were identified and are consistent with longitudinal distributions of NADP/NTN measurements and spatial differences in precipitation characteristics. The magnitude of error for calcium, magnesium, ammonium, nitrate, and sulfate concentrations, specific conductance, and sample volume is of minor environmental significance to data users. Data collected after a 1994 sample-handling protocol change are prone to less absolute error than data collected prior to 1994. Absolute errors are smaller during non-winter months than during winter months for selected constituents at sites where frozen precipitation is common. Minimum resolvable differences are estimated for different regions of the USA to aid spatial and temporal watershed analyses.

  17. Elevation correction factor for absolute pressure measurements

    NASA Technical Reports Server (NTRS)

    Panek, Joseph W.; Sorrells, Mark R.

    1996-01-01

    With the arrival of highly accurate multi-port pressure measurement systems, conditions that previously did not affect overall system accuracy must now be scrutinized closely. Errors caused by elevation differences between pressure sensing elements and model pressure taps can be quantified and corrected. With multi-port pressure measurement systems, the sensing elements are connected to pressure taps that may be many feet away. The measurement system may be at a different elevation than the pressure taps due to laboratory space or test article constraints. This difference produces a pressure gradient that is inversely proportional to height within the interface tube. The pressure at the bottom of the tube will be higher than the pressure at the top due to the weight of the tube's column of air. Tubes with higher pressures will exhibit larger absolute errors due to the higher air density. The above effect is well documented but has generally been taken into account with large elevations only. With error analysis techniques, the loss in accuracy from elevation can be easily quantified. Correction factors can be applied to maintain the high accuracies of new pressure measurement systems.
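
    The correction described here is just the hydrostatic pressure of the air column inside the interface tube. A minimal sketch, assuming dry air, the ideal gas law, and illustrative input values:

```python
def elevation_correction(pressure_pa, height_m, temperature_k=293.15):
    """Pressure offset (Pa) of an air column of the given height inside the
    interface tube: delta_p = rho * g * h, with rho from the ideal gas law."""
    R_AIR = 287.05    # specific gas constant of dry air, J/(kg K)
    G = 9.80665       # standard gravity, m/s^2
    rho = pressure_pa / (R_AIR * temperature_k)   # air density inside the tube
    return rho * G * height_m

# A sensing element 2 m below its pressure tap reads high by roughly:
print(f"{elevation_correction(101325.0, 2.0):.1f} Pa")
```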

  18. Changes in relative and absolute concentrations of plasma phospholipid fatty acids observed in a randomized trial of Omega-3 fatty acids supplementation in Uganda.

    PubMed

    Song, Xiaoling; Diep, Pho; Schenk, Jeannette M; Casper, Corey; Orem, Jackson; Makhoul, Zeina; Lampe, Johanna W; Neuhouser, Marian L

    2016-11-01

    Expressing circulating phospholipid fatty acids (PLFAs) in relative concentrations has some limitations: the totals of all fatty acids sum to 100%; therefore, the values of individual fatty acids are not independent. In this study we examined whether both relative and absolute metrics could effectively measure changes in circulating PLFA concentrations in an intervention trial. Sixty-six HIV- and HHV8-infected patients in Uganda were randomized to take 3g/d of either long-chain omega-3 fatty acids (1856mg EPA and 1232mg DHA) or high-oleic safflower oil in a 12-week double-blind trial. Plasma samples were collected at baseline and at the end of the trial. Relative weight percentage and absolute concentrations of 41 plasma PLFAs were measured using gas chromatography. Total cholesterol was also measured. Intervention-effect changes in concentrations were calculated as differences between the end of the 12-week trial and baseline. Pearson correlations of relative and absolute concentration changes in individual PLFAs were high (>0.6) for 37 of the 41 PLFAs analyzed. In the intervention arm, 17 PLFAs changed significantly in relative concentration and 16 in absolute concentration, 15 of which were identical. The absolute concentration of total PLFAs decreased by 95.1mg/L (95% CI: 26.0, 164.2; P=0.0085), but total cholesterol did not change significantly in the intervention arm. No significant change was observed in any of the measurements in the placebo arm. Both relative weight percentage and absolute concentrations could effectively measure changes in plasma PLFA concentrations. EPA and DHA supplementation changes the concentrations of multiple plasma PLFAs besides EPA and DHA. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Absolute plate velocities from seismic anisotropy: Importance of correlated errors

    NASA Astrophysics Data System (ADS)

    Zheng, Lin; Gordon, Richard G.; Kreemer, Corné

    2014-09-01

    The errors in plate motion azimuths inferred from shear wave splitting beneath any one tectonic plate are shown to be correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. Our preferred set of angular velocities, SKS-MORVEL, is determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25 ± 0.11° Ma-1 (95% confidence limits) right handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ = 19.2°) differs insignificantly from that for continental lithosphere (σ = 21.6°). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ = 7.4°) than for continental lithosphere (σ = 14.7°). Two of the slowest-moving plates, Antarctica (vRMS = 4 mm a-1, σ = 29°) and Eurasia (vRMS = 3 mm a-1, σ = 33°), have two of the largest within-plate dispersions, which may indicate that a plate must move faster than ≈ 5 mm a-1 to result in seismic anisotropy useful for estimating plate motion. The tendency of observed azimuths on the Arabia plate to be counterclockwise of plate motion may provide information about the direction and amplitude of superposed asthenospheric flow or about anisotropy in the lithospheric mantle.

  20. Estimating error statistics for Chambon-la-Forêt observatory definitive data

    NASA Astrophysics Data System (ADS)

    Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly

    2017-08-01

    We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set as a large, weakly non-linear, inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. Therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week - i.e. within the daily to weekly measures recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that statistics are less favourable when this latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to have proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even on isolated sites like CLF observatory, there are very localised signals over a large span of temporal frequencies that can be as large as 1 nT. The SDs reported here encompass signals with wavelengths of a few hundred metres and periods of less than a day.
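
    To make the AR1 prior concrete, the sketch below estimates the coefficient and innovation standard deviation of an AR(1) process from a synthetic baseline residual series; it illustrates only the statistical model, not the observatory's full calibration inversion, and all numbers are invented.

```python
import numpy as np

def fit_ar1(x):
    """Estimate the coefficient and innovation SD of an AR(1) process
    x_t = phi * x_{t-1} + e_t from a (mean-removed) series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    resid = x[1:] - phi * x[:-1]
    return phi, np.std(resid, ddof=1)

# Synthetic baseline residual series (nT) with AR(1) behaviour, for illustration
rng = np.random.default_rng(2)
true_phi, n = 0.9, 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = true_phi * x[t - 1] + rng.normal(0.0, 0.1)
phi_hat, sd_hat = fit_ar1(x)
print(f"estimated phi = {phi_hat:.2f}, innovation SD = {sd_hat:.3f} nT")
```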

  1. The AFGL (Air Force Geophysics Laboratory) Absolute Gravity System’s Error Budget Revisited.

    DTIC Science & Technology

    1985-05-08

    also be induced by equipment not associated with the system. A systematic bias of 68 pgal was observed by the Istituto di Metrologia "G. Colonnetti...Laboratory Astrophysics, Univ. of Colo., Boulder, Colo. IMGC: Istituto di Metrologia "G. Colonnetti", Torino, Italy Table 1. Absolute Gravity Values...measurements were made with three Model D and three Model G La Coste-Romberg gravity meters. These instruments were operated by the following agencies

  2. [Errors in Peruvian medical journals references].

    PubMed

    Huamaní, Charles; Pacheco-Romero, José

    2009-01-01

    References are fundamental in our studies; an adequate selection is as important as an adequate description. The objective was to determine the number of errors in a sample of references found in Peruvian medical journals. We reviewed 515 references from scientific papers, selected by systematic randomized sampling, and corroborated the reference information against the original document or its citation in Pubmed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 types of errors; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, and the errors were varied and multiple. We suggest systematic review of references during the editorial process, as well as extending the discussion of this topic. Keywords: references, periodicals, research, bibliometrics.

  3. Absolute Radiometric Calibration of EUNIS-06

    NASA Technical Reports Server (NTRS)

    Thomas, R. J.; Rabin, D. M.; Kent, B. J.; Paustian, W.

    2007-01-01

    The Extreme-Ultraviolet Normal-Incidence Spectrometer (EUNIS) is a sounding-rocket payload that obtains imaged high-resolution spectra of individual solar features, providing information about the Sun's corona and upper transition region. Shortly after its successful initial flight last year, a complete end-to-end calibration was carried out to determine the instrument's absolute radiometric response over its longwave bandpass of 300-370 Å. The measurements were done at the Rutherford-Appleton Laboratory (RAL) in England, using the same vacuum facility and EUV radiation source used in the pre-flight calibrations of both SOHO/CDS and Hinode/EIS, as well as in three post-flight calibrations of our SERTS sounding rocket payload, the precursor to EUNIS. The unique radiation source provided by the Physikalisch-Technische Bundesanstalt (PTB) had been calibrated to an absolute accuracy of 7% (1-sigma) at 12 wavelengths covering our bandpass directly against the Berlin electron storage ring BESSY, which is itself a primary radiometric source standard. Scans of the EUNIS aperture were made to determine the instrument's absolute spectral sensitivity to ±25%, considering all sources of error, and demonstrate that EUNIS-06 was the most sensitive solar EUV spectrometer yet flown. The results will be matched against prior calibrations which relied on combining measurements of individual optical components, and on comparisons with theoretically predicted 'insensitive' line ratios. Coordinated observations were made during the EUNIS-06 flight by SOHO/CDS and EIT that will allow re-calibrations of those instruments as well. In addition, future EUNIS flights will provide similar calibration updates for TRACE, Hinode/EIS, and STEREO/SECCHI/EUVI.

  4. Improvements in absolute seismometer sensitivity calibration using local earth gravity measurements

    USGS Publications Warehouse

    Anthony, Robert E.; Ringler, Adam; Wilson, David

    2018-01-01

    The ability to determine both absolute and relative seismic amplitudes is fundamentally limited by the accuracy and precision with which scientists are able to calibrate seismometer sensitivities and characterize their response. Currently, across the Global Seismic Network (GSN), errors in midband sensitivity exceed 3% at the 95% confidence interval and are the least‐constrained response parameter in seismic recording systems. We explore a new methodology utilizing precise absolute Earth gravity measurements to determine the midband sensitivity of seismic instruments. We first determine the absolute sensitivity of Kinemetrics EpiSensor accelerometers to 0.06% at the 99% confidence interval by inverting them in a known gravity field at the Albuquerque Seismological Laboratory (ASL). After the accelerometer is calibrated, we install it in its normal configuration next to broadband seismometers and subject the sensors to identical ground motions to perform relative calibrations of the broadband sensors. Using this technique, we are able to determine the absolute midband sensitivity of the vertical components of Nanometrics Trillium Compact seismometers to within 0.11% and Streckeisen STS‐2 seismometers to within 0.14% at the 99% confidence interval. The technique enables absolute calibrations from first principles that are traceable to National Institute of Standards and Technology (NIST) measurements while providing nearly an order of magnitude more precision than step‐table calibrations.

  5. Short-Term Forecasting of Loads and Wind Power for Latvian Power System: Accuracy and Capacity of the Developed Tools

    NASA Astrophysics Data System (ADS)

    Radziukynas, V.; Klementavičius, A.

    2016-04-01

    The paper analyses the performance results of the recently developed short-term forecasting suite for the Latvian power system. The system load and wind power are forecasted using ANN and ARIMA models, respectively, and the forecasting accuracy is evaluated in terms of errors, mean absolute errors and mean absolute percentage errors. The influence of additional input variables on load forecasting errors is investigated. The interplay of hourly load and wind power forecasting errors is also evaluated for the Latvian power system with historical loads (the year 2011) and planned wind power capacities (the year 2023).
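
    As a minimal illustration of the error measures named above, the sketch below computes MAE and MAPE for a made-up set of hourly load forecasts; the numbers are hypothetical and are not Latvian power-system data.

```python
import numpy as np

# Illustrative only: scoring hourly load forecasts with the error measures
# named in the abstract (MAE and MAPE). The arrays are invented numbers.
def mae(actual, forecast):
    """Mean absolute error, in the units of the data."""
    return np.mean(np.abs(actual - forecast))

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

actual = np.array([820.0, 790.0, 840.0, 900.0])    # MW, hypothetical
forecast = np.array([805.0, 810.0, 830.0, 930.0])  # MW, hypothetical
print(f"MAE  = {mae(actual, forecast):.1f} MW")
print(f"MAPE = {mape(actual, forecast):.2f} %")
```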

  6. Network Adjustment of Orbit Errors in SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Bahr, Hermann; Hanssen, Ramon

    2010-03-01

    Orbit errors can induce significant long-wavelength error signals in synthetic aperture radar (SAR) interferograms and thus bias estimates of wide-scale deformation phenomena. The presented approach aims at correcting orbit errors in a preprocessing step to deformation analysis by modifying state vectors. Whereas absolute errors in the orbital trajectory are negligible, the influence of relative errors (baseline errors) is parametrised by their parallel and perpendicular components as linear functions of time. As the sensitivity of the interferometric phase is only significant with respect to the perpendicular baseline and the rate of change of the parallel baseline, the algorithm focuses on estimating updates to these two parameters. This is achieved by a least squares approach, where the unwrapped residual interferometric phase is observed and atmospheric contributions are considered to be stochastic with constant mean. To enhance reliability, baseline errors are adjusted in an overdetermined network of interferograms, yielding individual orbit corrections per acquisition.

  7. Relevant reduction effect with a modified thermoplastic mask of rotational error for glottic cancer in IMRT

    NASA Astrophysics Data System (ADS)

    Jung, Jae Hong; Jung, Joo-Young; Cho, Kwang Hwan; Ryu, Mi Ryeong; Bae, Sun Hyun; Moon, Seong Kwon; Kim, Yong Ho; Choe, Bo-Young; Suh, Tae Suk

    2017-02-01

    The purpose of this study was to analyze the glottis rotational error (GRE) when using a thermoplastic mask for patients with glottic cancer undergoing intensity-modulated radiation therapy (IMRT). We selected 20 patients with glottic cancer who had received IMRT using tomotherapy. Both kilovoltage computed tomography (planning kVCT) and megavoltage CT (daily MVCT) images were used for evaluating the error. Six anatomical landmarks in the images were defined to evaluate the correlation between the absolute GRE (°) and the length of contact between the mask and the underlying skin of the patient (mask, mm). The results were analyzed statistically using Pearson's correlation coefficient and linear regression analysis (P < 0.05). The mask and the absolute GRE were verified to have a statistical correlation (P < 0.01). We found statistical significance for each parameter in the linear regression analysis (mask versus absolute roll: P = 0.004; mask versus 3D-error: P = 0.000). The range of the 3D errors with contact by the mask was 1.2% - 39.7% between the maximum- and no-contact cases in this study. A thermoplastic mask with a tight, increased contact area may contribute to the uncertainty of reproducibility as a variation of the absolute GRE. Thus, we suggest that a modified mask, such as one that covers only the glottis area, can significantly reduce patient setup errors during treatment.

  8. Error analysis on spinal motion measurement using skin mounted sensors.

    PubMed

    Yang, Zhengyi; Ma, Heather Ting; Wang, Deming; Lee, Raymond

    2008-01-01

    Measurement errors of skin-mounted sensors in measuring forward bending movement of the lumbar spine are investigated. In this investigation, radiographic images capturing the positions of the entire lumbar spine were acquired and used as a 'gold' standard. Seventeen young male volunteers (21 (SD 1) years old) agreed to participate in the study. Lightweight miniature sensors of the electromagnetic tracking system (Fastrak) were attached to the skin overlying the spinous processes of the lumbar spine. With the sensors attached, the subjects were requested to take lateral radiographs in two postures: neutral upright and full flexion. The ranges of motion of the lumbar spine were calculated from two sets of digitized data, the bony markers of the vertebral bodies and the sensors, and compared. The differences between the two sets of results were then analyzed. The relative movement between sensor and vertebra was decomposed into sensor sliding and tilting, from which a sliding error and a tilting error were introduced. The gross motion range of forward bending of the lumbar spine measured from the bony markers of the vertebrae is 67.8 degrees (SD 10.6 degrees) and that from the sensors is 62.8 degrees (SD 12.8 degrees). The error and absolute error for the gross motion range were 5.0 degrees (SD 7.2 degrees) and 7.7 degrees (SD 3.9 degrees). The contributions of the sensors placed on S1 and L1 to the absolute error were 3.9 degrees (SD 2.9 degrees) and 4.4 degrees (SD 2.8 degrees), respectively.

  9. Anomalous annealing of floating gate errors due to heavy ion irradiation

    NASA Astrophysics Data System (ADS)

    Yin, Yanan; Liu, Jie; Sun, Youmei; Hou, Mingdong; Liu, Tianqi; Ye, Bing; Ji, Qinggang; Luo, Jie; Zhao, Peixiong

    2018-03-01

    Using the heavy ions provided by the Heavy Ion Research Facility in Lanzhou (HIRFL), the annealing of heavy-ion-induced floating gate (FG) errors in 34 nm and 25 nm NAND Flash memories has been studied. The single event upset (SEU) cross section of the FG and the evolution of the errors after irradiation are presented as functions of the ion linear energy transfer (LET) value, the data pattern and the feature size of the device. Different annealing rates for different ion LETs and different data patterns are observed in the 34 nm and 25 nm memories. The variation with annealing time of the percentage of different error patterns in the 34 nm and 25 nm memories shows that the annealing of heavy-ion-induced FG errors mainly takes place in the cells directly hit under low-LET ion exposure, and also in other cells affected by heavy ions when the ion LET is higher. The influence of multiple cell upsets (MCUs) on the annealing of FG errors is analyzed. MCUs with high error multiplicity, which account for the majority of the errors, can induce a large percentage of annealed errors.

  10. A novel capacitive absolute positioning sensor based on time grating with nanometer resolution

    NASA Astrophysics Data System (ADS)

    Pu, Hongji; Liu, Hongzhong; Liu, Xiaokang; Peng, Kai; Yu, Zhicheng

    2018-05-01

    The present work proposes a novel capacitive absolute positioning sensor based on time grating. The sensor includes a fine incremental-displacement measurement component combined with a coarse absolute-position measurement component to obtain high-resolution absolute positioning measurements. A single row type sensor was proposed to achieve fine displacement measurement, which combines the two electrode rows of a previously proposed double-row type capacitive displacement sensor based on time grating into a single row. To achieve absolute positioning measurement, the coarse measurement component is designed as a single-row type displacement sensor employing a single spatial period over the entire measurement range. In addition, this component employs a rectangular induction electrode and four groups of orthogonal discrete excitation electrodes with half-sinusoidal envelope shapes, which were formed by alternately extending the rectangular electrodes of the fine measurement component. The fine and coarse measurement components are tightly integrated to form a compact absolute positioning sensor. A prototype sensor was manufactured using printed circuit board technology for testing and optimization of the design in conjunction with simulations. Experimental results show that the prototype sensor achieves a ±300 nm measurement accuracy with a 1 nm resolution over a displacement range of 200 mm when employing error compensation. The proposed sensor is an excellent alternative to presently available long-range absolute nanometrology sensors owing to its low cost, simple structure, and ease of manufacturing.

  11. Teaching Absolute Value Meaningfully

    ERIC Educational Resources Information Center

    Wade, Angela

    2012-01-01

    What is the meaning of absolute value? And why do teachers teach students how to solve absolute value equations? Absolute value is a concept introduced in first-year algebra and then reinforced in later courses. Various authors have suggested instructional methods for teaching absolute value to high school students (Wei 2005; Stallings-Roberts…

  12. Percentages and Milk Fat

    ERIC Educational Resources Information Center

    Bu, Lingguo; Marjanovich, Angel

    2017-01-01

    Percentages have proven to be a challenging concept in school mathematics. At the surface, a percentage is merely a rational number, representing a ratio between a number and 100. At the conceptual core, however, a percentage is sensitive to the context, making sense with respect to a network of related quantities. In the effort of the authors to…

  13. Wechsler Adult Intelligence Scale-Revised Block Design broken configuration errors in nonpenetrating traumatic brain injury.

    PubMed

    Wilde, M C; Boake, C; Sherer, M

    2000-01-01

    Final broken configuration errors on the Wechsler Adult Intelligence Scale-Revised (WAIS-R; Wechsler, 1981) Block Design subtest were examined in 50 adults with moderate to severe nonpenetrating traumatic brain injury. Patients were divided into left (n = 15) and right (n = 19) hemisphere groups based on a history of unilateral craniotomy for treatment of an intracranial lesion and were compared to a group with diffuse or negative brain CT scan findings and no history of neurosurgery (n = 16). The percentage of final broken configuration errors was related to injury severity, the Benton Visual Form Discrimination Test (VFD; Benton, Hamsher, Varney, & Spreen, 1983) total score, and the number of VFD rotation and peripheral errors. The percentage of final broken configuration errors was higher in the patients with right craniotomies than in the left or no-craniotomy groups, which did not differ. Broken configuration errors did not occur more frequently on designs without an embedded grid pattern. Right craniotomy patients did not show a greater percentage of broken configuration errors on non-grid designs as compared to grid designs.

  14. The percentage of nosocomial-related out of total hospitalizations for rotavirus gastroenteritis and its association with hand hygiene compliance.

    PubMed

    Waisbourd-Zinman, Orith; Ben-Ziony, Shiri; Solter, Ester; Chodick, Gabriel; Ashkenazi, Shai; Livni, Gilat

    2011-03-01

    Because the absolute numbers of both community-acquired and nosocomial rotavirus gastroenteritis (RVGE) vary, we studied the percentage of hospitalizations for RVGE that were transmitted nosocomially as an indicator of in-hospital acquisition of the infection. In a 4-year prospective study, the percentage of nosocomial RVGE declined steadily, from 20.3% in 2003 to 12.7% in 2006 (P = .001). Concomitantly, the rate of compliance with hand hygiene increased from 33.7% to 49% (P = .012), with a significant (P < .0001) inverse association noted between the two trends. Copyright © 2011 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  15. A digital, constant-frequency pulsed phase-locked-loop instrument for real-time, absolute ultrasonic phase measurements

    NASA Astrophysics Data System (ADS)

    Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.

    2018-05-01

    A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.000 38 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, an absolute thickness measurement of borosilicate glass is presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows for many new applications beyond those possible with previous ultrasonic pulsed phase-locked-loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.

  16. Demand forecasting of electricity in Indonesia with limited historical data

    NASA Astrophysics Data System (ADS)

    Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif

    2018-03-01

    Demand forecasting of electricity is an important activity for electricity providers, giving a picture of future electricity demand. Prediction of electricity demand can be done using time series models. In this paper, the double moving average model, Holt's exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The results show that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
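
    For readers unfamiliar with the grey model, the sketch below fits a GM(1,1) model to an invented demand series and scores it with MAPE. It follows the standard GM(1,1) formulation (accumulated generating operation plus a least-squares fit) and is not the authors' implementation; the demand numbers are hypothetical.

```python
import numpy as np

# Hedged sketch of a GM(1,1) forecast, the method the abstract reports as
# most accurate. The demand series is invented, not the Indonesian data.
def gm11_forecast(x0, steps_ahead=1):
    """Fit GM(1,1) to a positive series x0 and return fitted + future values."""
    n = len(x0)
    x1 = np.cumsum(x0)                           # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack((-z1, np.ones(n - 1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # development coefficient, grey input
    k = np.arange(n + steps_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.r_[x0[0], np.diff(x1_hat)]       # restore to the original scale
    return x0_hat

demand = np.array([190.0, 205.0, 221.0, 240.0, 258.0])  # hypothetical TWh
pred = gm11_forecast(demand, steps_ahead=2)
mape = 100 * np.mean(np.abs((demand - pred[:len(demand)]) / demand))
print(f"in-sample MAPE = {mape:.2f} %, next forecasts = {pred[len(demand):]}")
```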

  17. The Zero Product Principle Error.

    ERIC Educational Resources Information Center

    Padula, Janice

    1996-01-01

    Argues that the challenge for teachers of algebra in Australia is to find ways of making the structural aspects of algebra accessible to a greater percentage of students. Uses the zero product principle to provide an example of a common student error grounded in the difficulty of understanding the structure of algebra. (DDR)

  18. The Absolute Stability Analysis in Fuzzy Control Systems with Parametric Uncertainties and Reference Inputs

    NASA Astrophysics Data System (ADS)

    Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei

    This study analyzes the absolute stability in P and PD type fuzzy logic control systems with both certain and uncertain linear plants. Stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibriums of error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. In our work, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant with the parametric robust Popov criterion unlike previous works. Moreover, a fuzzy current controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is specified with various equilibrium points of view in the simulation example. Finally, the comparisons are also given to show the effectiveness of the analysis method.

  19. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kry, S; Dromgoole, L; Alvarez, P

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7%, although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious

  20. Absolute measurements of large mirrors

    NASA Astrophysics Data System (ADS)

    Su, Peng

    times the mirror under test in relation to the test system. The result was a separation of errors in the optical test system to those errors from the mirror under test. This method proved to be accurate to 12nm rms. Another absolute measurement technique discussed in this dissertation utilizes the property of a paraboloidal surface of reflecting rays parallel to its optical axis, to its focal point. We have developed a scanning pentaprism technique that exploits this geometry to measure off-axis paraboloidal mirrors such as the GMT segments. This technique was demonstrated on a 1.7 m diameter prototype and proved to have a precision of about 50 nm rms.

  1. Easy Absolute Values? Absolutely

    ERIC Educational Resources Information Center

    Taylor, Sharon E.; Mittag, Kathleen Cage

    2015-01-01

    The authors teach a problem-solving course for preservice middle-grades education majors that includes concepts dealing with absolute-value computations, equations, and inequalities. Many of these students like mathematics and plan to teach it, so they are adept at symbolic manipulations. Getting them to think differently about a concept that they…

  2. Visual symptoms associated with refractive errors among Thangka artists of Kathmandu valley.

    PubMed

    Dhungel, Deepa; Shrestha, Gauri Shankar

    2017-12-21

    Prolonged near work, especially among people with uncorrected refractive error, is considered a potential source of visual symptoms. The present study aims to determine the visual symptoms and their association with refractive errors among Thangka artists. In a descriptive cross-sectional study, 242 (46.1%) of 525 Thangka artists examined, with ages ranging from 16 to 39 years and comprising 112 participants with significant refractive errors and 130 fully emmetropic participants, were enrolled from six Thangka painting schools. The visual symptoms were assessed using a structured questionnaire consisting of nine items scored on 0 to 6 consecutive scales. The eye examination included detailed anterior and posterior segment examination, objective and subjective refraction, and assessment of heterophoria, vergence and accommodation. Symptoms are presented as percentages and medians. Variation in the distribution of participants and symptoms was analysed using the Kruskal-Wallis test, and correlations with the Pearson correlation coefficient. A significance level of 0.05 was applied for a 95% confidence interval. The majority of participants (65.1%) in the refractive error group (REG) were above the age of 30 years, with a male predominance (61.6%), whereas the majority of participants in the normal cohort group (NCG) were below 30 years of age (72.3%) and female (51.5%). Overall, visual symptoms are common among Thangka artists. However, blurred vision (p = 0.003) and dry eye (p = 0.004) are more frequent in the REG than in the NCG. Females have slightly more symptoms than males. Most of the symptoms, such as sore/aching eyes (p = 0.003), feeling dry (p = 0.005) and blurred vision (p = 0.02), are significantly associated with astigmatism. Thangka artists present with a significant proportion of refractive error and visual symptoms, especially among females. The most commonly reported

  3. Absolutely relative or relatively absolute: violations of value invariance in human decision making.

    PubMed

    Teodorescu, Andrei R; Moran, Rani; Usher, Marius

    2016-02-01

    Making decisions based on relative rather than absolute information processing is tied to choice optimality via the accumulation of evidence differences and to canonical neural processing via accumulation of evidence ratios. These theoretical frameworks predict invariance of decision latencies to absolute intensities that maintain differences and ratios, respectively. While information about the absolute values of the choice alternatives is not necessary for choosing the best alternative, it may nevertheless hold valuable information about the context of the decision. To test the sensitivity of human decision making to absolute values, we manipulated the intensities of brightness stimuli pairs while preserving either their differences or their ratios. Although asked to choose the brighter alternative relative to the other, participants responded faster to higher absolute values. Thus, our results provide empirical evidence for human sensitivity to task irrelevant absolute values indicating a hard-wired mechanism that precedes executive control. Computational investigations of several modelling architectures reveal two alternative accounts for this phenomenon, which combine absolute and relative processing. One account involves accumulation of differences with activation dependent processing noise and the other emerges from accumulation of absolute values subject to the temporal dynamics of lateral inhibition. The potential adaptive role of such choice mechanisms is discussed.

  4. Comparing alchemical and physical pathway methods for computing the absolute binding free energy of charged ligands.

    PubMed

    Deng, Nanjie; Cui, Di; Zhang, Bin W; Xia, Junchao; Cruz, Jeffrey; Levy, Ronald

    2018-06-13

    Accurately predicting absolute binding free energies of protein-ligand complexes is important as a fundamental problem in both computational biophysics and pharmaceutical discovery. Calculating binding free energies for charged ligands is generally considered to be challenging because of the strong electrostatic interactions between the ligand and its environment in aqueous solution. In this work, we compare the performance of the potential of mean force (PMF) method and the double decoupling method (DDM) for computing absolute binding free energies for charged ligands. We first clarify an unresolved issue concerning the explicit use of the binding site volume to define the complexed state in DDM together with the use of harmonic restraints. We also provide an alternative derivation for the formula for absolute binding free energy using the PMF approach. We use these formulas to compute the binding free energy of charged ligands at an allosteric site of HIV-1 integrase, which has emerged in recent years as a promising target for developing antiviral therapy. As compared with the experimental results, the absolute binding free energies obtained by using the PMF approach show unsigned errors of 1.5-3.4 kcal mol-1, which are somewhat better than the results from DDM (unsigned errors of 1.6-4.3 kcal mol-1) using the same amount of CPU time. According to the DDM decomposition of the binding free energy, the ligand binding appears to be dominated by nonpolar interactions despite the presence of very large and favorable intermolecular ligand-receptor electrostatic interactions, which are almost completely cancelled out by the equally large free energy cost of desolvation of the charged moiety of the ligands in solution. We discuss the relative strengths of computing absolute binding free energies using the alchemical and physical pathway methods.

  5. Implementation of Automatic Clustering Algorithm and Fuzzy Time Series in Motorcycle Sales Forecasting

    NASA Astrophysics Data System (ADS)

    Rasim; Junaeti, E.; Wirantika, R.

    2018-01-01

    Accurate forecasting of the sales of a product depends on the forecasting method used. The purpose of this research is to build a motorcycle sales forecasting application using the Fuzzy Time Series method combined with interval determination using an automatic clustering algorithm. Forecasting is done using motorcycle sales data from the last ten years. The error rate of the forecasts is then measured using the Mean Percentage Error (MPE) and Mean Absolute Percentage Error (MAPE). The forecasts for the one-year period obtained in this study show good accuracy.

  6. Error analysis of 3D-PTV through unsteady interfaces

    NASA Astrophysics Data System (ADS)

    Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier

    2018-03-01

    The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned is distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature). The stronger the

  7. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...

  8. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...

  9. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...

  10. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...

  11. 19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...

  12. Automated drug dispensing system reduces medication errors in an intensive care setting.

    PubMed

    Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick

    2010-12-01

    We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit) chosen randomly, with the other unit being the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a reduced percentage of total opportunities for error in the study compared to the control unit (13.5% and 18.6%, respectively; p<.05); however, no significant difference was observed before automated dispensing system implementation (20.4% and 19.3%, respectively; not significant). Before-and-after comparisons in the study unit also showed a significantly reduced percentage of total opportunities for error (20.4% and 13.5%; p<.01). An analysis of detailed opportunities for error showed a significant impact of the automated dispensing system in reducing preparation errors (p<.05). Most errors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. Finally, the mean for working conditions improved from 1.0±0.8 to 2

  13. Numerical evaluation of magnetic absolute measurements with arbitrarily distributed DI-fluxgate theodolite orientations

    NASA Astrophysics Data System (ADS)

    Brunke, Heinz-Peter; Matzka, Jürgen

    2018-01-01

    At geomagnetic observatories the absolute measurements are needed to determine the calibration parameters of the continuously recording vector magnetometer (variometer). Absolute measurements are indispensable for determining the vector of the geomagnetic field over long periods of time. A standard DI (declination, inclination) measuring scheme for absolute measurements is routinely used in magnetic observatories. The traditional measuring scheme uses a fixed number of eight telescope orientations (Jankowski et al., 1996).

    We present a numerical method, allowing for the evaluation of an arbitrary number (minimum of five as there are five independent parameters) of telescope orientations. Our method provides D, I and Z base values and calculated error bars of them.

    A general approach has significant advantages. Additional measurements may be seamlessly incorporated for higher accuracy. Individual erroneous readings are identified and can be discarded without invalidating the entire data set. A priori information can be incorporated. We expect the general method to also ease requirements for automated DI-flux measurements. The method can reveal certain properties of the DI theodolite which are not captured by the conventional method.

    Based on the alternative evaluation method, a new, faster and less error-prone measuring scheme is presented. It avoids the need to calculate the magnetic meridian prior to the inclination measurements.

    Measurements in the vicinity of the magnetic equator are possible with theodolites and without a zenith ocular.

    The implementation of the method in MATLAB is available as source code at the GFZ Data Center (Brunke, 2017).
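
    The evaluation described above amounts to an overdetermined least-squares problem whose solution comes with error bars. The sketch below is only a generic illustration of that step: the design matrix and readings are random placeholders, not the actual DI-flux forward model, which is given in the paper and in the MATLAB source.

```python
import numpy as np

# Generic sketch only: solving an overdetermined linear system and deriving
# standard errors of the parameters from the residual variance. A and y are
# placeholders, not the DI-flux forward model of the paper.
def solve_with_errorbars(A, y):
    """Least-squares parameter estimates and their standard errors."""
    p = np.linalg.lstsq(A, y, rcond=None)[0]
    dof = A.shape[0] - A.shape[1]                  # degrees of freedom
    sigma2 = np.sum((y - A @ p) ** 2) / dof        # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)          # parameter covariance
    return p, np.sqrt(np.diag(cov))

rng = np.random.default_rng(1)
A = rng.normal(size=(12, 5))                       # 12 orientations, 5 free parameters
true_p = np.array([1.0, -0.5, 0.2, 3.0, 0.7])
y = A @ true_p + rng.normal(0, 0.01, 12)
params, errs = solve_with_errorbars(A, y)
print(params, errs)
```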

  14. Fluctuation theorems in feedback-controlled open quantum systems: Quantum coherence and absolute irreversibility

    NASA Astrophysics Data System (ADS)

    Murashita, Yûto; Gong, Zongping; Ashida, Yuto; Ueda, Masahito

    2017-10-01

    The thermodynamics of quantum coherence has attracted growing attention recently, where the thermodynamic advantage of quantum superposition is characterized in terms of quantum thermodynamics. We investigate the thermodynamic effects of quantum coherent driving in the context of the fluctuation theorem. We adopt a quantum-trajectory approach to investigate open quantum systems under feedback control. In these systems, the measurement backaction in the forward process plays a key role, and therefore the corresponding time-reversed quantum measurement and postselection must be considered in the backward process, in sharp contrast to the classical case. The state reduction associated with quantum measurement, in general, creates a zero-probability region in the space of quantum trajectories of the forward process, which causes singularly strong irreversibility with divergent entropy production (i.e., absolute irreversibility) and hence makes the ordinary fluctuation theorem break down. In the classical case, the error-free measurement ordinarily leads to absolute irreversibility, because the measurement restricts classical paths to the region compatible with the measurement outcome. In contrast, in open quantum systems, absolute irreversibility is suppressed even in the presence of the projective measurement due to those quantum rare events that go through the classically forbidden region with the aid of quantum coherent driving. This suppression of absolute irreversibility exemplifies the thermodynamic advantage of quantum coherent driving. Absolute irreversibility is shown to emerge in the absence of coherent driving after the measurement, especially in systems under time-delayed feedback control. We show that absolute irreversibility is mitigated by increasing the duration of quantum coherent driving or decreasing the delay time of feedback control.

  15. Absolute Reliability and Concurrent Validity of Hand Held Dynamometry and Isokinetic Dynamometry in the Hip, Knee and Ankle Joint: Systematic Review and Meta-analysis

    PubMed Central

    Chamorro, Claudio; Armijo-Olivo, Susan; De la Fuente, Carlos; Fuentes, Javiera; Javier Chirosa, Luis

    2017-01-01

    The purpose of the study is to establish the absolute reliability and concurrent validity between hand-held dynamometers (HHDs) and isokinetic dynamometers (IDs) in lower extremity peak torque assessment. The Medline, Embase and CINAHL databases were searched for studies related to psychometric properties in muscle dynamometry. Studies reporting the standard error of measurement SEM (%) or limit of agreement LOA (%), expressed as a percentage of the mean, were considered to establish absolute reliability, while studies using the intra-class correlation coefficient (ICC) were considered to establish concurrent validity between dynamometers. In total, 17 studies were included in the meta-analysis. The COSMIN checklist classified them between fair and poor. Using HHDs, knee extension LOA (%) was 33.59%, 95% confidence interval (CI) 23.91 to 43.26, and ankle plantar flexion LOA (%) was 48.87%, CI 35.19 to 62.56. Using IDs, hip adduction and extension, knee flexion and extension, and ankle dorsiflexion showed LOA (%) under 15%. Lower hip, knee, and ankle LOA (%) were obtained using an ID compared to an HHD. The ICC between devices ranged from 0.62, CI (0.37 to 0.87), for ankle dorsiflexion to 0.94, CI (0.91 to 0.98), for hip adduction. Very high correlations were found for hip adductors and hip flexors and moderate correlations for knee flexors/extensors and ankle plantar/dorsiflexors. PMID:29071305
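
    As a reminder of how a limits-of-agreement statistic expressed as a percentage of the mean can be computed, here is a hypothetical Bland-Altman-style sketch; the torque values are invented, and the exact LOA (%) definition used in the review may differ in detail.

```python
import numpy as np

# Hypothetical Bland-Altman-style sketch of an LOA (%) statistic: the 95%
# limits-of-agreement half-width expressed as a percentage of the mean
# measurement. Torque values are invented, not data from the review.
def loa_percent(device_a, device_b):
    diff = device_a - device_b
    loa_half_width = 1.96 * diff.std(ddof=1)       # 95% limits of agreement
    grand_mean = np.mean(np.r_[device_a, device_b])
    return 100.0 * loa_half_width / grand_mean

hhd = np.array([95.0, 110.0, 102.0, 88.0, 120.0])  # N*m, hypothetical HHD torques
iso = np.array([100.0, 118.0, 99.0, 95.0, 131.0])  # N*m, hypothetical ID torques
print(f"LOA = {loa_percent(hhd, iso):.1f} % of the mean")
```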

  16. Absolute biological needs.

    PubMed

    McLeod, Stephen

    2014-07-01

    Absolute needs (as against instrumental needs) are independent of the ends, goals and purposes of personal agents. Against the view that the only needs are instrumental needs, David Wiggins and Garrett Thomson have defended absolute needs on the grounds that the verb 'need' has instrumental and absolute senses. While remaining neutral about it, this article does not adopt that approach. Instead, it suggests that there are absolute biological needs. The absolute nature of these needs is defended by appeal to: their objectivity (as against mind-dependence); the universality of the phenomenon of needing across the plant and animal kingdoms; the impossibility that biological needs depend wholly upon the exercise of the abilities characteristic of personal agency; the contention that the possession of biological needs is prior to the possession of the abilities characteristic of personal agency. Finally, three philosophical usages of 'normative' are distinguished. On two of these, to describe a phenomenon or claim as 'normative' is to describe it as value-dependent. A description of a phenomenon or claim as 'normative' in the third sense does not entail such value-dependency, though it leaves open the possibility that value depends upon the phenomenon or upon the truth of the claim. It is argued that while survival needs (or claims about them) may well be normative in this third sense, they are normative in neither of the first two. Thus, the idea of absolute need is not inherently normative in either of the first two senses. © 2013 John Wiley & Sons Ltd.

  17. The Absolute Proper Motion of NGC 6397 Revisited

    NASA Astrophysics Data System (ADS)

    Rees, Richard; Cudworth, Kyle

    2018-01-01

    We compare several determinations of the absolute proper motion of the Galactic globular cluster NGC 6397: (1) our own determination relative to field stars derived from scans of 38 photographic plates spanning 97 years in epoch; (2) using our proper motion membership to identify cluster stars in various catalogs in the literature (UCAC4, UCAC5, PPMXL, HSOY, Tycho-2, Hipparcos, TGAS); (3) published results from the Yale SPM Program (both tied to Hipparcos and relative to galaxies) and two from HST observations relative to galaxies. The various determinations are not in good agreement. Curiously, the Yale SPM relative to galaxies does not agree with the HST determinations, and the individual HST error ellipses are close to each other but do not overlap. The Yale SPM relative to galaxies does agree with our determination, Tycho-2, and the Yale SPM tied to Hipparcos. It is not clear which of the current determinations is most reliable; we have found evidence of systematic errors in some of them (including one of the HST determinations). This research has been partially supported by the NSF.

  18. The JILA (Joint Institute for Laboratory Astrophysics) portable absolute gravity apparatus

    NASA Astrophysics Data System (ADS)

    Faller, J. E.; Guo, Y. G.; Gschwind, J.; Niebauer, T. M.; Rinker, R. L.; Xue, J.

    1983-08-01

    We have developed a new and highly portable absolute gravity apparatus based on the principles of free-fall laser interferometry. A primary concern over the past several years has been the detection, understanding, and elimination of systematic errors. In the Spring of 1982, we used this instrument to carry out a survey at twelve sites in the United States. Over a period of eight weeks, the instrument was driven a distance of nearly 20,000 km to sites in California, New Mexico, Colorado, Wyoming, Maryland, and Massachusetts. The time required to carry out a measurement at each location was typically one day. Over the next several years, our intention is to see absolute gravity measurements become both usable and used in the field. To this end, and in the context of cooperative research programs with a number of scientific institutes throughout the world, we are building additional instruments (incorporating further refinements) which are to be used for geodetic, geophysical, geological, and tectonic studies. With these new instruments we expect to improve (perhaps by a factor of two) on the 6-10 microgal accuracy of our present instrument. Today, one can make absolute gravity measurements as accurately as - possibly even more accurately than - one can make relative measurements. Given reasonable success with the new instruments in the field, the last years of this century should see absolute gravity measurement mature both as a new geodetic data type and as a useful geophysical tool.

  19. Percentage Problems in Bridging Courses

    ERIC Educational Resources Information Center

    Kachapova, Farida; Kachapov, Ilias

    2012-01-01

    Research on teaching high school mathematics shows that the topic of percentages often causes learning difficulties. This article describes a method of teaching percentages that the authors used in university bridging courses. In this method, the information from a word problem about percentages is presented in a two-way table. Such a table gives…

  20. Wind power error estimation in resource assessments.

    PubMed

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
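
    The idea of propagating a wind-speed measurement error through a power curve can be sketched as follows. The cubic toy curve, the operating point and the 10% speed error are all assumptions; because the toy curve is much steeper than the 28 fitted manufacturer curves used in the study, the propagated error comes out larger than the 5% reported there.

```python
import numpy as np

# Rough sketch of the idea in the abstract: propagate a wind-speed
# measurement error through a turbine power curve. The cubic toy curve is an
# assumption; the paper fits 28 turbine power curves with Lagrange's method.
def power_curve(v, rated_power=2000.0, rated_speed=12.0, cut_in=3.0):
    """Simplified power curve in kW (cubic ramp between cut-in and rated speed)."""
    return rated_power * np.clip((v - cut_in) / (rated_speed - cut_in), 0, 1) ** 3

v = 8.0                        # measured wind speed, m/s (illustrative)
dv = 0.10 * v                  # assumed 10% speed measurement error
dP = power_curve(v + dv) - power_curve(v - dv)   # symmetric difference
rel_err = 100 * abs(dP) / (2 * power_curve(v))   # ~ |dP/dv| * dv / P, in percent
print(f"propagated power error ≈ {rel_err:.1f} % for a 10 % speed error")
```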

  1. Wind Power Error Estimation in Resource Assessments

    PubMed Central

    Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444

  2. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry.

    PubMed

    Wang, Guochao; Tan, Lilong; Yan, Shuhua

    2018-02-07

    We report on a frequency-comb-referenced absolute interferometer which instantly measures long distances by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He-Ne laser interferometer shows a residual error of less than 70.8 nm peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10⁻⁸ versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.

  3. Measures of model performance based on the log accuracy ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.

    Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature, and we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
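
    As a concrete illustration, the sketch below computes the two metrics derived from the log accuracy ratio, following the definitions published by Morley et al.; the flux values are invented and are not the radiation belt data used in the paper.

```python
import numpy as np

# Sketch of the two metrics built from the log accuracy ratio
# ln(prediction/observation), following the published definitions
# (Morley et al.). The flux values below are invented.
def median_symmetric_accuracy(obs, pred):
    """Median symmetric accuracy, in percent."""
    q = np.log(pred / obs)                        # log accuracy ratio
    return 100.0 * (np.exp(np.median(np.abs(q))) - 1.0)

def symmetric_signed_percentage_bias(obs, pred):
    """Symmetric signed percentage bias, in percent."""
    m = np.median(np.log(pred / obs))
    return 100.0 * np.sign(m) * (np.exp(np.abs(m)) - 1.0)

obs = np.array([1.0e4, 3.2e4, 8.5e3, 5.0e4])      # e.g. electron fluxes (made up)
pred = np.array([1.4e4, 2.9e4, 1.1e4, 6.3e4])
print(f"median symmetric accuracy = {median_symmetric_accuracy(obs, pred):.1f} %")
print(f"symmetric signed bias     = {symmetric_signed_percentage_bias(obs, pred):.1f} %")
```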

  4. Measures of model performance based on the log accuracy ratio

    DOE PAGES

    Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.

    2018-01-03

    Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature, and we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.

  5. The impact of a closed-loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before-and-after study.

    PubMed

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-08-01

    To assess the impact of a closed-loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Before-and-after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Closed-loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Prescribing errors were identified in 3.8% of 2450 medication orders pre-intervention and 2.0% of 2353 orders afterwards (p < 0.001; chi-squared test). MAEs occurred in 7.0% of 1473 non-intravenous doses pre-intervention and 4.3% of 1139 afterwards (p = 0.005; chi-squared test). Patient identity was not checked for 82.6% of 1344 doses pre-intervention and 18.9% of 1291 afterwards (p < 0.001; chi-squared test). Medical staff required 15 s to prescribe a regular inpatient drug pre-intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre-intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; chi-squared test). A closed-loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication-related tasks increased.

  6. Alcohol consumption, beverage prices and measurement error.

    PubMed

    Young, Douglas J; Bielinska-Kwapisz, Agnieszka

    2003-03-01

    Alcohol price data collected by the American Chamber of Commerce Researchers Association (ACCRA) have been widely used in studies of alcohol consumption and related behaviors. A number of problems with these data suggest that they contain substantial measurement error, which biases conventional statistical estimators toward a finding of little or no effect of prices on behavior. We test for measurement error, assess the magnitude of the bias and provide an alternative estimator that is likely to be superior. The study utilizes data on per capita alcohol consumption across U.S. states and the years 1982-1997. State and federal alcohol taxes are used as instrumental variables for prices. Formal tests strongly confirm the hypothesis of measurement error. Instrumental variable estimates of the price elasticity of demand range from -0.53 to -1.24. These estimates are substantially larger in absolute value than ordinary least squares estimates, which sometimes are not significantly different from zero or even positive. The ACCRA price data are substantially contaminated with measurement error, but using state and federal taxes as instrumental variables mitigates the problem.
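
    The instrumental-variables strategy can be illustrated with a minimal two-stage least-squares sketch on simulated data (not the ACCRA data set); it shows how classical measurement error attenuates the OLS elasticity toward zero while the IV estimate, using the tax as an instrument, recovers a value close to the truth.

```python
import numpy as np

# Minimal two-stage least squares sketch of the strategy in the abstract:
# a tax serves as an instrument for an error-contaminated price. All numbers
# are simulated; this is not the ACCRA data set.
rng = np.random.default_rng(2)
n = 500
tax = rng.uniform(0.5, 3.0, n)                    # instrument
true_price = 2.0 + 1.5 * tax + rng.normal(0, 0.2, n)
price_obs = true_price + rng.normal(0, 0.5, n)    # price measured with error
log_consumption = 1.0 - 0.8 * np.log(true_price) + rng.normal(0, 0.1, n)

# First stage: project the observed log price on the instrument.
X1 = np.column_stack((np.ones(n), tax))
price_hat = X1 @ np.linalg.lstsq(X1, np.log(price_obs), rcond=None)[0]

# Second stage: regress consumption on the fitted log price.
X2 = np.column_stack((np.ones(n), price_hat))
beta_iv = np.linalg.lstsq(X2, log_consumption, rcond=None)[0]

# Naive OLS on the noisy price for comparison (attenuated toward zero).
X_ols = np.column_stack((np.ones(n), np.log(price_obs)))
beta_ols = np.linalg.lstsq(X_ols, log_consumption, rcond=None)[0]
print(f"IV elasticity ≈ {beta_iv[1]:.2f}, OLS elasticity ≈ {beta_ols[1]:.2f}")
```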

  7. Error analysis of multi-needle Langmuir probe measurement technique.

    PubMed

    Barjatya, Aroh; Merritt, William

    2018-04-01

    Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.

  8. Error analysis of multi-needle Langmuir probe measurement technique

    NASA Astrophysics Data System (ADS)

    Barjatya, Aroh; Merritt, William

    2018-04-01

    Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.

  9. Jasminum flexile flower absolute from India--a detailed comparison with three other jasmine absolutes.

    PubMed

    Braun, Norbert A; Kohlenberg, Birgit; Sim, Sherina; Meier, Manfred; Hammerschmidt, Franz-Josef

    2009-09-01

    Jasminum flexile flower absolute from the south of India and the corresponding vacuum headspace (VHS) sample of the absolute were analyzed using GC and GC-MS. Three other commercially available Indian jasmine absolutes from the species: J. sambac, J. officinale subsp. grandiflorum, and J. auriculatum and the respective VHS samples were used for comparison purposes. One hundred and twenty-one compounds were characterized in J. flexile flower absolute, with methyl linolate, benzyl salicylate, benzyl benzoate, (2E,6E)-farnesol, and benzyl acetate as the main constituents. A detailed olfactory evaluation was also performed.

  10. Pixel-based absolute surface metrology by three flat test with shifted and rotated maps

    NASA Astrophysics Data System (ADS)

    Zhai, Dede; Chen, Shanyong; Xue, Shuai; Yin, Ziqiang

    2018-03-01

    The traditional three-flat test provides the absolute profile along only one surface diameter. In this paper, an absolute testing algorithm based on shift-rotation with the three-flat test is proposed to reconstruct the two-dimensional surface exactly. Pitch and yaw errors during the shift procedure are analyzed and compensated in our method. Compared with the multi-rotation method proposed before, it needs only a 90° rotation and a shift, which is easy to carry out, especially for large surfaces. It allows pixel-level spatial resolution to be achieved without interpolation or assumptions about the test surface. In addition, numerical simulations and optical tests are implemented and show the high-accuracy recovery capability of the proposed method.

  11. Numerical model estimating the capabilities and limitations of the fast Fourier transform technique in absolute interferometry

    NASA Astrophysics Data System (ADS)

    Talamonti, James J.; Kay, Richard B.; Krebs, Danny J.

    1996-05-01

    A numerical model was developed to emulate the capabilities of systems performing noncontact absolute distance measurements. The model incorporates known methods to minimize signal processing and digital sampling errors and evaluates the accuracy limitations imposed by spectral peak isolation by using Hanning, Blackman, and Gaussian windows in the fast Fourier transform technique. We applied this model to the specific case of measuring the relative lengths of a compound Michelson interferometer. By processing computer-simulated data through our model, we project the ultimate precision for ideal data and for data containing AM-FM noise. The precision is shown to be limited by nonlinearities in the laser scan.
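
    A brief sketch of the windowed-FFT peak isolation that the model evaluates, using invented signal parameters rather than anything from the paper:

```python
# Sketch: isolating a spectral peak with different FFT windows.
import numpy as np

rng = np.random.default_rng(0)
fs, n = 1000.0, 4096
t = np.arange(n) / fs
signal = np.cos(2 * np.pi * 123.4 * t) + 0.01 * rng.standard_normal(n)

for name, window in [("rectangular", np.ones(n)),
                     ("hanning", np.hanning(n)),
                     ("blackman", np.blackman(n))]:
    spectrum = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    peak = freqs[np.argmax(spectrum)]
    print(f"{name:12s} peak at {peak:.2f} Hz")
```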

  12. Error correcting coding-theory for structured light illumination systems

    NASA Astrophysics Data System (ADS)

    Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben

    2017-06-01

    Intensity discrete structured light illumination systems project a series of projection patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error-correcting code is advantageous in many ways: it reduces the effect of random intensity noise, it corrects outliers near the border of the fringe commonly present when using intensity discrete patterns, and it provides robustness in case of severe measurement errors (even burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in safety-critical applications such as the monitoring of deformations of components in nuclear power plants, where high reliability is ensured even in case of short measurement disruptions. A special form of burst error is the so-called salt-and-pepper noise, which can largely be removed with error-correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
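
    As an illustration of per-pixel error correction of a temporal code word, the sketch below uses a generic Hamming(7,4) code; it is not the authors' specific construction, and the 4-bit fringe-order value is invented:

```python
# Sketch: single-bit error correction of a per-pixel grey-level code word
# with a Hamming(7,4) code, in the spirit of error-correcting coding for
# structured-light fringe-order sequences.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],     # generator matrix (data bits + parity)
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],     # parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data_bits):
    return (np.array(data_bits) @ G) % 2

def decode(received):
    syndrome = (H @ received) % 2
    if syndrome.any():                    # non-zero syndrome: locate the flipped bit
        for i in range(7):
            if np.array_equal(H[:, i], syndrome):
                received = received.copy()
                received[i] ^= 1          # correct the erroneous bit
                break
    return received[:4]                   # first four bits carry the data

fringe_order = [1, 0, 1, 1]               # 4-bit fringe-order code at one pixel
codeword = encode(fringe_order)
corrupted = codeword.copy()
corrupted[5] ^= 1                         # single bit flipped by intensity noise
print(decode(corrupted))                  # recovers [1 0 1 1]
```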

  13. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-05-15

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  14. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  15. Changes in relative and absolute concentrations of plasma phospholipid fatty acids observed in a randomized trial of Omega-3 fatty acids supplementation in Uganda

    PubMed Central

    Song, Xiaoling; Diep, Pho; Schenk, Jeannette M; Casper, Corey; Orem, Jackson; Makhoul, Zeina; Lampe, Johanna W; Neuhouser, Marian L.

    2016-01-01

    Expressing circulating phospholipid fatty acids (PLFAs) in relative concentrations has some limitations: the total of all fatty acids is summed to 100%; therefore, the values of individual fatty acids are not independent. In this study we examined whether both relative and absolute metrics could effectively measure changes in circulating PLFA concentrations in an intervention trial. 66 HIV- and HHV8-infected patients in Uganda were randomized to take 3 g/d of either long-chain omega-3 fatty acids (1,856 mg EPA and 1,232 mg DHA) or high-oleic safflower oil in a 12-week double-blind trial. Plasma samples were collected at baseline and end of trial. Relative weight percentage and absolute concentrations of 41 plasma PLFAs were measured using gas chromatography. Total cholesterol was also measured. Intervention-effect changes in concentrations were calculated as differences between end of 12-week trial and baseline. Pearson correlations of relative and absolute concentration changes in individual PLFAs were high (>0.6) for 37 of the 41 PLFAs analyzed. In the intervention arm, 17 PLFAs changed significantly in relative concentration and 16 in absolute concentration, 15 of which were identical. Absolute concentration of total PLFAs decreased 95.1 mg/L (95% CI: 26.0, 164.2; P = 0.0085), but total cholesterol did not change significantly in the intervention arm. No significant change was observed in any of the measurements in the placebo arm. Both relative weight percentage and absolute concentrations could effectively measure changes in plasma PLFA concentrations. EPA and DHA supplementation changes the concentrations of multiple plasma PLFAs besides EPA and DHA. PMID:27926458

  16. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry

    PubMed Central

    Tan, Lilong; Yan, Shuhua

    2018-01-01

    We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He–Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10⁻⁸ versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions. PMID:29414897
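
    The role of the fifth wavelength can be illustrated by the usual synthetic-wavelength relation Λ = λ₁λ₂/|λ₁ − λ₂|; the wavelengths below are placeholders, not the ones used in the reported interferometer:

```python
# Sketch: how a synthetic wavelength extends the non-ambiguous range in
# multi-wavelength interferometry. Wavelength values are illustrative.
lambda_1 = 1550.00e-9          # metres
lambda_2 = 1550.40e-9

synthetic = lambda_1 * lambda_2 / abs(lambda_2 - lambda_1)
print(f"synthetic wavelength: {synthetic * 1e3:.1f} mm")
print(f"non-ambiguous range:  {synthetic / 2 * 1e3:.1f} mm")  # half the synthetic wavelength
```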

  17. Alteration of the number and percentage of innate immune cells in preschool children from an e-waste recycling area.

    PubMed

    Zhang, Yu; Xu, Xijin; Sun, Di; Cao, Junjun; Zhang, Yuling; Huo, Xia

    2017-11-01

    Heavy metal lead (Pb) and cadmium (Cd) are widespread environmental contaminants and exert detrimental effects on the immune system. We evaluated the association between Pb/Cd exposures and innate immune cells in children from an electronic waste (e-waste) recycling area. A total of 294 preschool children were recruited, including 153 children from Guiyu (e-waste exposed group) and 141 from Haojiang (reference group). Pb and Cd levels in peripheral blood were measured by graphite furnace atomic absorption spectrophotometry, NK cell percentages were detected by flow cytometry, and other innate immune cells including monocytes, eosinophils, neutrophils and basophils were immediately measured by an automated hematology analyzer. Results showed that children in Guiyu had significantly higher Pb and Cd levels than those in the reference group. Absolute counts of monocytes, eosinophils, neutrophils and basophils, as well as percentages of eosinophils and neutrophils, were significantly higher in the Guiyu group. In contrast, NK cell percentages were significantly lower in the Guiyu group. Pb elicited a significant escalation in counts of monocytes, eosinophils and basophils, as well as percentages of monocytes, but a decline in percentages of neutrophils, in different quintiles with respect to the first quintile of Pb concentrations. Cd induced a significant increase in counts and percentages of neutrophils in the highest quintile compared with the first quintile of Cd concentrations. We concluded that alterations of the number and percentage of innate immune cells are linked to higher levels of Pb and Cd, which indicates that Pb and Cd exposures might affect the innate and adaptive immune response in Guiyu children. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. "First, know thyself": cognition and error in medicine.

    PubMed

    Elia, Fabrizio; Aprà, Franco; Verhovez, Andrea; Crupi, Vincenzo

    2016-04-01

    Although error is an integral part of the world of medicine, physicians have always been little inclined to take into account their own mistakes, and the extraordinary technological progress observed in the last decades does not seem to have resulted in a significant reduction in the percentage of diagnostic errors. The failure to reduce diagnostic errors, notwithstanding the considerable investment in human and economic resources, has paved the way to new strategies made available by the development of cognitive psychology, the branch of psychology that aims at understanding the mechanisms of human reasoning. This new approach led us to realize that we are not fully rational agents able to take decisions on the basis of logical and probabilistically appropriate evaluations. In us, two different and mostly independent modes of reasoning coexist: a fast or non-analytical reasoning, which tends to be largely automatic and fast-reactive, and a slow or analytical reasoning, which permits rationally founded answers. One of the features of the fast mode of reasoning is the employment of standardized rules, termed "heuristics." Heuristics lead physicians to correct choices in a large percentage of cases. Unfortunately, cases exist wherein the heuristic triggered fails to fit the target problem, so that the fast mode of reasoning can lead us to unreflectively perform actions exposing us and others to variable degrees of risk. Cognitive errors arise as a result of these cases. Our review illustrates how cognitive errors can cause diagnostic problems in clinical practice.

  19. Information systems and human error in the lab.

    PubMed

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that introduction of these systems results in operators having less practice in dealing with unexpected events or becoming deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  20. Ultraspectral sounding retrieval error budget and estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larrabee L.; Yang, Ping

    2011-11-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  1. Ultraspectral Sounding Retrieval Error Budget and Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

    2011-01-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  2. Determination of heat capacity of ionic liquid based nanofluids using group method of data handling technique

    NASA Astrophysics Data System (ADS)

    Sadi, Maryam

    2018-01-01

    In this study a group method of data handling model has been successfully developed to predict the heat capacity of ionic liquid based nanofluids by considering reduced temperature, acentric factor and molecular weight of ionic liquids, and nanoparticle concentration as input parameters. In order to accomplish modeling, 528 experimental data points extracted from the literature have been divided into training and testing subsets. The training set has been used to determine model coefficients and the testing set has been applied for model validation. The ability and accuracy of the developed model have been evaluated by comparison of model predictions with experimental values using different statistical parameters such as coefficient of determination, mean square error and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, which indicate excellent agreement between model predictions and experimental data. Also, the results estimated by the developed GMDH model exhibit a higher accuracy when compared to the available theoretical correlations.
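
    The mean absolute percentage error quoted above is the standard MAPE; a minimal sketch, with placeholder heat-capacity values standing in for the experimental and predicted data:

```python
# Sketch: MAPE as used to validate the GMDH heat-capacity model,
# computed for a (placeholder) set of measured vs. predicted values.
import numpy as np

def mape(measured, predicted):
    measured = np.asarray(measured, float)
    predicted = np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((measured - predicted) / measured))

cp_train_meas = np.array([1.52, 1.61, 1.70, 1.84])   # illustrative values, J/(g K)
cp_train_pred = np.array([1.50, 1.63, 1.68, 1.86])
print(f"training MAPE: {mape(cp_train_meas, cp_train_pred):.2f}%")
```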

  3. Absolute Plate Velocities from Seismic Anisotropy

    NASA Astrophysics Data System (ADS)

    Kreemer, Corné; Zheng, Lin; Gordon, Richard

    2015-04-01

    The orientation of seismic anisotropy inferred beneath plate interiors may provide a means to estimate the motions of the plate relative to the sub-asthenospheric mantle. Here we analyze two global sets of shear-wave splitting data, that of Kreemer [2009] and an updated and expanded data set, to estimate plate motions and to better understand the dispersion of the data, correlations in the errors, and their relation to plate speed. We also explore the effect of using geologically current plate velocities (i.e., the MORVEL set of angular velocities [DeMets et al. 2010]) compared with geodetically current plate velocities (i.e., the GSRM v1.2 angular velocities [Kreemer et al. 2014]). We demonstrate that the errors in plate motion azimuths inferred from shear-wave splitting beneath any one tectonic plate are correlated with the errors of other azimuths from the same plate. To account for these correlations, we adopt a two-tier analysis: First, find the pole of rotation and confidence limits for each plate individually. Second, solve for the best fit to these poles while constraining relative plate angular velocities to consistency with the MORVEL relative plate angular velocities. The SKS-MORVEL absolute plate angular velocities (based on the Kreemer [2009] data set) are determined from the poles from eight plates weighted proportionally to the root-mean-square velocity of each plate. SKS-MORVEL indicates that eight plates (Amur, Antarctica, Caribbean, Eurasia, Lwandle, Somalia, Sundaland, and Yangtze) have angular velocities that differ insignificantly from zero. The net rotation of the lithosphere is 0.25±0.11° Ma-1 (95% confidence limits) right-handed about 57.1°S, 68.6°E. The within-plate dispersion of seismic anisotropy for oceanic lithosphere (σ=19.2° ) differs insignificantly from that for continental lithosphere (σ=21.6° ). The between-plate dispersion, however, is significantly smaller for oceanic lithosphere (σ=7.4° ) than for continental

  4. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  5. 75 FR 20813 - Certain Magnesia Carbon Bricks from the People's Republic of China: Amended Preliminary...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-21

    ... other errors, would result in (1) a change of at least five absolute percentage points in, but not less...) preliminary determination, or (2) a difference between a weighted-average dumping margin of zero or de minimis...

  6. Evaluation of centroiding algorithm error for Nano-JASMINE

    NASA Astrophysics Data System (ADS)

    Hara, Takuji; Gouda, Naoteru; Yano, Taihei; Yamada, Yoshiyuki

    2014-08-01

    The Nano-JASMINE mission has been designed to perform absolute astrometric measurements with unprecedented accuracy; the end-of-mission parallax standard error is required to be of the order of 3 milliarcseconds for stars brighter than 7.5 mag in the zw-band (0.6 μm-1.0 μm). These requirements set a stringent constraint on the accuracy of the estimation of the location of the stellar image on the CCD for each observation. However, each stellar image has an individual shape that depends on the spectral energy distribution of the star, the CCD properties, and the optics and its associated wavefront errors. It is therefore necessary that the centroiding algorithm achieve high accuracy for any observable. Referring to the study of Gaia, we use the LSF fitting method as the centroiding algorithm and investigate the systematic error of the algorithm for Nano-JASMINE. Furthermore, we found that the algorithm can be improved by restricting the sample LSFs when using Principal Component Analysis. We show that the centroiding algorithm error decreases after adopting this method.

  7. The correction of vibration in frequency scanning interferometry based absolute distance measurement system for dynamic measurements

    NASA Astrophysics Data System (ADS)

    Lu, Cheng; Liu, Guodong; Liu, Bingguo; Chen, Fengdong; Zhuang, Zhitao; Xu, Xinke; Gan, Yu

    2015-10-01

    Absolute distance measurement systems are of significant interest in the field of metrology, as they could improve the manufacturing efficiency and accuracy of large assemblies in fields such as aircraft construction, automotive engineering, and the production of modern windmill blades. Frequency scanning interferometry demonstrates noticeable advantages as an absolute distance measurement system: it has a high precision and does not depend on a cooperative target. In this paper, the influence of inevitable vibration in the frequency scanning interferometry based absolute distance measurement system is analyzed. The distance spectrum is broadened by the Doppler effect caused by vibration, which introduces a measurement error more than 10³ times bigger than the changes of the optical path difference. In order to decrease the influence of vibration, the changes of the optical path difference are monitored by a frequency-stabilized laser, which runs parallel to the frequency scanning interferometry. The experiment has verified the effectiveness of this method.

  8. Self-digitization microfluidic chip for absolute quantification of mRNA in single cells.

    PubMed

    Thompson, Alison M; Gansen, Alexander; Paguirigan, Amy L; Kreutz, Jason E; Radich, Jerald P; Chiu, Daniel T

    2014-12-16

    Quantification of mRNA in single cells provides direct insight into how intercellular heterogeneity plays a role in disease progression and outcomes. Quantitative polymerase chain reaction (qPCR), the current gold standard for evaluating gene expression, is insufficient for providing absolute measurement of single-cell mRNA transcript abundance. Challenges include difficulties in handling small sample volumes and the high variability in measurements. Microfluidic digital PCR provides far better sensitivity for minute quantities of genetic material, but the typical format of this assay does not allow for counting of the absolute number of mRNA transcripts in samples taken from single cells. Furthermore, a large fraction of the sample is often lost during sample handling in microfluidic digital PCR. Here, we report the absolute quantification of single-cell mRNA transcripts by digital, one-step reverse transcription PCR in a simple microfluidic array device called the self-digitization (SD) chip. By performing the reverse transcription step in digitized volumes, we find that the assay exhibits a linear signal across a wide range of total RNA concentrations and agrees well with standard curve qPCR. The SD chip is found to digitize a high percentage (86.7%) of the sample for single-cell experiments. Moreover, quantification of transferrin receptor mRNA in single cells agrees well with single-molecule fluorescence in situ hybridization experiments. The SD platform for absolute quantification of single-cell mRNA can be optimized for other genes and may be useful as an independent control method for the validation of mRNA quantification techniques.
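
    Digital assays of this kind typically convert the fraction of positive partitions into an absolute template count through a Poisson correction; the sketch below uses invented partition counts and volumes rather than the SD chip's actual geometry:

```python
# Sketch: absolute transcript counting from digitized partitions via the
# standard Poisson correction. Partition count and volume are illustrative.
import math

n_partitions = 1024
n_positive = 410
partition_volume_nl = 5.0

p = n_positive / n_partitions
mean_per_partition = -math.log(1.0 - p)            # Poisson-corrected occupancy
total_templates = mean_per_partition * n_partitions
conc_per_ul = mean_per_partition / (partition_volume_nl * 1e-3)

print(f"estimated templates loaded: {total_templates:.0f}")
print(f"concentration: {conc_per_ul:.1f} copies/uL")
```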

  9. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering

    PubMed Central

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293

  10. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering.

    PubMed

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-05-23

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level.
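
    The relative/absolute fusion idea can be reduced to a one-dimensional toy example: a drifting heading propagated from a rate sensor, periodically corrected by a noisy absolute heading fix through a scalar Kalman update. The noise levels, update rate and random-walk model below are assumptions, not the paper's EKF/UKF design:

```python
# Sketch: scalar Kalman filter fusing a relative heading sensor (integrated
# rate, which drifts) with occasional noisy absolute heading measurements.
import numpy as np

rng = np.random.default_rng(1)
dt, steps = 0.1, 300
true_rate = 0.02                                  # rad/s

heading_est, P = 0.0, 1.0                         # state estimate and its variance
Q, R = 1e-4, 0.05**2                              # process and measurement noise
true_heading = 0.0

for k in range(steps):
    true_heading += true_rate * dt
    gyro_rate = true_rate + rng.normal(0, 0.01)   # relative sensor reading

    # Predict: propagate heading with the rate sensor, inflate uncertainty.
    heading_est += gyro_rate * dt
    P += Q

    # Update every 20 steps with an absolute heading measurement.
    if k % 20 == 0:
        z = true_heading + rng.normal(0, 0.05)
        K = P / (P + R)                           # Kalman gain
        heading_est += K * (z - heading_est)
        P *= (1 - K)

print(f"final heading error: {abs(heading_est - true_heading):.4f} rad")
```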

  11. Instrumentation and First Results of the Reflected Solar Demonstration System for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    McCorkel, Joel; Thome, Kurtis; Hair, Jason; McAndrew, Brendan; Jennings, Don; Rabin, Douglas; Daw, Adrian; Lundsford, Allen

    2012-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission key goals include enabling observation of high accuracy long-term climate change trends, use of these observations to test and improve climate forecasts, and calibration of operational and research sensors. The spaceborne instrument suites include a reflected solar spectroradiometer, emitted infrared spectroradiometer, and radio occultation receivers. The requirement for the RS instrument is that derived reflectance must be traceable to SI standards with an absolute uncertainty of <0.3%, and the error budget that achieves this requirement is described in previous work. This work describes the Solar/Lunar Absolute Reflectance Imaging Spectroradiometer (SOLARIS), a calibration demonstration system for the RS instrument, and presents initial calibration and characterization methods and results. SOLARIS is an Offner spectrometer with two separate focal planes, each with its own entrance aperture and grating, covering spectral ranges of 320-640 nm and 600-2300 nm over a full field-of-view of 10 degrees with 0.27 milliradian sampling. Results from laboratory measurements including use of integrating spheres, transfer radiometers and spectral standards combined with field-based solar and lunar acquisitions are presented. These results will be used to assess the accuracy and repeatability of the radiometric and spectral characteristics of SOLARIS, which will be presented against the sensor-level requirements addressed in the CLARREO RS instrument error budget.

  12. The impact of a closed‐loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before‐and‐after study

    PubMed Central

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-01-01

    Objectives To assess the impact of a closed‐loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Design, setting and participants Before‐and‐after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Intervention Closed‐loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Main outcome measures Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Results Prescribing errors were identified in 3.8% of 2450 medication orders pre‐intervention and 2.0% of 2353 orders afterwards (p<0.001; χ2 test). MAEs occurred in 7.0% of 1473 non‐intravenous doses pre‐intervention and 4.3% of 1139 afterwards (p = 0.005; χ2 test). Patient identity was not checked for 82.6% of 1344 doses pre‐intervention and 18.9% of 1291 afterwards (p<0.001; χ2 test). Medical staff required 15 s to prescribe a regular inpatient drug pre‐intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre‐intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ2 test). Conclusions A closed‐loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication‐related tasks increased.

  13. Absolute Timing of the Crab Pulsar with RXTE

    NASA Technical Reports Server (NTRS)

    Rots, Arnold H.; Jahoda, Keith; Lyne, Andrew G.

    2004-01-01

    We have monitored the phase of the main X-ray pulse of the Crab pulsar with the Rossi X-ray Timing Explorer (RXTE) for almost eight years, since the start of the mission in January 1996. The absolute time of RXTE's clock is sufficiently accurate to allow this phase to be compared directly with the radio profile. Our monitoring observations of the pulsar took place bi-weekly (during the periods when it was at least 30 degrees from the Sun) and we correlated the data with radio timing ephemerides derived from observations made at Jodrell Bank. We have determined the phase of the X-ray main pulse for each observation with a typical error in the individual data points of 50 microseconds. The total ensemble is consistent with a phase that is constant over the monitoring period, with the X-ray pulse leading the radio pulse by 0.01025 plus or minus 0.00120 period in phase, or 344 plus or minus 40 microseconds in time. The error estimate is dominated by a systematic error of 40 microseconds, most likely constant, arising from uncertainties in the instrumental calibration of the radio data. The statistical error is 0.00015 period, or 5 microseconds. The separation of the main pulse and interpulse appears to be unchanging at time scales of a year or less, with an average value of 0.4001 plus or minus 0.0002 period. There is no apparent variation in these values with energy over the 2-30 keV range. The lag between the radio and X-ray pulses may be constant in phase (i.e., rotational in nature) or constant in time (i.e., due to a pathlength difference). We are not (yet) able to distinguish between these two interpretations.

  14. Algebra Students' Difficulty with Fractions: An Error Analysis

    ERIC Educational Resources Information Center

    Brown, George; Quinn, Robert J.

    2006-01-01

    An analysis of the 1990 National Assessment of Educational Progress (NAEP) found that only 46 percent of all high school seniors demonstrated success with a grasp of decimals, percentages, fractions and simple algebra. This article investigates error patterns that emerge as students attempt to answer questions involving the ability to apply…

  15. 75 FR 29972 - Certain Seamless Carbon and Alloy Steel Standard, Line, and Pressure Pipe from the People's...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-28

    ... errors, (1) would result in a change of at least five absolute percentage points in, but not less than 25... determination; or (2) would result in a difference between a weighted-average dumping margin of zero or de...

  16. 7 CFR 868.308 - Percentages.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Milled Rice Principles Governing... in U.S. Nos. 1 and 2 Milled Rice and the percentage of objectionable seeds in U.S. No. 1 Brewers Milled Rice is reported to the nearest hundredth percent. The percentages of all other factors are...

  17. 7 CFR 868.308 - Percentages.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Milled Rice Principles Governing... in U.S. Nos. 1 and 2 Milled Rice and the percentage of objectionable seeds in U.S. No. 1 Brewers Milled Rice is reported to the nearest hundredth percent. The percentages of all other factors are...

  18. Estimating the absolute wealth of households.

    PubMed

    Hruschka, Daniel J; Gerkey, Drew; Hadley, Craig

    2015-07-01

    To estimate the absolute wealth of households using data from demographic and health surveys. We developed a new metric, the absolute wealth estimate, based on the rank of each surveyed household according to its material assets and the assumed shape of the distribution of wealth among surveyed households. Using data from 156 demographic and health surveys in 66 countries, we calculated absolute wealth estimates for households. We validated the method by comparing the proportion of households defined as poor using our estimates with published World Bank poverty headcounts. We also compared the accuracy of absolute versus relative wealth estimates for the prediction of anthropometric measures. The median absolute wealth estimates of 1,403,186 households were 2056 international dollars per capita (interquartile range: 723-6103). The proportion of poor households based on absolute wealth estimates was strongly correlated with World Bank estimates of populations living on less than 2.00 United States dollars per capita per day (R(2) = 0.84). Absolute wealth estimates were better predictors of anthropometric measures than relative wealth indexes. Absolute wealth estimates provide new opportunities for comparative research to assess the effects of economic resources on health and human capital, as well as the long-term health consequences of economic change and inequality.
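
    The core of the method is mapping a household's asset-index rank onto an assumed national wealth distribution. The sketch below uses a log-normal shape with placeholder parameters purely for illustration; the published method calibrates the assumed distribution differently:

```python
# Sketch: turning a household's asset-index rank into an absolute wealth
# figure by mapping rank quantiles onto an assumed wealth distribution.
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(2)
asset_index = rng.normal(size=1000)               # relative wealth index from a survey

# Rank-based quantile of each household (strictly between 0 and 1).
ranks = asset_index.argsort().argsort() + 1
quantiles = (ranks - 0.5) / len(asset_index)

# Assumed distribution of per-capita wealth (international dollars).
median_wealth, sigma = 2000.0, 1.2
absolute_wealth = lognorm.ppf(quantiles, s=sigma, scale=median_wealth)

print(f"median estimate: {np.median(absolute_wealth):.0f} international dollars per capita")
```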

  19. Solving Problems with the Percentage Bar

    ERIC Educational Resources Information Center

    van Galen, Frans; van Eerde, Dolly

    2013-01-01

    At the end of primary school all children more or less know what a percentage is, but yet they often struggle with percentage problems. This article describes a study in which students of 13 and 14 years old were given a written test with percentage problems and a week later were interviewed about the way they solved some of these problems. In a…

  20. Using total quality management approach to improve patient safety by preventing medication error incidences*.

    PubMed

    Yousef, Nadin; Yousef, Farah

    2017-09-04

    Whereas one of the predominant causes of medication errors is drug administration error, a previous study related to our investigations and reviews estimated that the incidence of medication errors constituted 6.7 out of 100 administered medication doses. Therefore, we aimed, by using the six sigma approach, to propose a way that reduces these errors to fewer than 1 out of 100 administered medication doses by improving healthcare professional education and producing clearer handwritten prescriptions. The study was held in a General Government Hospital. First, we systematically studied the current medication use process. Second, we used the six sigma approach by utilizing the five-step DMAIC process (Define, Measure, Analyze, Implement, Control) to find out the real reasons behind such errors. This was to figure out a useful solution to avoid medication error incidences in daily healthcare professional practice. A data sheet was used as the Data tool and Pareto diagrams were used as the Analyzing tool. In our investigation, we identified the real cause behind administered medication errors. The Pareto diagrams used in our study showed that the error percentage in the administration phase was 24.8%, while the percentage of errors related to the prescribing phase was 42.8%, 1.7-fold higher. This means that the mistakes in the prescribing phase, especially because of the poor handwritten prescriptions whose percentage in this phase was 17.6%, are responsible for the consequent mistakes in this treatment process later on. Therefore, we proposed in this study an effective low-cost strategy based on the behavior of healthcare workers, in the form of Guideline Recommendations to be followed by the physicians. This method can be a prior caution to decrease errors in the prescribing phase, which may lead to a decrease in administered medication error incidences to less than 1%. This improvement in behavior can be efficient to improve handwritten prescriptions and decrease the consequent errors related to administration.

  1. Absolute dose calculations for Monte Carlo simulations of radiotherapy beams

    NASA Astrophysics Data System (ADS)

    Popescu, I. A.; Shaw, C. P.; Zavgorodni, S. F.; Beckham, W. A.

    2005-07-01

    Monte Carlo (MC) simulations have traditionally been used for single field relative comparisons with experimental data or commercial treatment planning systems (TPS). However, clinical treatment plans commonly involve more than one field. Since the contribution of each field must be accurately quantified, multiple field MC simulations are only possible by employing absolute dosimetry. Therefore, we have developed a rigorous calibration method that allows the incorporation of monitor units (MU) in MC simulations. This absolute dosimetry formalism can be easily implemented by any BEAMnrc/DOSXYZnrc user, and applies to any configuration of open and blocked fields, including intensity-modulated radiation therapy (IMRT) plans. Our approach involves the relationship between the dose scored in the monitor ionization chamber of a radiotherapy linear accelerator (linac), the number of initial particles incident on the target, and the field size. We found that for a 10 × 10 cm² field of a 6 MV photon beam, 1 MU corresponds, in our model, to 8.129 × 10¹³ ± 1.0% electrons incident on the target and a total dose of 20.87 cGy ± 1.0% in the monitor chambers of the virtual linac. We present an extensive experimental verification of our MC results for open and intensity-modulated fields, including a dynamic 7-field IMRT plan simulated on the CT data sets of a cylindrical phantom and of a Rando anthropomorphic phantom, which were validated by measurements using ionization chambers and thermoluminescent dosimeters (TLD). Our simulation results are in excellent agreement with experiment, with percentage differences of less than 2%, in general, demonstrating the accuracy of our Monte Carlo absolute dose calculations.
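
    The calibration amounts to scaling the dose scored per source particle by the number of particles that correspond to one monitor unit. A minimal sketch, reusing the particles-per-MU figure quoted above but with an invented dose-per-particle value:

```python
# Sketch: converting a Monte Carlo dose scored per source particle into
# absolute dose per monitor unit. The dose-per-particle number is a
# placeholder; the particles-per-MU figure is the one quoted in the abstract.
particles_per_mu = 8.129e13          # electrons on target per MU (from the abstract)
dose_per_particle_gy = 1.23e-16      # Gy per incident particle at the voxel (illustrative)
monitor_units = 100

absolute_dose_gy = dose_per_particle_gy * particles_per_mu * monitor_units
print(f"absolute dose: {absolute_dose_gy * 100:.1f} cGy for {monitor_units} MU")
```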

  2. Absolute dose calculations for Monte Carlo simulations of radiotherapy beams.

    PubMed

    Popescu, I A; Shaw, C P; Zavgorodni, S F; Beckham, W A

    2005-07-21

    Monte Carlo (MC) simulations have traditionally been used for single field relative comparisons with experimental data or commercial treatment planning systems (TPS). However, clinical treatment plans commonly involve more than one field. Since the contribution of each field must be accurately quantified, multiple field MC simulations are only possible by employing absolute dosimetry. Therefore, we have developed a rigorous calibration method that allows the incorporation of monitor units (MU) in MC simulations. This absolute dosimetry formalism can be easily implemented by any BEAMnrc/DOSXYZnrc user, and applies to any configuration of open and blocked fields, including intensity-modulated radiation therapy (IMRT) plans. Our approach involves the relationship between the dose scored in the monitor ionization chamber of a radiotherapy linear accelerator (linac), the number of initial particles incident on the target, and the field size. We found that for a 10 x 10 cm2 field of a 6 MV photon beam, 1 MU corresponds, in our model, to 8.129 x 10(13) +/- 1.0% electrons incident on the target and a total dose of 20.87 cGy +/- 1.0% in the monitor chambers of the virtual linac. We present an extensive experimental verification of our MC results for open and intensity-modulated fields, including a dynamic 7-field IMRT plan simulated on the CT data sets of a cylindrical phantom and of a Rando anthropomorphic phantom, which were validated by measurements using ionization chambers and thermoluminescent dosimeters (TLD). Our simulation results are in excellent agreement with experiment, with percentage differences of less than 2%, in general, demonstrating the accuracy of our Monte Carlo absolute dose calculations.

  3. The statistical properties and possible causes of polar motion prediction errors

    NASA Astrophysics Data System (ADS)

    Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria

    2015-08-01

    The pole coordinate data predictions from different prediction contributors of the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts by looking at the time series of differences between them and the future IERS pole coordinates data. The mean absolute errors, standard deviations as well as the skewness and kurtosis of these differences were computed together with their error bars as a function of prediction length. The ensemble predictions show slightly smaller mean absolute errors or standard deviations; however, their skewness and kurtosis values are similar to those for the predictions from the different contributors. The skewness and kurtosis make it possible to check whether these prediction differences satisfy a normal distribution. The kurtosis values diminish with the prediction length, which means that the probability distribution of these prediction differences becomes more platykurtic than leptokurtic. Non-zero skewness values result from the oscillating character of these differences for particular prediction lengths, which can be due to the irregular change of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase computed by the combination of the Fourier transform band pass filter and the Hilbert transform from pole coordinates data as well as from pole coordinates model data obtained from fluid excitations are in good agreement.
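
    The statistics listed above (mean absolute error, standard deviation, skewness and kurtosis as a function of prediction length) are simple to compute once the prediction-minus-observation differences are arranged per prediction day; the sketch below uses random numbers in place of real pole-coordinate differences:

```python
# Sketch: per-prediction-length statistics of forecast differences.
# The difference series is random noise standing in for real data.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(3)
diffs = rng.normal(0, 2.0, size=(90, 200))        # 90 prediction lengths x 200 forecasts

mae = np.mean(np.abs(diffs), axis=1)
std = np.std(diffs, axis=1, ddof=1)
skw = skew(diffs, axis=1)
kur = kurtosis(diffs, axis=1)                     # excess kurtosis; 0 for a normal law

print(f"day 1:  MAE={mae[0]:.2f}  std={std[0]:.2f}  skew={skw[0]:.2f}  kurt={kur[0]:.2f}")
print(f"day 90: MAE={mae[89]:.2f} std={std[89]:.2f} skew={skw[89]:.2f} kurt={kur[89]:.2f}")
```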

  4. Estimating the absolute wealth of households

    PubMed Central

    Gerkey, Drew; Hadley, Craig

    2015-01-01

    Abstract Objective To estimate the absolute wealth of households using data from demographic and health surveys. Methods We developed a new metric, the absolute wealth estimate, based on the rank of each surveyed household according to its material assets and the assumed shape of the distribution of wealth among surveyed households. Using data from 156 demographic and health surveys in 66 countries, we calculated absolute wealth estimates for households. We validated the method by comparing the proportion of households defined as poor using our estimates with published World Bank poverty headcounts. We also compared the accuracy of absolute versus relative wealth estimates for the prediction of anthropometric measures. Findings The median absolute wealth estimates of 1 403 186 households were 2056 international dollars per capita (interquartile range: 723–6103). The proportion of poor households based on absolute wealth estimates were strongly correlated with World Bank estimates of populations living on less than 2.00 United States dollars per capita per day (R2 = 0.84). Absolute wealth estimates were better predictors of anthropometric measures than relative wealth indexes. Conclusion Absolute wealth estimates provide new opportunities for comparative research to assess the effects of economic resources on health and human capital, as well as the long-term health consequences of economic change and inequality. PMID:26170506

  5. Test Plan for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis; McCorkel, Joel; Hair, Jason; McAndrew, Brendan; Daw, Adrian; Jennings, Donald; Rabin, Douglas

    2012-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change. One of the major objectives of CLARREO is to advance the accuracy of SI traceable absolute calibration at infrared and reflected solar wavelengths. This advance is required to reach the on-orbit absolute accuracy required to allow climate change observations to survive data gaps while remaining sufficiently accurate to observe climate change to within the uncertainty of the limit of natural variability. While these capabilities exist at NIST in the laboratory, there is a need to demonstrate that it can move successfully from NIST to NASA and/or instrument vendor capabilities for future spaceborne instruments. The current work describes the test plan for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The end result of efforts with the SOLARIS CDS will be an SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections. The CLARREO mission addresses the need to observe high-accuracy, long-term climate change trends and advance the accuracy of SI traceable absolute calibration. The current work describes the test plan for the SOLARIS which is the calibration demonstration

  6. Tinker-OpenMM: Absolute and relative alchemical free energies using AMOEBA on GPUs.

    PubMed

    Harger, Matthew; Li, Daniel; Wang, Zhi; Dalby, Kevin; Lagardère, Louis; Piquemal, Jean-Philip; Ponder, Jay; Ren, Pengyu

    2017-09-05

    The capabilities of polarizable force fields for alchemical free energy calculations have been limited by the high computational cost and complexity of the underlying potential energy functions. In this work, we present a GPU-based general alchemical free energy simulation platform for the polarizable potential AMOEBA. Tinker-OpenMM, the OpenMM implementation of the AMOEBA simulation engine, has been modified to enable both absolute and relative alchemical simulations on GPUs, which leads to a ∼200-fold improvement in simulation speed over a single CPU core. We show that free energy values calculated using this platform agree with the results of Tinker simulations for the hydration of organic compounds and binding of host-guest systems within the statistical errors. In addition to absolute binding, we designed a relative alchemical approach for computing relative binding affinities of ligands to the same host, where a special path was applied to avoid numerical instability due to polarization between the different ligands that bind to the same site. This scheme is general and does not require ligands to have similar scaffolds. We show that relative hydration and binding free energies calculated using this approach match those computed from the absolute free energy approach. © 2017 Wiley Periodicals, Inc.

  7. Direct and indirect selection of visceral lipid weight, fillet weight, and fillet percentage in a rainbow trout breeding program.

    PubMed

    Kause, A; Paananen, T; Ritola, O; Koskinen, H

    2007-12-01

    decrease in visceral weight rather than of a major increase in absolute fillet weight. Moreover, fillet percentage is challenging to improve, even if it exhibits moderate heritability (h(2) = 0.29). This is because fillet percentage displays low phenotypic variation. In conclusion, fillet weight and fillet percentage can be increased by indirect selection against visceral percentage and for high eviscerated BW.

  8. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parameter bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutants setting.

  9. 20 CFR 404.1205 - Absolute coverage groups.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Absolute coverage groups. 404.1205 Section... Covered § 404.1205 Absolute coverage groups. (a) General. An absolute coverage group is a permanent... are not under a retirement system. An absolute coverage group may include positions which were...

  10. Conditions for the optical wireless links bit error ratio determination

    NASA Astrophysics Data System (ADS)

    Kvíčala, Radek

    2017-11-01

    To determine the quality of Optical Wireless Links (OWL), it is necessary to establish the availability and the probability of interruption. This quality can be characterized by the bit error rate (BER) of the optical beam. The bit error rate expresses the fraction of transmitted bits that are received in error. In practice, BER determination runs into the problem of choosing the integration time (measuring time). For measuring and recording BER on OWL, a bit error ratio tester (BERT) has been developed. A 1-second integration time for 64 kbps radio links is mentioned in the accessible literature. However, it is impossible to use this integration time because of the singular character of coherent beam propagation.

  11. Effects of the liver volume and donor steatosis on errors in the estimated standard liver volume.

    PubMed

    Siriwardana, Rohan Chaminda; Chan, See Ching; Chok, Kenneth Siu Ho; Lo, Chung Mau; Fan, Sheung Tat

    2011-12-01

    An accurate assessment of donor and recipient liver volumes is essential in living donor liver transplantation. Many liver donors are affected by mild to moderate steatosis, and steatotic livers are known to have larger volumes. This study analyzes errors in liver volume estimation by commonly used formulas and the effects of donor steatosis on these errors. Three hundred twenty-five Asian donors who underwent right lobe donor hepatectomy were the subjects of this study. The percentage differences between the liver volumes from computed tomography (CT) and the liver volumes estimated with each formula (ie, the error percentages) were calculated. Five popular formulas were tested. The degrees of steatosis were categorized as follows: no steatosis [n = 178 (54.8%)], ≤ 10% steatosis [n = 128 (39.4%)], and >10% to 20% steatosis [n = 19 (5.8%)]. The median errors ranged from 0.6% (7 mL) to 24.6% (360 mL). The lowest was seen with the locally derived formula. All the formulas showed a significant association between the error percentage and the CT liver volume (P < 0.001). Overestimation was seen with smaller liver volumes, whereas underestimation was seen with larger volumes. The locally derived formula was most accurate when the liver volume was 1001 to 1250 mL. A multivariate analysis showed that the estimation error was dependent on the liver volume (P = 0.001) and the anthropometric measurement that was used in the calculation (P < 0.001) rather than steatosis (P ≥ 0.07). In conclusion, all the formulas have a similar pattern of error that is possibly related to the anthropometric measurement. Clinicians should be aware of this pattern of error and the liver volume with which their formula is most accurate. Copyright © 2011 American Association for the Study of Liver Diseases.
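
    A short sketch of the error-percentage metric used above: the difference between a formula-estimated liver volume and the CT-measured volume, expressed as a percentage of the CT volume. The volumes below are illustrative, not study data.

    ```python
    # Error percentage of formula-estimated liver volumes relative to CT volumes.
    import numpy as np

    ct_volume = np.array([1100.0, 950.0, 1400.0, 1250.0])      # mL, from CT
    estimated = np.array([1180.0, 1010.0, 1320.0, 1275.0])     # mL, from a formula

    error_pct = (estimated - ct_volume) / ct_volume * 100.0    # signed errors
    print("per-donor error %:", np.round(error_pct, 1))
    print("median absolute error %:", np.median(np.abs(error_pct)))
    ```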

  12. Optimal Design of the Absolute Positioning Sensor for a High-Speed Maglev Train and Research on Its Fault Diagnosis

    PubMed Central

    Zhang, Dapeng; Long, Zhiqiang; Xue, Song; Zhang, Junge

    2012-01-01

    This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is an important sensor for the high-speed maglev train to accomplish its synchronous traction. It is used to calibrate the error of the relative positioning sensor which is used to provide the magnetic phase signal. On the basis of the analysis for the principle of the absolute positioning sensor, the paper describes the design of the sending and receiving coils and realizes the hardware and the software for the sensor. In order to enhance the reliability of the sensor, a support vector machine is used to recognize the fault characters, and the signal flow method is used to locate the faulty parts. The diagnosis information not only can be sent to an upper center control computer to evaluate the reliability of the sensors, but also can realize on-line diagnosis for debugging and the quick detection when the maglev train is off-line. The absolute positioning sensor we study has been used in the actual project. PMID:23112619

  13. Optimal design of the absolute positioning sensor for a high-speed maglev train and research on its fault diagnosis.

    PubMed

    Zhang, Dapeng; Long, Zhiqiang; Xue, Song; Zhang, Junge

    2012-01-01

    This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is an important sensor for the high-speed maglev train to accomplish its synchronous traction. It is used to calibrate the error of the relative positioning sensor which is used to provide the magnetic phase signal. On the basis of the analysis for the principle of the absolute positioning sensor, the paper describes the design of the sending and receiving coils and realizes the hardware and the software for the sensor. In order to enhance the reliability of the sensor, a support vector machine is used to recognize the fault characters, and the signal flow method is used to locate the faulty parts. The diagnosis information not only can be sent to an upper center control computer to evaluate the reliability of the sensors, but also can realize on-line diagnosis for debugging and the quick detection when the maglev train is off-line. The absolute positioning sensor we study has been used in the actual project.

  14. Is adult gait less susceptible than paediatric gait to hip joint centre regression equation error?

    PubMed

    Kiernan, D; Hosking, J; O'Brien, T

    2016-03-01

    Hip joint centre (HJC) regression equation error during paediatric gait has recently been shown to have clinical significance. In relation to adult gait, it has been inferred that comparable errors with children in absolute HJC position may in fact result in less significant kinematic and kinetic error. This study investigated the clinical agreement of three commonly used regression equation sets (Bell et al., Davis et al. and Orthotrak) for adult subjects against the equations of Harrington et al. The relationship between HJC position error and subject size was also investigated for the Davis et al. set. Full 3-dimensional gait analysis was performed on 12 healthy adult subjects with data for each set compared to Harrington et al. The Gait Profile Score, Gait Variable Score and GDI-kinetic were used to assess clinical significance while differences in HJC position between the Davis and Harrington sets were compared to leg length and subject height using regression analysis. A number of statistically significant differences were present in absolute HJC position. However, all sets fell below the clinically significant thresholds (GPS <1.6°, GDI-Kinetic <3.6 points). Linear regression revealed a statistically significant relationship for both increasing leg length and increasing subject height with decreasing error in anterior/posterior and superior/inferior directions. Results confirm a negligible clinical error for adult subjects suggesting that any of the examined sets could be used interchangeably. Decreasing error with both increasing leg length and increasing subject height suggests that the Davis set should be used cautiously on smaller subjects. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Application of a soft computing technique in predicting the percentage of shear force carried by walls in a rectangular channel with non-homogeneous roughness.

    PubMed

    Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein

    2016-01-01

    Two new soft computing models, namely genetic programming (GP) and the genetic artificial algorithm (GAA) neural network (a combination of modified genetic algorithm and artificial neural network methods), were developed in order to predict the percentage of shear force in a rectangular channel with non-homogeneous roughness. The ability of these methods to estimate the percentage of shear force was investigated. Moreover, the independent parameters' effectiveness in predicting the percentage of shear force was determined using sensitivity analysis. According to the results, the GP model demonstrated superior performance to the GAA model. A comparison was also made between the GP program determined as the best model and five equations obtained in prior research. The GP model, with the lowest error values (root mean square error (RMSE) of 0.0515), performed best compared with the other equations presented for rough and smooth channels as well as smooth ducts. The equation proposed for rectangular channels with rough boundaries (RMSE of 0.0642) outperformed the prior equations for smooth boundaries.

  16. The importance of intra-hospital pharmacovigilance in the detection of medication errors

    PubMed

    Villegas, Francisco; Figueroa-Montero, David; Barbero-Becerra, Varenka; Juárez-Hernández, Eva; Uribe, Misael; Chávez-Tapia, Norberto; González-Chon, Octavio

    2018-01-01

    Hospitalized patients are susceptible to medication errors, which represent between the fourth and the sixth leading cause of death. The department of intra-hospital pharmacovigilance intervenes in the entire medication process with the purpose of preventing, repairing and assessing damage. To analyze medication errors reported by the Mexican Fundación Clínica Médica Sur pharmacovigilance system and their impact on patients. Prospective study carried out from 2012 to 2015, where medication prescriptions given to patients were recorded. Owing to heterogeneity, data were described as absolute numbers on a logarithmic scale. 292 932 prescriptions of 56 368 patients were analyzed, and medication errors were identified in 8.9% of them. The treating physician was responsible for 83.32% of medication errors, residents for 6.71% and interns for 0.09%. No error caused permanent damage or death. This is the pharmacovigilance study with the largest sample size reported. Copyright: © 2018 Secretaría de Salud.

  17. Doctors' confusion over ratios and percentages in drug solutions: the case for standard labelling

    PubMed Central

    Wheeler, Daniel Wren; Remoundos, Dionysios Dennis; Whittlestone, Kim David; Palmer, Michael Ian; Wheeler, Sarah Jane; Ringrose, Timothy Richard; Menon, David Krishna

    2004-01-01

    The different ways of expressing concentrations of drugs in solution, as ratios or percentages or mass per unit volume, are a potential cause of confusion that may contribute to dose errors. To assess doctors' understanding of what they signify, all active subscribers to doctors.net.uk, an online community exclusively for UK doctors, were invited to complete a brief web-based multiple-choice questionnaire that explored their familiarity with solutions of adrenaline (expressed as a ratio), lidocaine (expressed as a percentage) and atropine (expressed in mg per mL), and their ability to calculate the correct volume to administer in clinical scenarios relevant to all specialties. 2974 (24.6%) replied. The mean score achieved was 4.80 out of 6 (SD 1.38). Only 85.2% and 65.8% correctly identified the mass of drug in the adrenaline and lidocaine solutions, respectively, whilst 93.1% identified the correct concentration of atropine. More would have administered the correct volume of adrenaline and lidocaine in clinical scenarios (89.4% and 81.0%, respectively) but only 65.5% identified the correct volume of atropine. The labelling of drug solutions as ratios or percentages is antiquated and confusing. Labelling should be standardized to mass per unit volume. PMID:15286190
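
    A hedged helper illustrating the three labelling conventions discussed above: ratio (e.g. adrenaline 1:1000), percentage (e.g. lidocaine 1%) and mass per unit volume. The example drugs and doses are illustrative and not clinical guidance.

    ```python
    # Convert ratio and percentage labels to mg/mL, then compute volume for a dose.
    def ratio_to_mg_per_ml(ratio_denominator: float) -> float:
        # "1:1000" means 1 g of drug in 1000 mL -> 1000 mg / 1000 mL = 1 mg/mL
        return 1000.0 / ratio_denominator

    def percent_to_mg_per_ml(percent: float) -> float:
        # "1%" means 1 g per 100 mL -> 10 mg/mL
        return percent * 10.0

    def volume_for_dose(dose_mg: float, conc_mg_per_ml: float) -> float:
        return dose_mg / conc_mg_per_ml

    adrenaline = ratio_to_mg_per_ml(1000)      # 1:1000 -> 1 mg/mL
    lidocaine = percent_to_mg_per_ml(1.0)      # 1% -> 10 mg/mL
    print(volume_for_dose(1.0, adrenaline))    # 1 mg dose -> 1.0 mL
    print(volume_for_dose(100.0, lidocaine))   # 100 mg dose -> 10.0 mL
    ```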

  18. Emergency department discharge prescription errors in an academic medical center

    PubMed Central

    Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.

    2017-01-01

    This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061

  19. A rack-mounted precision waveguide-below-cutoff attenuator with an absolute electronic readout

    NASA Technical Reports Server (NTRS)

    Cook, C. C.

    1974-01-01

    A coaxial precision waveguide-below-cutoff attenuator is described which uses an absolute (unambiguous) electronic digital readout of displacement in inches in addition to the usual gear driven mechanical counter-dial readout in decibels. The attenuator is rack-mountable and has the input and output RF connectors in a fixed position. The attenuation rate for 55, 50, and 30 MHz operation is given along with a discussion of sources of errors. In addition, information is included to aid the user in making adjustments on the attenuator should it be damaged or disassembled for any reason.

  20. Prescription errors before and after introduction of electronic medication alert system in a pediatric emergency department.

    PubMed

    Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M

    2015-06-01

    Prescription errors occur frequently in pediatric emergency departments (PEDs). The effect of computerized physician order entry (CPOE) with an electronic medication alert system (EMAS) on these is unknown. The objective was to compare prescription error rates before and after introduction of CPOE with EMAS in a PED. The hypothesis was that CPOE with EMAS would significantly reduce the rate and severity of prescription errors in the PED. A prospective comparison of a sample of outpatient medication prescriptions 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician response were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in the errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, resulting in the error not reaching the patient. Conversely, of the dosing alerts overridden by the prescribers, 88 (11.3%) were true alerts whose override resulted in medication errors, and 684 (88.6%) were false-positive alerts. CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates. © 2015 by the Society for Academic Emergency Medicine.
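
    A hedged sketch of the screening-test arithmetic reported above: sensitivity, specificity, and absolute risk reduction with a normal-approximation confidence interval. The 2x2 counts are illustrative values chosen only to roughly reproduce the reported rates, not the study's actual table, and the CI method may differ from the one used in the study.

    ```python
    # Sensitivity/specificity and absolute risk reduction with a 95% CI.
    import math

    def sens_spec(tp, fn, tn, fp):
        return tp / (tp + fn), tn / (tn + fp)

    def risk_reduction(err_before, n_before, err_after, n_after, z=1.96):
        p1, p2 = err_before / n_before, err_after / n_after
        arr = p1 - p2
        se = math.sqrt(p1 * (1 - p1) / n_before + p2 * (1 - p2) / n_after)
        return arr, (arr - z * se, arr + z * se)

    print(sens_spec(tp=230, fn=280, tn=3950, fp=2980))   # illustrative counts
    print(risk_reduction(756, 7268, 532, 7292))          # ~10.4 vs ~7.3 per 100
    ```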

  1. Percentage Energy from Fat Screener: Overview

    Cancer.gov

    A short assessment instrument to estimate an individual's usual intake of percentage energy from fat. The foods asked about on the instrument were selected because they were the most important predictors of variability in percentage energy.

  2. Absolute plate motions relative to deep mantle plumes

    NASA Astrophysics Data System (ADS)

    Wang, Shimin; Yu, Hongzheng; Zhang, Qiong; Zhao, Yonghong

    2018-05-01

    Advances in whole waveform seismic tomography have revealed the presence of broad mantle plumes rooted at the base of the Earth's mantle beneath major hotspots. Hotspot tracks associated with these deep mantle plumes provide ideal constraints for inverting absolute plate motions as well as testing the fixed hotspot hypothesis. In this paper, 27 observed hotspot trends associated with 24 deep mantle plumes are used together with the MORVEL model for relative plate motions to determine an absolute plate motion model, in terms of a maximum likelihood optimization for angular data fitting, combined with an outlier data detection procedure based on statistical tests. The obtained T25M model fits 25 observed trends of globally distributed hotspot tracks to the statistically required level, while the other two hotspot trend data (Comores on Somalia and Iceland on Eurasia) are identified as outliers, which are significantly incompatible with other data. For most hotspots with rate data available, T25M predicts plate velocities significantly lower than the observed rates of hotspot volcanic migration, which cannot be fully explained by biased errors in observed rate data. Instead, the apparent hotspot motions derived by subtracting the observed hotspot migration velocities from the T25M plate velocities exhibit a combined pattern of being opposite to plate velocities and moving towards mid-ocean ridges. The newly estimated net rotation of the lithosphere is statistically compatible with three recent estimates, but differs significantly from 30 of 33 prior estimates.

  3. A new method to calibrate the absolute sensitivity of a soft X-ray streak camera

    NASA Astrophysics Data System (ADS)

    Yu, Jian; Liu, Shenye; Li, Jin; Yang, Zhiwen; Chen, Ming; Guo, Luting; Yao, Li; Xiao, Shali

    2016-12-01

    In this paper, we introduce a new method to calibrate the absolute sensitivity of a soft X-ray streak camera (SXRSC). The calibrations are done in static mode using a small laser-produced X-ray source. A calibrated X-ray CCD is used as a secondary standard detector to monitor the X-ray source intensity. In addition, two sets of holographic flat-field grating spectrometers are chosen as the spectral discrimination systems of the SXRSC and the X-ray CCD. The absolute sensitivity of the SXRSC is obtained by comparing the signal counts of the SXRSC to the output counts of the X-ray CCD. Results show that the calibrated spectrum covers the range from 200 eV to 1040 eV. The change of the absolute sensitivity in the vicinity of the carbon K-edge can also be clearly seen. The experimental values agree with the calculated values to within 29% error. Compared with previous calibration methods, the proposed method has several advantages: a wide spectral range, high accuracy, and simple data processing. Our calibration results can be used to make quantitative X-ray flux measurements in laser fusion research.

  4. Modeling and forecasting of KLCI weekly return using WT-ANN integrated model

    NASA Astrophysics Data System (ADS)

    Liew, Wei-Thong; Liong, Choong-Yeun; Hussain, Saiful Izzuan; Isa, Zaidi

    2013-04-01

    The forecasting of weekly return is one of the most challenging tasks in investment since the time series are volatile and non-stationary. In this study, an integrated model of wavelet transform and artificial neural network, WT-ANN, is studied for modeling and forecasting the KLCI weekly return. First, the WT is applied to decompose the weekly return time series in order to eliminate noise. Then, a mathematical model of the time series is constructed using the ANN. The performance of the suggested model is evaluated by the root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The results show that the WT-ANN model can be considered a feasible and powerful model for time series modeling and prediction.
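
    A minimal sketch of the three forecast-accuracy metrics named above (RMSE, MAE, MAPE), computed for an arbitrary pair of observed and forecast series. The series values are illustrative, not KLCI data.

    ```python
    # Standard forecast-error metrics for an observed/forecast pair.
    import numpy as np

    def rmse(y, yhat):
        return np.sqrt(np.mean((np.asarray(y) - np.asarray(yhat)) ** 2))

    def mae(y, yhat):
        return np.mean(np.abs(np.asarray(y) - np.asarray(yhat)))

    def mape(y, yhat):
        y, yhat = np.asarray(y, float), np.asarray(yhat, float)
        return np.mean(np.abs((y - yhat) / y)) * 100.0   # undefined when y == 0

    observed = [0.012, -0.004, 0.021, 0.008]
    forecast = [0.010, -0.006, 0.018, 0.011]
    print(rmse(observed, forecast), mae(observed, forecast), mape(observed, forecast))
    ```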

  5. [Application of wavelet neural networks model to forecast incidence of syphilis].

    PubMed

    Zhou, Xian-Feng; Feng, Zi-Jian; Yang, Wei-Zhong; Li, Xiao-Song

    2011-07-01

    To apply a wavelet neural network (WNN) model to forecast the incidence of syphilis. A back-propagation neural network (BPNN) and a WNN were developed based on the monthly incidence of syphilis in Sichuan province from 2004 to 2008. The forecast accuracy of the two models was compared. In the training approximation, the mean absolute error (MAE), root mean square error (RMSE) and mean absolute percentage error (MAPE) were 0.0719, 0.0862 and 11.52%, respectively, for the WNN, and 0.0892, 0.1183 and 14.87%, respectively, for the BPNN. The corresponding indexes for model generalization were 0.0497, 0.0513 and 4.60% for the WNN, and 0.0816, 0.1119 and 7.25% for the BPNN. The WNN is the better model for short-term forecasting of syphilis incidence.

  6. [Improving blood safety: errors management in transfusion medicine].

    PubMed

    Bujandrić, Nevenka; Grujić, Jasmina; Krga-Milanović, Mirjana

    2014-01-01

    The concept of blood safety includes the entire transfusion chain, starting with the collection of blood from the blood donor and ending with blood transfusion to the patient. The concept involves a quality management system with systematic monitoring of adverse reactions and incidents regarding the blood donor or patient. Monitoring of near-miss errors shows the critical points in the working process and increases transfusion safety. The aim of the study was to present the analysis results of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. This one-year retrospective study was based on the collection, analysis and interpretation of written reports on medical errors in the Blood Transfusion Institute of Vojvodina. Errors were distributed according to the type, frequency and part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that there were no errors with potential health consequences for the blood donor/patient. Errors with potentially damaging consequences for patients were detected throughout the entire transfusion chain. Most of the errors were identified in the preanalytical phase. The human factor was responsible for the largest number of errors. An error reporting system has an important role in error management and the reduction of transfusion-related risk of adverse events and incidents. The ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. A large percentage of errors in transfusion medicine can be avoided, and prevention is cost-effective, systematic and applicable.

  7. The effect of insulin resistance and exercise on the percentage of CD16(+) monocyte subset in obese individuals.

    PubMed

    de Matos, Mariana A; Duarte, Tamiris C; Ottone, Vinícius de O; Sampaio, Pâmela F da M; Costa, Karine B; de Oliveira, Marcos F Andrade; Moseley, Pope L; Schneider, Suzanne M; Coimbra, Cândido C; Brito-Melo, Gustavo E A; Magalhães, Flávio de C; Amorim, Fabiano T; Rocha-Vieira, Etel

    2016-06-01

    Obesity is a low-grade chronic inflammation condition, and macrophages, and possibly monocytes, are involved in the pathological outcomes of obesity. Physical exercise is a low-cost strategy to prevent and treat obesity, probably because of its anti-inflammatory action. We evaluated the percentage of CD16(-) and CD16(+) monocyte subsets in obese insulin-resistant individuals and the effect of an exercise bout on the percentage of these cells. Twenty-seven volunteers were divided into three experimental groups: lean insulin sensitive, obese insulin sensitive and obese insulin resistant. Venous blood samples collected before and 1 h after an aerobic exercise session on a cycle ergometer were used for determination of monocyte subsets by flow cytometry. Insulin-resistant obese individuals have a higher percentage of CD16(+) monocytes (14.8 ± 2.4%) than the lean group (10.0 ± 1.3%). A positive correlation of the percentage of CD16(+) monocytes with body mass index and fasting plasma insulin levels was found. One bout of moderate exercise reduced the percentage of CD16(+) monocytes by 10% in all the groups evaluated. Also, the absolute monocyte count, as well as all other leukocyte populations, in lean and obese individuals, increased after exercise. This fact may partially account for the observed reduction in the percentage of CD16(+) cells in response to exercise. Insulin-resistant, but not insulin-sensitive obese individuals, have an increased percentage of CD16(+) monocytes that can be slightly modulated by a single bout of moderate aerobic exercise. These findings may be clinically relevant to the population studied, considering the involvement of CD16(+) monocytes in the pathophysiology of obesity. Copyright © 2016 John Wiley & Sons, Ltd. Obesity is now considered to be an inflammatory condition associated with many pathological consequences, including insulin resistance. It is proposed that insulin resistance contributes to the aggravation of the

  8. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    PubMed

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not
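
    A hedged sketch of the error-impact metric described above: the percentage difference in a projection between a model version containing an unintentional error and an error-free version, with +/-5% taken as the materiality threshold. The projection numbers are hypothetical, not the study's HIV care continuum outputs.

    ```python
    # Percentage difference between parallel spreadsheet-model versions.
    def pct_difference(with_error: float, error_free: float) -> float:
        return (with_error - error_free) / error_free * 100.0

    # Hypothetical projections of people retained in care under two versions
    error_free, with_error = 12_500, 13_900
    diff = pct_difference(with_error, error_free)
    print(f"{diff:+.1f}% -> material" if abs(diff) > 5 else f"{diff:+.1f}% -> not material")
    ```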

  9. An educational and audit tool to reduce prescribing error in intensive care.

    PubMed

    Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D

    2008-10-01

    To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle, once pretraining, once post-training and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types and total error rates together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining 25%, 19%, (one missing data point), post-training 23%, 6%, 11%, final audit 7%, 3%, 5% (p<0.0005)). The total number of prescriptions and error rates varied widely between trainees (data collection one; cycle two: range of prescriptions written: 1-61, median 18; error rate: 0-100%; median: 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.

  10. Piezocomposite Actuator Arrays for Correcting and Controlling Wavefront Error in Reflectors

    NASA Technical Reports Server (NTRS)

    Bradford, Samuel Case; Peterson, Lee D.; Ohara, Catherine M.; Shi, Fang; Agnes, Greg S.; Hoffman, Samuel M.; Wilkie, William Keats

    2012-01-01

    Three reflectors have been developed and tested to assess the performance of a distributed network of piezocomposite actuators for correcting thermal deformations and total wave-front error. The primary testbed article is an active composite reflector, composed of a spherically curved panel with a graphite face sheet and aluminum honeycomb core composite, and then augmented with a network of 90 distributed piezoelectric composite actuators. The piezoelectric actuator system may be used for correcting as-built residual shape errors, and for controlling low-order, thermally-induced quasi-static distortions of the panel. In this study, thermally-induced surface deformations of 1 to 5 microns were deliberately introduced onto the reflector, then measured using a speckle holography interferometer system. The reflector surface figure was subsequently corrected to a tolerance of 50 nm using the actuators embedded in the reflector's back face sheet. Two additional test articles were constructed: a borosilicate flat window, 150 mm in diameter, with 18 actuators bonded to the back surface; and a direct metal laser sintered reflector with spherical curvature, 230 mm diameter, and 12 actuators bonded to the back surface. In the case of the glass reflector, absolute measurements were performed with an interferometer and the absolute surface was corrected. These test articles were evaluated to determine their absolute surface control capabilities, as well as to assess a multiphysics modeling effort developed under this program for the prediction of active reflector response. This paper will describe the design, construction, and testing of active reflector systems under thermal loads, and subsequent correction of surface shape via distributed piezoelectric actuation.

  11. Systematic error of the Gaia DR1 TGAS parallaxes from data for the red giant clump

    NASA Astrophysics Data System (ADS)

    Gontcharov, G. A.

    2017-08-01

    Based on the Gaia DR1 TGAS parallaxes and photometry from the Tycho-2, Gaia, 2MASS, and WISE catalogues, we have produced a sample of 100 000 clump red giants within 800 pc of the Sun. The systematic variations of the mode of their absolute magnitude as a function of the distance, magnitude, and other parameters have been analyzed. We show that these variations reach 0.7 mag and cannot be explained by variations in the interstellar extinction or intrinsic properties of stars and by selection. The only explanation seems to be a systematic error of the Gaia DR1 TGAS parallax dependent on the square of the observed distance R in kpc: 0.18R² mas. Allowance for this error reduces significantly the systematic dependences of the absolute magnitude mode on all parameters. This error reaches 0.1 mas within 800 pc of the Sun and allows an upper limit for the accuracy of the TGAS parallaxes to be estimated as 0.2 mas. A careful allowance for such errors is needed to use clump red giants as "standard candles." This eliminates all discrepancies between the theoretical and empirical estimates of the characteristics of these stars and allows us to obtain the first estimates of the modes of their absolute magnitudes from the Gaia parallaxes: mode(M_H) = -1.49 ± 0.04 mag, mode(M_Ks) = -1.63 ± 0.03 mag, mode(M_W1) = -1.67 ± 0.05 mag, mode(M_W2) = -1.67 ± 0.05 mag, mode(M_W3) = -1.66 ± 0.02 mag, mode(M_W4) = -1.73 ± 0.03 mag, as well as the corresponding estimates of their de-reddened colors.

  12. Head repositioning accuracy to neutral: a comparative study of error calculation.

    PubMed

    Hill, Robert; Jensen, Pål; Baardsen, Tor; Kulvik, Kristian; Jull, Gwendolen; Treleaven, Julia

    2009-02-01

    Deficits in cervical proprioception have been identified in subjects with neck pain through the measure of head repositioning accuracy (HRA). Nevertheless there appears to be no general consensus regarding the construct of measurement of error used for calculating HRA. This study investigated four different mathematical methods of measurement of error to determine if there were any differences in their ability to discriminate between a control group and subjects with a whiplash associated disorder. The four methods for measuring cervical joint position error were calculated using a previous data set consisting of 50 subjects with whiplash complaining of dizziness (WAD D), 50 subjects with whiplash not complaining of dizziness (WAD ND) and 50 control subjects. The results indicated that no one measure of HRA uniquely detected or defined the differences between the whiplash and control groups. Constant error (CE) was significantly different between the whiplash and control groups from extension (p<0.05). Absolute errors (AEs) and root mean square errors (RMSEs) demonstrated differences between the two WAD groups in rotation trials (p<0.05). No differences were seen with variable error (VE). The results suggest that a combination of AE (or RMSE) and CE are probably the most suitable measures for analysis of HRA.
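
    A hedged sketch of the four error constructs compared above, applied to a set of head-repositioning trials. The signed angular errors are illustrative values, not the study's data.

    ```python
    # Constant, absolute, variable and root-mean-square error for repositioning trials.
    import numpy as np

    trial_errors = np.array([2.1, -1.4, 3.0, 0.5, -2.2, 1.8])  # degrees from neutral

    constant_error = np.mean(trial_errors)                 # CE: signed bias
    absolute_error = np.mean(np.abs(trial_errors))         # AE: mean magnitude
    variable_error = np.std(trial_errors, ddof=0)          # VE: spread about the CE
    rmse = np.sqrt(np.mean(trial_errors ** 2))             # RMSE: combines CE and VE

    print(constant_error, absolute_error, variable_error, rmse)
    ```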

  13. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    NASA Technical Reports Server (NTRS)

    Beck, S. M.

    1975-01-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent (a decrease from the true current value) is calculated. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.

  14. First Absolutely Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas

    NASA Astrophysics Data System (ADS)

    Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.

    2015-11-01

    An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS at f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.

  15. Clinical implementation and error sensitivity of a 3D quality assurance protocol for prostate and thoracic IMRT

    PubMed Central

    Cotter, Christopher; Turcotte, Julie Catherine; Crawford, Bruce; Sharp, Gregory; Mah'D, Mufeed

    2015-01-01

    This work aims at three goals: first, to define a set of statistical parameters and plan structures for a 3D pretreatment thoracic and prostate intensity‐modulated radiation therapy (IMRT) quality assurance (QA) protocol; secondly, to test if the 3D QA protocol is able to detect certain clinical errors; and third, to compare the 3D QA method with QA performed with single ion chamber and 2D gamma test in detecting those errors. The 3D QA protocol measurements were performed on 13 prostate and 25 thoracic IMRT patients using IBA's COMPASS system. For each treatment planning structure included in the protocol, the following statistical parameters were evaluated: average absolute dose difference (AADD), percent structure volume with absolute dose difference greater than 6% (ADD6), and 3D gamma test. To test the 3D QA protocol error sensitivity, two prostate and two thoracic step‐and‐shoot IMRT patients were investigated. Errors introduced to each of the treatment plans included energy switched from 6 MV to 10 MV, multileaf collimator (MLC) leaf errors, linac jaws errors, monitor unit (MU) errors, MLC and gantry angle errors, and detector shift errors. QA was performed on each plan using a single ion chamber and 2D array of ion chambers for 2D and 3D QA. Based on the measurements performed, we established a uniform set of tolerance levels to determine if QA passes for each IMRT treatment plan structure: maximum allowed AADD is 6%; maximum 4% of any structure volume can be with ADD6 greater than 6%, and maximum 4% of any structure volume may fail 3D gamma test with test parameters 3%/3 mm DTA. Out of the three QA methods tested the single ion chamber performed the worst by detecting 4 out of 18 introduced errors, 2D QA detected 11 out of 18 errors, and 3D QA detected 14 out of 18 errors. PACS number: 87.56.Fc PMID:26699299
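
    A hedged 1D sketch of two of the QA statistics named above: average absolute dose difference (here normalized to the maximum planned dose, one common choice) and a simplified gamma test with 3%/3 mm criteria evaluated on co-registered profiles without interpolation. Real 3D QA, including the COMPASS workflow, operates on volumetric dose grids; the profiles below are toy data.

    ```python
    # Simplified AADD and global 1D gamma (3%/3 mm) on a toy dose profile.
    import numpy as np

    x = np.arange(0, 50, 1.0)                      # positions in mm
    planned = 200 * np.exp(-((x - 25) / 12) ** 2)  # cGy, toy dose profile
    measured = planned * 1.02 + np.random.default_rng(2).normal(0, 1.0, x.size)

    aadd = np.mean(np.abs(measured - planned) / planned.max()) * 100.0

    def gamma_1d(x, d_ref, d_eval, dose_tol=0.03, dta_tol=3.0):
        d_norm = d_ref.max()
        gammas = []
        for xi, di in zip(x, d_eval):
            dose_term = (di - d_ref) / (dose_tol * d_norm)   # dose difference term
            dist_term = (xi - x) / dta_tol                   # distance-to-agreement term
            gammas.append(np.min(np.sqrt(dose_term ** 2 + dist_term ** 2)))
        return np.array(gammas)

    gamma = gamma_1d(x, planned, measured)
    print(f"AADD = {aadd:.2f}%  gamma pass rate = {np.mean(gamma <= 1) * 100:.1f}%")
    ```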

  16. Absolute brightness temperature measurements at 3.5-mm wavelength. [of sun, Venus, Jupiter and Saturn

    NASA Technical Reports Server (NTRS)

    Ulich, B. L.; Rhodes, P. J.; Davis, J. H.; Hollis, J. M.

    1980-01-01

    Careful observations have been made at 86.1 GHz to derive the absolute brightness temperatures of the sun (7914 ± 192 K), Venus (357.5 ± 13.1 K), Jupiter (179.4 ± 4.7 K), and Saturn (153.4 ± 4.8 K) with a standard error of about three percent. This is a significant improvement in accuracy over previous results at millimeter wavelengths. A stable transmitter and novel superheterodyne receiver were constructed and used to determine the effective collecting area of the Millimeter Wave Observatory (MWO) 4.9-m antenna relative to a previously calibrated standard gain horn. The thermal scale was set by calibrating the radiometer with carefully constructed and tested hot and cold loads. The brightness temperatures may be used to establish an absolute calibration scale and to determine the antenna aperture and beam efficiencies of other radio telescopes at 3.5-mm wavelength.

  17. Absolute Position Sensing Based on a Robust Differential Capacitive Sensor with a Grounded Shield Window

    PubMed Central

    Bai, Yang; Lu, Yunfeng; Hu, Pengcheng; Wang, Gang; Xu, Jinxin; Zeng, Tao; Li, Zhengkun; Zhang, Zhonghua; Tan, Jiubin

    2016-01-01

    A simple differential capacitive sensor is presented in this paper to measure the absolute positions of length measuring systems. By utilizing a shield window inside the differential capacitor, the measurement range and linearity range of the sensor can reach several millimeters. What is more interesting is that this differential capacitive sensor is only sensitive to movement along one translational degree of freedom (DOF), and is immune to vibration along the other two translational DOFs. In the experiment, we used a novel circuit based on an AC capacitance bridge to directly measure the differential capacitance value. The experimental result shows that this differential capacitive sensor has a sensitivity of 2 × 10⁻⁴ pF/μm with 0.08 μm resolution. The measurement range of this differential capacitive sensor is 6 mm, and the linearity error is less than 0.01% over the whole absolute position measurement range. PMID:27187393

  18. ACCESS, Absolute Color Calibration Experiment for Standard Stars: Integration, Test, and Ground Performance

    NASA Astrophysics Data System (ADS)

    Kaiser, Mary Elizabeth; Morris, Matthew; Aldoroty, Lauren; Kurucz, Robert; McCandliss, Stephan; Rauscher, Bernard; Kimble, Randy; Kruk, Jeffrey; Wright, Edward L.; Feldman, Paul; Riess, Adam; Gardner, Jonathon; Bohlin, Ralph; Deustua, Susana; Dixon, Van; Sahnow, David J.; Perlmutter, Saul

    2018-01-01

    Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. Systematic errors associated with astrophysical data used to probe fundamental astrophysical questions, such as SNeIa observations used to constrain dark energy theories, now exceed the statistical errors associated with merged databases of these measurements. ACCESS, “Absolute Color Calibration Experiment for Standard Stars”, is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35–1.7 μm bandpass. To achieve this goal, ACCESS (1) observes HST/Calspec stars (2) above the atmosphere to eliminate telluric spectral contaminants (e.g. OH) (3) using a single optical path and (HgCdTe) detector (4) that is calibrated to NIST laboratory standards and (5) monitored on the ground and in flight using an on-board calibration monitor. The observations are (6) cross-checked and extended through the generation of stellar atmosphere models for the targets. The ACCESS telescope and spectrograph have been designed, fabricated, and integrated. Subsystems have been tested. Performance results for subsystems, operations testing, and the integrated spectrograph will be presented. NASA sounding rocket grant NNX17AC83G supports this work.

  19. Making Sense of Fractions and Percentages

    ERIC Educational Resources Information Center

    Whitin, David J.; Whitin, Phyllis

    2012-01-01

    Because fractions and percentages can be difficult for children to grasp, connecting them whenever possible is beneficial. Linking them can foster representational fluency as children simultaneously see the part-whole relationship expressed numerically (as a fraction and as a percentage) and visually (as a pie chart). NCTM advocates these…

  20. Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach

    NASA Astrophysics Data System (ADS)

    Bähr, Hermann; Hanssen, Ramon F.

    2012-12-01

    An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping and a less reliable gridsearch method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method itself qualifies by reliability and rigorous geometric modelling of the orbital error signal but does not consider interfering large scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
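
    A hedged sketch of the network idea described above: per-interferogram error estimates are modelled as differences of per-image orbit errors and inverted in a least-squares sense, with the rank deficiency (a common offset) handled by a minimum-norm (pseudo-inverse) solution. The network, error values, and the single scalar parameter per image are synthetic simplifications, not the ENVISAT dataset or the full two-parameter model of the paper.

    ```python
    # Minimum-norm least-squares adjustment of per-image orbit errors from a
    # network of linearly dependent interferometric combinations.
    import numpy as np

    n_images = 5
    pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3), (2, 4)]
    true_img_err = np.array([0.0, 1.2, -0.7, 0.3, 0.9])    # mm, per image (synthetic)

    # Observed per-interferogram errors = difference of image errors + noise
    rng = np.random.default_rng(3)
    obs = np.array([true_img_err[j] - true_img_err[i] for i, j in pairs])
    obs += rng.normal(0, 0.05, len(pairs))

    # Design matrix: each row maps image errors to one interferogram error
    A = np.zeros((len(pairs), n_images))
    for row, (i, j) in enumerate(pairs):
        A[row, i], A[row, j] = -1.0, 1.0

    # Pseudo-inverse gives the quasi-absolute, minimum-norm solution
    est = np.linalg.pinv(A) @ obs
    print(np.round(est - est.mean(), 2), np.round(true_img_err - true_img_err.mean(), 2))
    ```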

  1. Error Estimation of Pathfinder Version 5.3 SST Level 3C Using Three-way Error Analysis

    NASA Astrophysics Data System (ADS)

    Saha, K.; Dash, P.; Zhao, X.; Zhang, H. M.

    2017-12-01

    One of the essential climate variables for monitoring, as well as for detecting and attributing climate change, is Sea Surface Temperature (SST). A long-term record of global SSTs is available, with observations ranging from ships in the early days to more modern observations based on in-situ as well as space-based sensors (satellite/aircraft). There are inaccuracies associated with satellite-derived SSTs which can be attributed to errors associated with spacecraft navigation, sensor calibration, sensor noise, retrieval algorithms, and leakages due to residual clouds. It is therefore important to estimate accurate errors in satellite-derived SST products to obtain the desired results in their applications. Generally, for validation purposes, satellite-derived SST products are compared against in-situ SSTs, which themselves have inaccuracies, in addition to the spatio-temporal inhomogeneity between in-situ and satellite measurements. The standard deviation of their difference fields therefore has contributions from both the satellite and the in-situ measurements. A real validation of any geophysical variable requires knowledge of the "true" value of that variable. Therefore, a one-to-one comparison of satellite-based SST with in-situ data does not truly provide the real error in the satellite SST, and there will be ambiguity due to errors in the in-situ measurements and their collocation differences. A triple collocation (TC), or three-way error analysis, using three mutually independent error-prone measurements can be used to estimate the root-mean-square error (RMSE) associated with each of the measurements with a high level of accuracy, without treating any one system as a perfectly observed "truth". In this study we estimate the absolute random errors associated with the Pathfinder Version 5.3 Level-3C SST Climate Data Record. Along with the in-situ SST data, the third source of data used for this analysis is the AATSR reprocessing for climate (ARC) dataset for the corresponding
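
    A hedged sketch of the triple-collocation error estimate: with three collocated, mutually independent measurements of the same signal, the random error of each can be recovered from covariances of their pairwise differences. The data below are synthetic stand-ins for satellite, in-situ, and ARC-type SSTs, and calibration biases are assumed away for simplicity.

    ```python
    # Triple-collocation estimate of per-dataset random error (RMSE).
    import numpy as np

    rng = np.random.default_rng(4)
    n = 50_000
    truth = 20 + 2 * rng.standard_normal(n)          # "true" SST signal
    a = truth + 0.40 * rng.standard_normal(n)        # e.g. a satellite product
    b = truth + 0.25 * rng.standard_normal(n)        # e.g. in-situ observations
    c = truth + 0.30 * rng.standard_normal(n)        # e.g. a third independent product

    def tc_rmse(a, b, c):
        # error variance of A = E[(A-B)(A-C)] when the three errors are independent
        return tuple(np.sqrt(np.mean((x - y) * (x - z)))
                     for x, y, z in [(a, b, c), (b, a, c), (c, a, b)])

    print(np.round(tc_rmse(a, b, c), 3))   # approximately (0.40, 0.25, 0.30)
    ```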

  2. Novel isotopic N, N-Dimethyl Leucine (iDiLeu) Reagents Enable Absolute Quantification of Peptides and Proteins Using a Standard Curve Approach

    NASA Astrophysics Data System (ADS)

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2015-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N, N-dimethyl leucine (iDiLeu). These labels contain an amine reactive group, triazine ester, are cost effective because of their synthetic simplicity, and have increased throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking in an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
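
    A hedged sketch of the one-run standard-curve idea described above: four label channels carry known standard amounts, a line is fitted to their signals, and the analyte channel is read off the curve. The amounts and peak areas are illustrative, not assay data, and real workflows would include weighting and QC of the fit.

    ```python
    # Four-point standard curve and back-calculation of an analyte amount.
    import numpy as np

    std_amount = np.array([1.0, 5.0, 25.0, 125.0])        # fmol spiked per channel
    std_signal = np.array([2.1e4, 9.8e4, 5.1e5, 2.48e6])  # integrated peak areas

    slope, intercept = np.polyfit(std_amount, std_signal, 1)

    analyte_signal = 7.6e5                                 # fifth (sample) channel
    analyte_amount = (analyte_signal - intercept) / slope
    print(f"estimated analyte amount: {analyte_amount:.1f} fmol")
    ```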

  3. Effect of limbal marking prior to laser ablation on the magnitude of cyclotorsional error.

    PubMed

    Chen, Xiangjun; Stojanovic, Aleksandar; Stojanovic, Filip; Eidet, Jon Roger; Raeder, Sten; Øritsland, Haakon; Utheim, Tor Paaske

    2012-05-01

    To evaluate the residual registration error after limbal-marking-based manual adjustment in cyclotorsional tracker-controlled laser refractive surgery. Two hundred eyes undergoing custom surface ablation with the iVIS Suite (iVIS Technologies) were divided into limbal marked (marked) and non-limbal marked (unmarked) groups. Iris registration information was acquired preoperatively from all eyes. Preoperatively, the horizontal axis was recorded in the marked group for use in manual cyclotorsional alignment prior to surgical iris registration. During iris registration, the preoperative iris information was compared to the eye-tracker captured image. The magnitudes of the registration error angle and cyclotorsional movement during the subsequent laser ablation were recorded and analyzed. Mean magnitude of registration error angle (absolute value) was 1.82°±1.31° (range: 0.00° to 5.50°) and 2.90°±2.40° (range: 0.00° to 13.50°) for the marked and unmarked groups, respectively (P<.001). Mean magnitude of cyclotorsional movement during the laser ablation (absolute value) was 1.15°±1.34° (range: 0.00° to 7.00°) and 0.68°±0.97° (range: 0.00° to 6.00°) for the marked and unmarked groups, respectively (P=.005). Forty-six percent and 60% of eyes had registration error >2°, whereas 22% and 20% of eyes had cyclotorsional movement during ablation >2° in the marked and unmarked groups, respectively. Limbal-marking-based manual alignment prior to laser ablation significantly reduced cyclotorsional registration error. However, residual registration misalignment and cyclotorsional movements remained during ablation. Copyright 2012, SLACK Incorporated.

  4. 20 CFR 404.1205 - Absolute coverage groups.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Absolute coverage groups. 404.1205 Section... INSURANCE (1950- ) Coverage of Employees of State and Local Governments What Groups of Employees May Be Covered § 404.1205 Absolute coverage groups. (a) General. An absolute coverage group is a permanent...

  5. 20 CFR 404.1205 - Absolute coverage groups.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Absolute coverage groups. 404.1205 Section... INSURANCE (1950- ) Coverage of Employees of State and Local Governments What Groups of Employees May Be Covered § 404.1205 Absolute coverage groups. (a) General. An absolute coverage group is a permanent...

  6. 20 CFR 404.1205 - Absolute coverage groups.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Absolute coverage groups. 404.1205 Section... INSURANCE (1950- ) Coverage of Employees of State and Local Governments What Groups of Employees May Be Covered § 404.1205 Absolute coverage groups. (a) General. An absolute coverage group is a permanent...

  7. LAI inversion from optical reflectance using a neural network trained with a multiple scattering model

    NASA Technical Reports Server (NTRS)

    Smith, James A.

    1992-01-01

    The inversion of the leaf area index (LAI) canopy parameter from optical spectral reflectance measurements is obtained using a backpropagation artificial neural network trained using input-output pairs generated by a multiple scattering reflectance model. The problem of LAI estimation over sparse canopies (LAI < 1.0) with varying soil reflectance backgrounds is particularly difficult. Standard multiple regression methods applied to canopies within a single homogeneous soil type yield good results but perform unacceptably when applied across soil boundaries, resulting in absolute percentage errors of >1000 percent for low LAI. Minimization methods applied to merit functions constructed from differences between measured reflectances and predicted reflectances using multiple-scattering models are unacceptably sensitive to a good initial guess for the desired parameter. In contrast, the neural network reported generally yields absolute percentage errors of <30 percent when weighting coefficients trained on one soil type were applied to predicted canopy reflectance at a different soil background.
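
    A hedged sketch of the inversion strategy described above: a toy two-band canopy "forward model" (a stand-in for the multiple-scattering reflectance model) generates reflectances for known LAI values and soil backgrounds, and a small neural network is trained to map reflectance back to LAI. The forward model, network size, and error level are assumptions for illustration only.

    ```python
    # Train a small neural network to invert a toy canopy reflectance model for LAI.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(5)
    n = 5000
    lai = rng.uniform(0.05, 1.0, n)                 # sparse canopies
    soil = rng.uniform(0.05, 0.35, n)               # varying soil reflectance

    # Toy two-band forward model: reflectance transitions from soil to canopy with LAI
    red = soil * np.exp(-0.8 * lai) + 0.05 * (1 - np.exp(-0.8 * lai))
    nir = soil * np.exp(-0.9 * lai) + 0.45 * (1 - np.exp(-0.9 * lai))
    X = np.column_stack([red, nir])

    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X, lai)

    lai_hat = net.predict(X)
    ape = np.abs(lai_hat - lai) / lai * 100.0
    print(f"median absolute percentage error: {np.median(ape):.1f}%")
    ```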

  8. Measurement-based analysis of error latency. [in computer operating system

    NASA Technical Reports Server (NTRS)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.

  9. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors.

    PubMed

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter

    2010-07-01

    Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. 9 head and neck (H&N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (+/- 1 mm in two banks, +/- 0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H&N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.

  10. Computerized pharmaceutical intervention to reduce reconciliation errors at hospital discharge in Spain: an interrupted time-series study.

    PubMed

    García-Molina Sáez, C; Urbieta Sanz, E; Madrigal de Torres, M; Vicente Vera, T; Pérez Cárceles, M D

    2016-04-01

    It is well known that medication reconciliation at discharge is a key strategy to ensure proper drug prescription and the effectiveness and safety of any treatment. Different types of interventions to reduce reconciliation errors at discharge have been tested, many of which are based on the use of electronic tools as they are useful to optimize the medication reconciliation process. However, not all countries are progressing at the same speed in this task and not all tools are equally effective. It is therefore important to collate updated country-specific data in order to identify possible strategies for improvement in each particular region. Our aim therefore was to analyse the effectiveness of a computerized pharmaceutical intervention to reduce reconciliation errors at discharge in Spain. A quasi-experimental interrupted time-series study was carried out in the cardio-pneumology unit of a general hospital from February to April 2013. The study consisted of three phases: pre-intervention, intervention and post-intervention, each involving 23 days of observations. During the intervention period, a pharmacist was included in the medical team and entered the patient's pre-admission medication in a computerized tool integrated into the electronic clinical history of the patient. The effectiveness was evaluated by the differences between the mean percentages of reconciliation errors in each period using a Mann-Whitney U test accompanied by Bonferroni correction, eliminating autocorrelation of the data by first using an ARIMA analysis. In addition, the types of error identified and their potential seriousness were analysed. A total of 321 patients (119, 105 and 97 in each phase, respectively) were included in the study. For the 3966 medications recorded, 1087 reconciliation errors were identified in 77·9% of the patients. The mean percentage of reconciliation errors per patient in the first period of the study was 42·18%, falling to 19·82% during the intervention period (P

  11. First identification of O,S-diethyl Thiocarbonate in Indian Cress absolute and odor evaluation of its synthesized homologues by GC-sniffing.

    PubMed

    Breme, Katharina; Guillamon, Nadine; Fernandez, Xavier; Tournayre, Pascal; Brevard, Hugues; Joulain, Daniel; Berdagué, Jean Louis; Meierhenrich, Uwe J

    2009-03-25

    Indian cress (Tropaeolum majus L.) absolute was studied by GC-olfactometry (VIDEO-Sniff method) in order to identify odor-active aroma compounds. Because of its fruity-sulfury odor note, a compound that has never been identified in plant extracts before stood out: O,S-diethyl thiocarbonate, present at 0.1% (percentage of the total GC/FID area) in the extract. GCxGC-TOFMS allowed for a clean mass spectrum to be obtained, and isolation by preparative GC followed by NMR studies allowed its identification. Here, we report on the first detection of O,S-diethyl thiocarbonate in Indian cress absolute by GC-olfactometry/VIDEO-Sniff and on its isolation and identification. The synthesis and odor evaluation of its homologues are presented.

  12. Impact and quantification of the sources of error in DNA pooling designs.

    PubMed

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.

  13. Relative Proportion Of Different Types Of Refractive Errors In Subjects Seeking Laser Vision Correction.

    PubMed

    Althomali, Talal A

    2018-01-01

    Refractive errors are a form of optical defect affecting more than 2.3 billion people worldwide. As refractive errors are a major contributor to mild and moderate vision impairment, assessment of their relative proportion would be helpful in the strategic planning of health programs. To determine the pattern of the relative proportion of types of refractive errors among the adult candidates seeking laser assisted refractive correction in a private clinic setting in Saudi Arabia. The clinical charts of 687 patients (1374 eyes) with mean age 27.6 ± 7.5 years who desired laser vision correction and underwent a pre-LASIK work-up were reviewed retrospectively. Refractive errors were classified as myopia, hyperopia and astigmatism. Manifest refraction spherical equivalent (MRSE) was applied to define refractive errors. Distribution percentage of different types of refractive errors; myopia, hyperopia and astigmatism. The mean spherical equivalent for 1374 eyes was -3.11 ± 2.88 D. Of the total 1374 eyes, 91.8% (n = 1262) eyes had myopia, 4.7% (n = 65) eyes had hyperopia and 3.4% (n = 47) had emmetropia with astigmatism. Distribution percentage of astigmatism (cylinder error of ≥ 0.50 D) was 78.5% (1078/1374 eyes); of which 69.1% (994/1374) had low to moderate astigmatism and 9.4% (129/1374) had high astigmatism. Of the adult candidates seeking laser refractive correction in a private setting in Saudi Arabia, myopia represented the greatest burden with more than 90% myopic eyes, compared to hyperopia in nearly 5% of eyes. Astigmatism was present in more than 78% of eyes.

  14. Error and objectivity: cognitive illusions and qualitative research.

    PubMed

    Paley, John

    2005-07-01

    Psychological research has shown that cognitive illusions, of which visual illusions are just a special case, are systematic and pervasive, raising epistemological questions about how error in all forms of research can be identified and eliminated. The quantitative sciences make use of statistical techniques for this purpose, but it is not clear what the qualitative equivalent is, particularly in view of widespread scepticism about validity and objectivity. I argue that, in the light of cognitive psychology, the 'error question' cannot be dismissed as a positivist obsession, and that the concepts of truth and objectivity are unavoidable. However, they constitute only a 'minimal realism', which does not necessarily bring a commitment to 'absolute' truth, certainty, correspondence, causation, reductionism, or universal laws in its wake. The assumption that it does reflects a misreading of positivism and, ironically, precipitates a 'crisis of legitimation and representation', as described by constructivist authors.

  15. The approach of Bayesian model indicates media awareness of medical errors

    NASA Astrophysics Data System (ADS)

    Ravichandran, K.; Arulchelvan, S.

    2016-06-01

    This research study examines the factors behind the increase in medical malpractice in the Indian subcontinent in the present-day environment and the impact of television media awareness on it. Increased media reporting of medical malpractice and errors leads hospitals to take corrective action and improve the quality of the medical services they provide. The Cultivation Theory model can be used to measure the influence of media in creating awareness of medical errors. Patients' perceptions of various errors committed by the medical industry in different parts of India were collected for this study. A Bayesian method was used for data analysis; it yields absolute values that indicate how well the recommended values are satisfied. The study also considers the impact of a family doctor maintaining a family's medical records online in reducing medical malpractice, which underlines the importance of service quality in the medical industry delivered through ICT.

  16. Variance computations for functionals of absolute risk estimates.

    PubMed

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  17. Variance computations for functionals of absolute risk estimates

    PubMed Central

    Pfeiffer, R.M.; Petracci, E.

    2011-01-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates. PMID:21643476

  18. Measuring the Accuracy of Simple Evolving Connectionist System with Varying Distance Formulas

    NASA Astrophysics Data System (ADS)

    Al-Khowarizmi; Sitompul, O. S.; Suherman; Nababan, E. B.

    2017-12-01

    Simple Evolving Connectionist System (SECoS) is a minimal implementation of Evolving Connectionist Systems (ECoS) in artificial neural networks. The three-layer network architecture of the SECoS is built incrementally from the given input. In this study, the activation value for the SECoS learning process, which is commonly calculated using the normalized Hamming distance, is also calculated using the normalized Manhattan distance and the normalized Euclidean distance in order to compare the smallest error value and the best learning rate obtained. The accuracy achieved with the three distance formulas is measured using the mean absolute percentage error. In the training phase, with parameters such as sensitivity threshold, error threshold, first learning rate, and second learning rate, the normalized Euclidean distance was found to be more accurate than both the normalized Hamming distance and the normalized Manhattan distance. In the case of beta fibrinogen gene -455 G/A polymorphism patients used as training data, the highest mean absolute percentage error is obtained with the normalized Manhattan distance, compared to the normalized Euclidean distance and the normalized Hamming distance. However, the differences are so small that it can be concluded that the three distance formulas used in SECoS do not have a significant effect on the accuracy of the training results.
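
    The distance measures and the error metric compared in the study can be written down compactly as below; the normalizations shown are one plausible choice and may differ in detail from the definitions used by the authors.

        # Sketch of the three distance measures and the accuracy metric, under one
        # plausible set of normalizations (the paper's exact definitions may differ).
        import numpy as np

        def norm_hamming(a, b):
            a, b = np.asarray(a, float), np.asarray(b, float)
            return np.abs(a - b).sum() / (a.sum() + b.sum())   # ECoS-style normalization

        def norm_manhattan(a, b):
            a, b = np.asarray(a, float), np.asarray(b, float)
            return np.abs(a - b).sum() / a.size

        def norm_euclidean(a, b):
            a, b = np.asarray(a, float), np.asarray(b, float)
            return np.sqrt(((a - b) ** 2).sum() / a.size)

        def mape(actual, predicted):
            actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
            return 100.0 * np.mean(np.abs((actual - predicted) / actual))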

  19. Self-attraction effect and correction on the T-1 absolute gravimeter

    NASA Astrophysics Data System (ADS)

    Li, Z.; Hu, H.; Wu, K.; Li, G.; Wang, G.; Wang, L. J.

    2015-12-01

    The self-attraction effect (SAE) in an absolute gravimeter is a kind of systematic error due to the gravitational attraction of the instrument on the falling object. This effect depends on the mass distribution of the gravimeter, and is estimated to be a few microgals (1 μGal = 10^-8 m s^-2) for the FG5 gravimeter. In this paper, the SAE of a home-made T-1 absolute gravimeter is analyzed and calculated. Most of the stationary components, including the dropping chamber, the laser interferometer, the vibration isolation device and two tripods, are finely modelled, and the related SAEs are computed. In addition, the SAE of the co-falling carriage inside the dropping chamber is carefully calculated because the distance between the falling object and the co-falling carriage varies during the measurement. In order to get the correction of the SAE, two different methods are compared. One is to linearize the SAE curve; the other is to calculate the perturbed trajectory. The results from these two methods agree with each other within 0.01 μGal. With an uncertainty analysis, the correction of the SAE of the T-1 gravimeter is estimated to be (-1.9 ± 0.1) μGal.

  20. Absolute and relative emissions analysis in practical combustion systems—effect of water vapor condensation

    NASA Astrophysics Data System (ADS)

    Richter, J. P.; Mollendorf, J. C.; DesJardin, P. E.

    2016-11-01

    Accurate knowledge of the absolute combustion gas composition is necessary in the automotive, aircraft, processing, heating and air conditioning industries where emissions reduction is a major concern. Those industries use a variety of sensor technologies. Many of these sensors are used to analyze the gas by pumping a sample through a system of tubes to reach a remote sensor location. An inherent characteristic with this type of sampling strategy is that the mixture state changes as the sample is drawn towards the sensor. Specifically, temperature and humidity changes can be significant, resulting in a very different gas mixture at the sensor interface compared with the in situ location (water vapor dilution effect). Consequently, the gas concentrations obtained from remotely sampled gas analyzers can be significantly different than in situ values. In this study, inherent errors associated with sampled combustion gas concentration measurements are explored, and a correction methodology is presented to determine the absolute gas composition from remotely measured gas species concentrations. For in situ (wet) measurements a heated zirconium dioxide (ZrO2) oxygen sensor (Bosch LSU 4.9) is used to measure the absolute oxygen concentration. This is used to correct the remotely sampled (dry) measurements taken with an electrochemical sensor within the remote analyzer (Testo 330-2LL). In this study, such a correction is experimentally validated for a specified concentration of carbon monoxide (5020 ppmv).
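
    A minimal version of such a correction is sketched below, assuming the only compositional change along the sample line is the removal of water vapor; the function name and all numbers are illustrative, not the authors' implementation.

        # Dry-to-wet (in situ) correction: the ratio of in situ (wet) to analyzer
        # (dry) O2 readings gives the factor (1 - x_H2O) by which every dry-basis
        # mole fraction is diluted in the undried combustion gas.
        def wet_basis(conc_dry_ppmv, o2_wet_percent, o2_dry_percent):
            dilution = o2_wet_percent / o2_dry_percent         # = 1 - x_H2O (assumption)
            return conc_dry_ppmv * dilution

        # Example: 5020 ppmv CO on a dry basis, with 14.0% O2 in situ vs 16.0% dry.
        co_wet_ppmv = wet_basis(5020.0, o2_wet_percent=14.0, o2_dry_percent=16.0)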

  1. Prevalence of Pre-Analytical Errors in Clinical Chemistry Diagnostic Labs in Sulaimani City of Iraqi Kurdistan.

    PubMed

    Najat, Dereen

    2017-01-01

    Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most laboratory errors have traditionally been attributed to the analytical phase; however, recent studies have shown that up to 70% of errors arise in the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician's request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and examine the types of platforms used in the selected diagnostic labs. The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes at Sulaimani
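
    The proportional Z test mentioned above compares an error frequency between two labs; a minimal sketch, with made-up counts, is given below (scipy is assumed for the normal tail probability).

        # Two-proportion z-test: does the rate of, e.g., hemolyzed samples differ
        # between two laboratories?
        from math import sqrt
        from scipy.stats import norm

        def two_proportion_z(x1, n1, x2, n2):
            p1, p2 = x1 / n1, x2 / n2
            p_pool = (x1 + x2) / (n1 + n2)
            se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
            z = (p1 - p2) / se
            return z, 2 * norm.sf(abs(z))                      # two-sided p-value

        z, p = two_proportion_z(54, 600, 31, 550)              # illustrative counts only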

  2. Method for the fabrication error calibration of the CGH used in the cylindrical interferometry system

    NASA Astrophysics Data System (ADS)

    Wang, Qingquan; Yu, Yingjie; Mou, Kebing

    2016-10-01

    This paper presents a method of absolutely calibrating the fabrication error of the CGH in the cylindrical interferometry system for the measurement of cylindricity error. First, a simulated experimental system is set up in ZEMAX. On one hand, the simulated experimental system demonstrates the feasibility of the proposed method. On the other hand, by changing the position of the mirror in the simulated experimental system, a misalignment aberration map, consisting of the interferograms obtained at the different positions, is acquired; it can act as a reference for the experimental adjustment of the real system. Second, the mathematical polynomial, which describes the relationship between the misalignment aberrations and the possible misalignment errors, is discussed.

  3. The Surveillance Error Grid

    PubMed Central

    Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B.; Kirkman, M. Sue; Kovatchev, Boris

    2014-01-01

    plotted on the SEG when compared to the data plotted on the CEG and PEG produced risk estimates that were more granular and reflective of a continuously increasing risk scale. Discussion: The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. PMID:25562886

  4. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    the data plotted on the CEG and PEG produced risk estimates that were more granular and reflective of a continuously increasing risk scale. The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. © 2014 Diabetes Technology Society.

  5. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections

    PubMed Central

    Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.

    2018-01-01

    Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions
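
    The comparison logic described above reduces to flagging outputs whose percentage difference between two parallel versions exceeds the +/-5% material-error threshold; a sketch with invented output names and values follows.

        # Flag model outputs where two parallel spreadsheet versions disagree by
        # more than the material-error threshold.
        def material_errors(reference_outputs, test_outputs, threshold=5.0):
            flagged = {}
            for name, ref in reference_outputs.items():
                pct_diff = 100.0 * (test_outputs[name] - ref) / ref
                if abs(pct_diff) > threshold:
                    flagged[name] = pct_diff
            return flagged

        named_matrices = {"in_care": 1250.0, "on_treatment": 980.0}    # error-free version
        column_row_refs = {"in_care": 1575.0, "on_treatment": 1000.0}  # version with errors
        print(material_errors(named_matrices, column_row_refs))        # {'in_care': 26.0}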

  6. Absolute Summ

    NASA Astrophysics Data System (ADS)

    Phillips, Alfred, Jr.

    Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six million year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.

  7. Investigation of writing error in staggered heated-dot magnetic recording systems

    NASA Astrophysics Data System (ADS)

    Tipcharoen, W.; Warisarn, C.; Tongsomporn, D.; Karns, D.; Kovintavewat, P.

    2017-05-01

    To achieve an ultra-high storage capacity, heated-dot magnetic recording (HDMR) has been proposed, which heats a bit-patterned medium before recording data. Generally, an error during the HDMR writing process comes from several sources; however, we only investigate the effects of staggered island arrangement, island size fluctuation caused by imperfect fabrication, and main pole position fluctuation. Simulation results demonstrate that a writing error can be minimized by using a staggered array (hexagonal lattice) instead of a square array. Under the effect of main pole position fluctuation, the writing error is higher than the system without main pole position fluctuation. Finally, we found that the error percentage can drop below 10% when the island size is 8.5 nm and the standard deviation of the island size is 1 nm in the absence of main pole jitter.

  8. Variable percentage sampler

    DOEpatents

    Miller, Jr., William H.

    1976-01-01

    A remotely operable sampler is provided for obtaining variable percentage samples of nuclear fuel particles and the like for analyses. The sampler has a rotating cup for a sample collection chamber designed so that the effective size of the sample inlet opening to the cup varies with rotational speed. Samples of a desired size are withdrawn from a flowing stream of particles without a deterrent to the flow of remaining particles.

  9. Dynamic diagnostics of the error fields in tokamaks

    NASA Astrophysics Data System (ADS)

    Pustovitov, V. D.

    2007-07-01

    The error field diagnostics based on magnetic measurements outside the plasma is discussed. The analysed methods rely on measuring the plasma dynamic response to the finite-amplitude external magnetic perturbations, which are the error fields and the pre-programmed probing pulses. Such pulses can be created by the coils designed for static error field correction and for stabilization of the resistive wall modes, the technique developed and applied in several tokamaks, including DIII-D and JET. Here analysis is based on the theory predictions for the resonant field amplification (RFA). To achieve the desired level of the error field correction in tokamaks, the diagnostics must be sensitive to signals of several Gauss. Therefore, part of the measurements should be performed near the plasma stability boundary, where the RFA effect is stronger. While the proximity to the marginal stability is important, the absolute values of plasma parameters are not. This means that the necessary measurements can be done in the diagnostic discharges with parameters below the nominal operating regimes, with the stability boundary intentionally lowered. The estimates for ITER are presented. The discussed diagnostics can be tested in dedicated experiments in existing tokamaks. The diagnostics can be considered as an extension of the 'active MHD spectroscopy' used recently in the DIII-D tokamak and the EXTRAP T2R reversed field pinch.

  10. Absolute optical metrology : nanometers to kilometers

    NASA Technical Reports Server (NTRS)

    Dubovitsky, Serge; Lay, O. P.; Peters, R. D.; Liebe, C. C.

    2005-01-01

    We provide an overview of the developments in the field of high-accuracy absolute optical metrology with emphasis on space-based applications. Specific work on the Modulation Sideband Technology for Absolute Ranging (MSTAR) sensor is described along with novel applications of the sensor.

  11. The Relationship among Correct and Error Oral Reading Rates and Comprehension.

    ERIC Educational Resources Information Center

    Roberts, Michael; Smith, Deborah Deutsch

    1980-01-01

    Eight learning disabled boys (10 to 12 years old) who were seriously deficient in both their oral reading and comprehension performances participated in the study which investigated, through an applied behavior analysis model, the interrelationships of three reading variables--correct oral reading rates, error oral reading rates, and percentage of…

  12. Form and Objective of the Decision Rule in Absolute Identification

    NASA Technical Reports Server (NTRS)

    Balakrishnan, J. D.

    1997-01-01

    In several conditions of a line length identification experiment, the subjects' decision making strategies were systematically biased against the responses on the edges of the stimulus range. When the range and number of the stimuli were small, the bias caused the percentage of correct responses to be highest in the center and lowest on the extremes of the range. Two general classes of decision rules that would explain these results are considered. The first class assumes that subjects intend to adopt an optimal decision rule, but systematically misrepresent one or more parameters of the decision making context. The second class assumes that subjects use a different measure of performance than the one assumed by the experimenter: instead of maximizing the chances of a correct response, the subject attempts to minimize the expected size of the response error (a "fidelity criterion"). In a second experiment, extended experience and feedback did not diminish the bias effect, but explicitly penalizing all response errors equally, regardless of their size, did reduce or eliminate it in some subjects. Both results favor the fidelity criterion over the optimal rule.

  13. A global algorithm for estimating Absolute Salinity

    NASA Astrophysics Data System (ADS)

    McDougall, T. J.; Jackett, D. R.; Millero, F. J.; Pawlowicz, R.; Barker, P. M.

    2012-12-01

    The International Thermodynamic Equation of Seawater - 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity. When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg-1 in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p) in the world ocean. To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly however are limited to the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).
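
    The algorithm described here is distributed with the TEOS-10 GSW toolbox; assuming the Python gsw package is installed and exposes SA_from_SP as in current releases, Absolute Salinity can be estimated from Practical Salinity and location roughly as follows (all values illustrative).

        import gsw  # TEOS-10 Gibbs SeaWater toolbox (assumed available)

        SP = 34.7                  # Practical Salinity (PSS-78)
        p = 1000.0                 # sea pressure, dbar
        lon, lat = 180.0, 40.0     # North Pacific, where the anomaly is largest

        SA = gsw.SA_from_SP(SP, p, lon, lat)      # Absolute Salinity, g/kg
        SR = (35.16504 / 35.0) * SP               # Reference Salinity, g/kg
        delta_SA = SA - SR                        # Absolute Salinity Anomaly, g/kg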

  14. Relative Proportion Of Different Types Of Refractive Errors In Subjects Seeking Laser Vision Correction

    PubMed Central

    Althomali, Talal A.

    2018-01-01

    Background: Refractive errors are a form of optical defect affecting more than 2.3 billion people worldwide. As refractive errors are a major contributor to mild and moderate vision impairment, assessment of their relative proportion would be helpful in the strategic planning of health programs. Purpose: To determine the pattern of the relative proportion of types of refractive errors among the adult candidates seeking laser assisted refractive correction in a private clinic setting in Saudi Arabia. Methods: The clinical charts of 687 patients (1374 eyes) with mean age 27.6 ± 7.5 years who desired laser vision correction and underwent a pre-LASIK work-up were reviewed retrospectively. Refractive errors were classified as myopia, hyperopia and astigmatism. Manifest refraction spherical equivalent (MRSE) was applied to define refractive errors. Outcome Measures: Distribution percentage of different types of refractive errors; myopia, hyperopia and astigmatism. Results: The mean spherical equivalent for 1374 eyes was -3.11 ± 2.88 D. Of the total 1374 eyes, 91.8% (n = 1262) eyes had myopia, 4.7% (n = 65) eyes had hyperopia and 3.4% (n = 47) had emmetropia with astigmatism. Distribution percentage of astigmatism (cylinder error of ≥ 0.50 D) was 78.5% (1078/1374 eyes); of which 69.1% (994/1374) had low to moderate astigmatism and 9.4% (129/1374) had high astigmatism. Conclusion and Relevance: Of the adult candidates seeking laser refractive correction in a private setting in Saudi Arabia, myopia represented the greatest burden with more than 90% myopic eyes, compared to hyperopia in nearly 5% of eyes. Astigmatism was present in more than 78% of eyes. PMID:29872484

  15. 12 CFR 226.22 - Determination of annual percentage rate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... annual percentage rate shall be considered accurate if it is not more than 1/8 of 1 percentage point... more than 1/4 of 1 percentage point above or below the annual percentage rate determined in accordance... transaction. (d) Certain transactions involving ranges of balances. For purposes of disclosing the annual...

  16. EIT Imaging of admittivities with a D-bar method and spatial prior: experimental results for absolute and difference imaging.

    PubMed

    Hamilton, S J

    2017-05-22

    Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. The image reconstruction task of EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, making the reconstruction task challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, absolute as well as time-difference imaging provides the user with great flexibility without a high computational cost.

  17. Absolute instability of the Gaussian wake profile

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.; Aggarwal, Arun K.

    1987-01-01

    Linear parallel-flow stability theory has been used to investigate the effect of viscosity on the local absolute instability of a family of wake profiles with a Gaussian velocity distribution. The type of local instability, i.e., convective or absolute, is determined by the location of a branch-point singularity with zero group velocity of the complex dispersion relation for the instability waves. The effects of viscosity were found to be weak for values of the wake Reynolds number, based on the center-line velocity defect and the wake half-width, larger than about 400. Absolute instability occurs only for sufficiently large values of the center-line wake defect. The critical value of this parameter increases with decreasing wake Reynolds number, thereby indicating a shrinking region of absolute instability with decreasing wake Reynolds number. If backflow is not allowed, absolute instability does not occur for wake Reynolds numbers smaller than about 38.

  18. 49 CFR 236.709 - Block, absolute.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Block, absolute. 236.709 Section 236.709 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Block, absolute. A block in which no train is permitted to enter while it is occupied by another train. ...

  19. 49 CFR 236.709 - Block, absolute.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false Block, absolute. 236.709 Section 236.709 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Block, absolute. A block in which no train is permitted to enter while it is occupied by another train. ...

  20. Absolute quantification of microbial taxon abundances.

    PubMed

    Props, Ruben; Kerckhof, Frederiek-Maarten; Rubbens, Peter; De Vrieze, Jo; Hernandez Sanabria, Emma; Waegeman, Willem; Monsieurs, Pieter; Hammes, Frederik; Boon, Nico

    2017-02-01

    High-throughput amplicon sequencing has become a well-established approach for microbial community profiling. Correlating shifts in the relative abundances of bacterial taxa with environmental gradients is the goal of many microbiome surveys. As the abundances generated by this technology are semi-quantitative by definition, the observed dynamics may not accurately reflect those of the actual taxon densities. We combined the sequencing approach (16S rRNA gene) with robust single-cell enumeration technologies (flow cytometry) to quantify the absolute taxon abundances. A detailed longitudinal analysis of the absolute abundances resulted in distinct abundance profiles that were less ambiguous and expressed in units that can be directly compared across studies. We further provide evidence that the enrichment of taxa (increase in relative abundance) does not necessarily relate to the outgrowth of taxa (increase in absolute abundance). Our results highlight that both relative and absolute abundances should be considered for a comprehensive biological interpretation of microbiome surveys.
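
    The core arithmetic of the combined approach is simply a rescaling of relative abundances by an independently measured total cell density; a minimal sketch with invented taxa and counts follows.

        # Scale 16S relative abundances by a flow-cytometry total cell count to get
        # absolute taxon abundances (cells/mL).
        def absolute_abundances(relative_abundances, total_cells_per_ml):
            return {taxon: frac * total_cells_per_ml
                    for taxon, frac in relative_abundances.items()}

        rel = {"Taxon_A": 0.40, "Taxon_B": 0.35, "Taxon_C": 0.25}
        abs_counts = absolute_abundances(rel, total_cells_per_ml=1.2e6)
        # A taxon can rise in relative abundance while its absolute abundance falls,
        # which is exactly the ambiguity the authors highlight.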

  1. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer

    PubMed Central

    Andrade-Delgado, Laura; Telich-Tarriba, Jose E.; Fuente-del-Campo, Antonio; Altamirano-Arcos, Carlos A.

    2018-01-01

    Summary: Rapid prototyping models (RPMs) have been used extensively in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are greater in developing countries such as Mexico, where resources dedicated to health care are limited, therefore limiting the use of RPMs to a few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co), with open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative difference. The mean absolute and relative difference was 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and open source software are excellent options to manufacture RPMs, with the benefit of low cost and a relative error similar to that of other more expensive technologies. PMID:29464171

  2. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer.

    PubMed

    Rendón-Medina, Marco A; Andrade-Delgado, Laura; Telich-Tarriba, Jose E; Fuente-Del-Campo, Antonio; Altamirano-Arcos, Carlos A

    2018-01-01

    Rapid prototyping models (RPMs) have been used extensively in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are greater in developing countries such as Mexico, where resources dedicated to health care are limited, therefore limiting the use of RPMs to a few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co), with open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative difference. The mean absolute and relative difference was 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and open source software are excellent options to manufacture RPMs, with the benefit of low cost and a relative error similar to that of other more expensive technologies.
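
    The accuracy figures reported above are the mean absolute and mean relative differences over paired linear measurements; a generic sketch (with illustrative distances, not the study's measurements) is shown below.

        # Mean absolute difference (mm) and mean relative difference (%) between
        # measurements on the printed model and on the original mandible.
        import numpy as np

        def dimensional_error(reference_mm, printed_mm):
            reference_mm = np.asarray(reference_mm, float)
            printed_mm = np.asarray(printed_mm, float)
            abs_diff = np.abs(printed_mm - reference_mm)
            rel_diff = 100.0 * abs_diff / reference_mm
            return abs_diff.mean(), rel_diff.mean()

        mean_abs_mm, mean_rel_pct = dimensional_error(
            reference_mm=[102.4, 35.6, 48.9], printed_mm=[103.1, 35.0, 49.5])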

  3. Absolute Humidity and the Seasonality of Influenza (Invited)

    NASA Astrophysics Data System (ADS)

    Shaman, J. L.; Pitzer, V.; Viboud, C.; Grenfell, B.; Goldstein, E.; Lipsitch, M.

    2010-12-01

    Much of the observed wintertime increase of mortality in temperate regions is attributed to seasonal influenza. A recent re-analysis of laboratory experiments indicates that absolute humidity strongly modulates the airborne survival and transmission of the influenza virus. Here we show that the onset of increased wintertime influenza-related mortality in the United States is associated with anomalously low absolute humidity levels during the prior weeks. We then use an epidemiological model, in which observed absolute humidity conditions temper influenza transmission rates, to successfully simulate the seasonal cycle of observed influenza-related mortality. The model results indicate that direct modulation of influenza transmissibility by absolute humidity alone is sufficient to produce this observed seasonality. These findings provide epidemiological support for the hypothesis that absolute humidity drives seasonal variations of influenza transmission in temperate regions. In addition, we show that variations of the basic and effective reproductive numbers for influenza, caused by seasonal changes in absolute humidity, are consistent with the general timing of pandemic influenza outbreaks observed for 2009 A/H1N1 in temperate regions. Indeed, absolute humidity conditions correctly identify the region of the United States vulnerable to a third, wintertime wave of pandemic influenza. These findings suggest that the timing of pandemic influenza outbreaks is controlled by a combination of absolute humidity conditions, levels of susceptibility and changes in population mixing and contact rates.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven

    The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
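
    Two of the robust metrics listed above can be re-derived directly from the log accuracy ratio Q = predicted/observed; the sketch below is a stand-alone re-implementation under that definition, not the package's own API.

        import numpy as np

        def median_symmetric_accuracy(observed, predicted):
            # 100*(exp(median(|ln Q|)) - 1): a robust, symmetric percentage accuracy.
            q = np.log(np.asarray(predicted, float) / np.asarray(observed, float))
            return 100.0 * (np.exp(np.median(np.abs(q))) - 1.0)

        def symmetric_signed_bias(observed, predicted):
            # 100*sign(M)*(exp(|M|) - 1), with M the median log accuracy ratio.
            m = np.median(np.log(np.asarray(predicted, float) / np.asarray(observed, float)))
            return 100.0 * np.sign(m) * (np.exp(np.abs(m)) - 1.0)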

  5. Improvement of GPS radio occultation retrieval error of E region electron density: COSMIC measurement and IRI model simulation

    NASA Astrophysics Data System (ADS)

    Wu, Kang-Hung; Su, Ching-Lun; Chu, Yen-Hsyang

    2015-03-01

    In this article, we use the International Reference Ionosphere (IRI) model to simulate temporal and spatial distributions of global E region electron densities retrieved by the FORMOSAT-3/COSMIC satellites by means of GPS radio occultation (RO) technique. Despite regional discrepancies in the magnitudes of the E region electron density, the IRI model simulations can, on the whole, describe the COSMIC measurements in quality and quantity. On the basis of global ionosonde network and the IRI model, the retrieval errors of the global COSMIC-measured E region peak electron density (NmE) from July 2006 to July 2011 are examined and simulated. The COSMIC measurement and the IRI model simulation both reveal that the magnitudes of the percentage error (PE) and root mean-square-error (RMSE) of the relative RO retrieval errors of the NmE values are dependent on local time (LT) and geomagnetic latitude, with minimum in the early morning and at high latitudes and maximum in the afternoon and at middle latitudes. In addition, the seasonal variation of PE and RMSE values seems to be latitude dependent. After removing the IRI model-simulated GPS RO retrieval errors from the original COSMIC measurements, the average values of the annual and monthly mean percentage errors of the RO retrieval errors of the COSMIC-measured E region electron density are, respectively, substantially reduced by a factor of about 2.95 and 3.35, and the corresponding root-mean-square errors show averaged decreases of 15.6% and 15.4%, respectively. It is found that, with this process, the largest reduction in the PE and RMSE of the COSMIC-measured NmE occurs at the equatorial anomaly latitudes 10°N-30°N in the afternoon from 14 to 18 LT, with a factor of 25 and 2, respectively. Statistics show that the residual errors that remained in the corrected COSMIC-measured NmE vary in a range of -20% to 38%, which are comparable to or larger than the percentage errors of the IRI-predicted NmE fluctuating in a

  6. Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor.

    PubMed

    Biagi, Lyvia; Ramkissoon, Charrise M; Facchinetti, Andrea; Leal, Yenny; Vehi, Josep

    2017-06-12

    Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced version of a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor. This was also reported for the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, i.e., the artificial pancreas, employing this kind of sensor.
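
    The headline accuracy statistic quoted above, the mean absolute relative difference (MARD), is computed over paired CGM and reference blood glucose values; a generic sketch follows (the example numbers are illustrative).

        import numpy as np

        def mard(cgm_values, reference_bg):
            # Mean absolute relative difference between CGM readings and reference BG.
            cgm_values = np.asarray(cgm_values, float)
            reference_bg = np.asarray(reference_bg, float)
            return 100.0 * np.mean(np.abs(cgm_values - reference_bg) / reference_bg)

        print(mard([110, 150, 95], [118, 140, 101]))   # single percentage figure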

  7. Economic measurement of medical errors using a hospital claims database.

    PubMed

    David, Guy; Gunnarsson, Candace L; Waters, Heidi C; Horblyuk, Ruslan; Kaplan, Harold S

    2013-01-01

    The primary objective of this study was to estimate the occurrence and costs of medical errors from the hospital perspective. Methods from a recent actuarial study of medical errors were used to identify medical injuries. A visit qualified as an injury visit if at least 1 of 97 injury groupings occurred at that visit, and the percentage of injuries caused by medical error was estimated. Visits with more than four injuries were removed from the population to avoid overestimation of cost. Population estimates were extrapolated from the Premier hospital database to all US acute care hospitals. There were an estimated 161,655 medical errors in 2008 and 170,201 medical errors in 2009. Extrapolated to the entire US population, there were more than 4 million unique injury visits containing more than 1 million unique medical errors each year. This analysis estimated that the total annual cost of measurable medical errors in the United States was $985 million in 2008 and just over $1 billion in 2009. The median cost per error to hospitals was $892 for 2008 and rose to $939 in 2009. Nearly one third of all medical injuries were due to error in each year. Medical errors directly impact patient outcomes and hospitals' profitability, especially since 2008 when Medicare stopped reimbursing hospitals for care related to certain preventable medical errors. Hospitals must rigorously analyze causes of medical errors and implement comprehensive preventative programs to reduce their occurrence as the financial burden of medical errors shifts to hospitals. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  8. Compensating additional optical power in the central zone of a multifocal contact lens for minimization of the shrinkage error of the shell mold in the injection molding process.

    PubMed

    Vu, Lien T; Chen, Chao-Chang A; Lee, Chia-Cheng; Yu, Chia-Wei

    2018-04-20

    This study aims to develop a compensating method to minimize the shrinkage error of the shell mold (SM) in the injection molding (IM) process to obtain uniform optical power in the central optical zone of soft axial symmetric multifocal contact lenses (CL). The Z-shrinkage error along the Z axis or axial axis of the anterior SM corresponding to the anterior surface of a dry contact lens in the IM process can be minimized by optimizing IM process parameters and then by compensating for additional (Add) powers in the central zone of the original lens design. First, the shrinkage error is minimized by optimizing three levels of four IM parameters, including mold temperature, injection velocity, packing pressure, and cooling time in 18 IM simulations based on an orthogonal array L18(2^1 × 3^4). Then, based on the Z-shrinkage error from IM simulation, three new contact lens designs are obtained by increasing the Add power in the central zone of the original multifocal CL design to compensate for the optical power errors. Results obtained from IM process simulations and the optical simulations show that the new CL design with 0.1 D increasing in Add power has the closest shrinkage profile to the original anterior SM profile with percentage of reduction in absolute Z-shrinkage error of 55% and more uniform power in the central zone than in the other two cases. Moreover, actual experiments of IM of SM for casting soft multifocal CLs have been performed. The final product of wet CLs has been completed for the original design and the new design. Results of the optical performance have verified the improvement of the compensated design of CLs. The feasibility of this compensating method has been proven based on the measurement results of the produced soft multifocal CLs of the new design. Results of this study can be further applied to predict or compensate for the total optical power errors of the soft multifocal CLs.

  9. Low absolute neutrophil counts in African infants.

    PubMed

    Kourtis, Athena P; Bramson, Brian; van der Horst, Charles; Kazembe, Peter; Ahmed, Yusuf; Chasela, Charles; Hosseinipour, Mina; Knight, Rodney; Lugalia, Lebah; Tegha, Gerald; Joaki, George; Jafali, Robert; Jamieson, Denise J

    2005-07-01

    Infants of African origin have a lower normal range of absolute neutrophil counts than white infants; this fact, however, remains underappreciated by clinical researchers in the United States. During the initial stages of a clinical trial in Malawi, the authors noted an unexpectedly high number of infants with absolute neutrophil counts that would be classifiable as neutropenic using the National Institutes of Health's Division of AIDS toxicity tables. The authors argue that the relevant Division of AIDS table does not take into account the available evidence of low absolute neutrophil counts in African infants and that a systematic collection of data from many African settings might help establish the absolute neutrophil count cutpoints to be used for defining neutropenia in African populations.

  10. Absolute colorimetric characterization of a DSLR camera

    NASA Astrophysics Data System (ADS)

    Guarnera, Giuseppe Claudio; Bianco, Simone; Schettini, Raimondo

    2014-03-01

    A simple but effective technique for absolute colorimetric camera characterization is proposed. It offers a large dynamic range while requiring just a single, off-the-shelf target and a commonly available controllable light source for the characterization. The characterization task is broken down into two modules, respectively devoted to absolute luminance estimation and to colorimetric characterization matrix estimation. The characterized camera can be effectively used as a tele-colorimeter, giving an absolute estimation of the XYZ data in cd/m2. The user is only required to vary the f-number of the camera lens or the exposure time t to better exploit the sensor dynamic range. The estimated absolute tristimulus values closely match the values measured by a professional spectro-radiometer.

  11. [Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].

    PubMed

    Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang

    2016-07-12

    To explore the effect of the autoregressive integrated moving average model-nonlinear auto-regressive neural network (ARIMA-NARNN) model on predicting schistosomiasis infection rates of population. The ARIMA model, NARNN model and ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China. The fitting and prediction performances of the three models were compared. Compared to the ARIMA model and NARNN model, the mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were the least with the values of 0.011 1, 0.090 0 and 0.282 4, respectively. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates of population, which might have a great application value for the prevention and control of schistosomiasis.
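
    The three criteria used to compare the fits (MSE, MAE, MAPE) can be computed as below; this is a generic sketch, not the authors' code, and MAPE is expressed here as a percentage.

        import numpy as np

        def fit_criteria(observed, fitted):
            observed, fitted = np.asarray(observed, float), np.asarray(fitted, float)
            err = observed - fitted
            return {"MSE": np.mean(err ** 2),
                    "MAE": np.mean(np.abs(err)),
                    "MAPE": 100.0 * np.mean(np.abs(err / observed))}

        # The model with the smallest MSE, MAE and MAPE (here the ARIMA-NARNN
        # hybrid) is preferred for forecasting the monthly infection rates.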

  12. Use of observed within-person variation of cardiac troponin in emergency department patients for determination of biological variation and percentage and absolute reference change values.

    PubMed

    Simpson, Aaron J; Potter, Julia M; Koerbin, Gus; Oakman, Carmen; Cullen, Louise; Wilkes, Garry J; Scanlan, Samuel L; Parsonage, William; Hickman, Peter E

    2014-06-01

    Many patients presenting to the emergency department (ED) for assessment of possible acute coronary syndrome (ACS) have low cardiac troponin concentrations that change very little on repeat blood draw. It is unclear if a lack of change in cardiac troponin concentration can be used to identify acutely presenting patients at low risk of ACS. We used the hs-cTnI assay from Abbott Diagnostics, which can detect cTnI in the blood of nearly all people. We identified a population of ED patients being assessed for ACS with repeat cTnI measurement who ultimately were proven to have no acute cardiac disease at the time of presentation. We used data from the repeat sampling to calculate total within-person CV (CV(T)) and, knowing the assay analytical CV (CV(A)), we could calculate within-person biological variation (CV(i)), reference change values (RCVs), and absolute RCV delta cTnI concentrations. We had data sets on 283 patients. Men and women had similar CV(i) values of approximately 14%, which was similar at all concentrations <40 ng/L. The biological variation was not dependent on the time interval between sample collections (t = 1.5-17 h). The absolute delta critical reference change value was similar no matter what the initial cTnI concentration was. More than 90% of subjects had a critical reference change value <5 ng/L, and 97% had values of <10 ng/L. With this hs-cTnI assay, delta cTnI seems to be a useful tool for rapidly identifying ED patients at low risk for possible ACS. © 2014 The American Association for Clinical Chemistry.
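
    The quantities quoted above follow from the usual biological-variation algebra; a sketch under the standard formulation RCV = 2^0.5 * z * sqrt(CVa^2 + CVi^2) is given below, with illustrative CV values rather than the study's estimates.

        from math import sqrt

        def within_person_cv(cv_total, cv_analytical):
            # CVi from the observed total CV and the assay's analytical CV (all in %).
            return sqrt(cv_total ** 2 - cv_analytical ** 2)

        def reference_change_value(cv_analytical, cv_within, z=1.96):
            # Percentage change needed to call two serial results significantly different.
            return sqrt(2.0) * z * sqrt(cv_analytical ** 2 + cv_within ** 2)

        cvi = within_person_cv(cv_total=16.0, cv_analytical=8.0)   # illustrative values
        rcv_percent = reference_change_value(8.0, cvi)
        # Multiplying RCV% by the baseline cTnI concentration gives an absolute
        # delta (ng/L) analogous to the values reported above.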

  13. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models

    PubMed Central

    de Jesus, Karla; Ayala, Helon V. H.; de Jesus, Kelly; Coelho, Leandro dos S.; Medeiros, Alexandre I.A.; Abraldes, José A.; Vaz, Mário A.P.; Fernandes, Ricardo J.; Vilas-Boas, João Paulo

    2018-01-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict the 5 m start time from kinematic and kinetic variables, and their accuracy was determined using the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances. PMID:29599857

  14. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    PubMed

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict the 5 m start time from kinematic and kinetic variables, and their accuracy was determined using the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances.
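
    A toy comparison in the spirit of this record, using scikit-learn's MLPRegressor and LinearRegression with the mean absolute percentage error on a held-out validation split (the metric function requires a recent scikit-learn). The synthetic features and target are stand-ins, not the swimmers' kinematic and kinetic data.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_percentage_error

    # Synthetic stand-in for predictors of a 5 m start time in seconds
    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 6))
    y = 2.2 + 0.1 * X[:, 0] - 0.05 * X[:, 1] ** 2 + rng.normal(scale=0.02, size=80)

    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)

    linear = LinearRegression().fit(X_tr, y_tr)
    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

    for name, model in [("linear", linear), ("ANN", ann)]:
        mape = mean_absolute_percentage_error(y_va, model.predict(X_va)) * 100
        print(f"{name}: validation MAPE = {mape:.2f}%")
    ```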

  15. Full-Field Calibration of Color Camera Chromatic Aberration using Absolute Phase Maps.

    PubMed

    Liu, Xiaohong; Huang, Shujun; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2017-05-06

    The refractive index of a lens varies with the wavelength of light, and thus the same incident light at different wavelengths produces different outgoing light. This characteristic of lenses causes images captured by a color camera to display chromatic aberration (CA), which seriously reduces image quality. Based on an analysis of the distribution of CA, a full-field calibration method based on absolute phase maps is proposed in this paper. Red, green, and blue closed sinusoidal fringe patterns are generated, consecutively displayed on an LCD (liquid crystal display), and captured by a color camera from the front viewpoint. The phase information of each color fringe is obtained using a four-step phase-shifting algorithm and an optimum fringe number selection method. CA causes the unwrapped phase of the three channels to differ. These pixel deviations can be computed by comparing the unwrapped phase data of the red, blue, and green channels in polar coordinates. CA calibration is accomplished in Cartesian coordinates. The systematic errors introduced by the LCD are analyzed and corrected. Simulated results show the validity of the proposed method, and experimental results demonstrate that the proposed full-field calibration method based on absolute phase maps will be useful for practical software-based CA calibration.
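
    The wrapped phase mentioned in the record can be recovered from four phase-shifted fringe images with the standard four-step formula; the sketch below assumes shifts of 0, pi/2, pi and 3*pi/2 and leaves the unwrapping step (e.g. optimum fringe-number selection) aside.

    ```python
    import math
    import numpy as np

    def four_step_phase(i1, i2, i3, i4):
        """Wrapped phase from four fringe images shifted by 0, pi/2, pi, 3*pi/2.

        Sign conventions vary between implementations; this is one common form.
        """
        return np.arctan2(np.asarray(i4, float) - np.asarray(i2, float),
                          np.asarray(i1, float) - np.asarray(i3, float))

    # Synthetic check on a single pixel: background 100, modulation 50, true phase 1.0 rad
    phi = 1.0
    imgs = [100 + 50 * math.cos(phi + k * math.pi / 2) for k in range(4)]
    print(four_step_phase(*imgs))  # ~1.0
    ```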

  16. The stars: an absolute radiometric reference for the on-orbit calibration of PLEIADES-HR satellites

    NASA Astrophysics Data System (ADS)

    Meygret, Aimé; Blanchet, Gwendoline; Mounier, Flore; Buil, Christian

    2017-09-01

    The accurate on-orbit radiometric calibration of optical sensors has become a challenge for space agencies, which pool their efforts through international working groups such as CEOS/WGCV or GSICS with the objective of ensuring the consistency of space measurements and reaching an absolute accuracy compatible with more and more demanding scientific needs. Different targets are traditionally used for calibration depending on the sensor or spacecraft specificities: from on-board calibration systems to ground targets, they all take advantage of our capacity to characterize and model them. But achieving the in-flight stability of a diffuser panel is always a challenge, while calibration over ground targets is often limited by their BRDF characterization and the atmosphere variability. Thanks to their agility, some satellites have the capability to view extra-terrestrial targets such as the moon or stars. The moon is widely used for calibration and its albedo is known through the ROLO (RObotic Lunar Observatory) USGS model, but with a poor absolute accuracy limiting its use to sensor drift monitoring or cross-calibration. Although the spectral irradiance of some stars is known with very high accuracy, it had not really been shown that they could provide an absolute reference for remote sensor calibration. This paper shows that high resolution optical sensors can be calibrated with a high absolute accuracy using stars. The agile-body PLEIADES 1A satellite is used for this demonstration. The star-based calibration principle is described and the results are provided for different stars, each one being acquired several times. These results are compared to the official calibration provided by ground targets and the main error contributors are discussed.

  17. Prevalence of Pre-Analytical Errors in Clinical Chemistry Diagnostic Labs in Sulaimani City of Iraqi Kurdistan

    PubMed Central

    2017-01-01

    Background: Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Materials and Methods: Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician’s request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and to examine the types of platforms used in the selected diagnostic labs. Results: The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6

  18. Cryogenic, Absolute, High Pressure Sensor

    NASA Technical Reports Server (NTRS)

    Chapman, John J. (Inventor); Shams, Qamar A. (Inventor); Powers, William T. (Inventor)

    2001-01-01

    A pressure sensor is provided for cryogenic, high pressure applications. A highly doped silicon piezoresistive pressure sensor is bonded to a silicon substrate in an absolute pressure sensing configuration. The absolute pressure sensor is bonded to an aluminum nitride substrate. Aluminum nitride has an appropriate coefficient of thermal expansion for use with highly doped silicon at cryogenic temperatures. A group of sensors, either two sensors on two substrates or four sensors on a single substrate, is packaged in a pressure vessel.

  19. Absolute Income, Relative Income, and Happiness

    ERIC Educational Resources Information Center

    Ball, Richard; Chernova, Kateryna

    2008-01-01

    This paper uses data from the World Values Survey to investigate how an individual's self-reported happiness is related to (i) the level of her income in absolute terms, and (ii) the level of her income relative to other people in her country. The main findings are that (i) both absolute and relative income are positively and significantly…

  20. Absolute judgment for one- and two-dimensional stimuli embedded in Gaussian noise

    NASA Technical Reports Server (NTRS)

    Kvalseth, T. O.

    1977-01-01

    This study examines the effect on human performance of adding Gaussian noise or disturbance to the stimuli in absolute judgment tasks involving both one- and two-dimensional stimuli. For each selected stimulus value (both an X-value and a Y-value were generated in the two-dimensional case), 10 values (or 10 pairs of values in the two-dimensional case) were generated from a zero-mean Gaussian variate, added to the selected stimulus value and then served as the coordinate values for the 10 points that were displayed sequentially on a CRT. The results show that human performance, in terms of the information transmitted and rms error as functions of stimulus uncertainty, was significantly reduced as the noise variance increased.

  1. Per-pixel bias-variance decomposition of continuous errors in data-driven geospatial modeling: A case study in environmental remote sensing

    NASA Astrophysics Data System (ADS)

    Gao, Jing; Burt, James E.

    2017-12-01

    This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method that originated in machine learning and has not been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation - training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
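
    A minimal sketch of a per-pixel bias-variance decomposition, assuming the ensemble of estimates comes from models trained on resampled data; the numbers are illustrative, not the land-imperviousness results.

    ```python
    import numpy as np

    def per_pixel_bias_variance(estimates, observed):
        """Decompose per-pixel squared error into bias^2 and variance.

        estimates: array of shape (n_models, n_pixels), e.g. predictions from
                   models trained on different bootstrap samples (an assumption
                   about how the ensemble is obtained, not the paper's exact setup).
        observed:  array of shape (n_pixels,) with reference values.
        """
        estimates = np.asarray(estimates, dtype=float)
        observed = np.asarray(observed, dtype=float)
        bias = estimates.mean(axis=0) - observed     # per-pixel bias
        variance = estimates.var(axis=0)             # per-pixel variance
        expected_sq_err = bias ** 2 + variance       # irreducible noise term omitted
        return bias, variance, expected_sq_err

    # Tiny example: 5 hypothetical models, 3 pixels of imperviousness (%)
    est = np.array([[12, 55, 90], [15, 50, 85], [10, 52, 88], [14, 49, 92], [13, 54, 86]])
    obs = np.array([11, 60, 89])
    print(per_pixel_bias_variance(est, obs))
    ```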

  2. Universal Cosmic Absolute and Modern Science

    NASA Astrophysics Data System (ADS)

    Kostro, Ludwik

    The official sciences, especially all natural sciences, respect in their research the principle of methodic naturalism, i.e., they consider all phenomena as entirely natural and therefore never adduce or cite supernatural entities and forces in their scientific explanations. The purpose of this paper is to show that Modern Science has its own self-existent, self-acting, and self-sufficient Natural All-in Being or Omni-Being, i.e., the entire Nature as a Whole, that justifies this scientific methodic naturalism. Since this Natural All-in Being is one and only, it should be considered the scientifically justified Natural Absolute of Science and should be called, in my opinion, the Universal Cosmic Absolute of Modern Science. It will also be shown that the Universal Cosmic Absolute is ontologically enormously stratified and is, in its ultimate, i.e., most fundamental, stratum, trans-reistic and trans-personal. This means that in its basic stratum It is neither a Thing nor a Person, although It contains in Itself all things and persons, as well as all other sentient and conscious individuals. At the turn of the 20th century, science began to look for a theory of everything, for a final theory, for a master theory. In my opinion, the natural Universal Cosmic Absolute will constitute in such a theory the radical, all-penetrating Ultimate Basic Reality and will step by step substitute for the traditional supernatural personal Absolute.

  3. A simultaneously calibration approach for installation and attitude errors of an INS/GPS/LDS target tracker.

    PubMed

    Cheng, Jianhua; Chen, Daidai; Sun, Xiangyu; Wang, Tongda

    2015-02-04

    Obtaining the absolute position of a target is one of the basic problems in non-cooperative target tracking. In this paper, we present a simultaneous calibration method for a target positioning approach based on an inertial navigation system (INS)/global positioning system (GPS)/laser distance scanner (LDS) integrated system. The INS/GPS integrated system provides the attitude and position of the observer, and the LDS offers the distance between the observer and the target. The two most significant errors are jointly taken into consideration and analyzed: (1) the attitude measurement error of the INS/GPS; (2) the installation error between the INS/GPS and LDS subsystems. Consequently, an INS/GPS/LDS-based target positioning approach considering these two errors is proposed. In order to improve the performance of this approach, a novel calibration method is designed to simultaneously estimate and compensate these two main errors. Finally, simulations are conducted to assess the performance of the proposed target positioning approach and the designed simultaneous calibration method.

  4. Jasminum sambac flower absolutes from India and China--geographic variations.

    PubMed

    Braun, Norbert A; Sim, Sherina

    2012-05-01

    Seven Jasminum sambac flower absolutes from different locations in the southern Indian state of Tamil Nadu were analyzed using GC and GC-MS. Focus was placed on 41 key ingredients to investigate geographic variations in this species. These seven absolutes were compared with an Indian bud absolute and commercially available J. sambac flower absolutes from India and China. All absolutes showed broad variations, between 8% and 96%, for the 10 main ingredients. In addition, the odor of the Indian and Chinese J. sambac flower absolutes was assessed.

  5. Advancing Absolute Calibration for JWST and Other Applications

    NASA Astrophysics Data System (ADS)

    Rieke, George; Bohlin, Ralph; Boyajian, Tabetha; Carey, Sean; Casagrande, Luca; Deustua, Susana; Gordon, Karl; Kraemer, Kathleen; Marengo, Massimo; Schlawin, Everett; Su, Kate; Sloan, Greg; Volk, Kevin

    2017-10-01

    We propose to exploit the unique optical stability of the Spitzer telescope, along with that of IRAC, to (1) transfer the accurate absolute calibration obtained with MSX on very bright stars directly to two reference stars within the dynamic range of the JWST imagers (and of other modern instrumentation); (2) establish a second accurate absolute calibration based on the absolutely calibrated spectrum of the sun, transferred onto the astronomical system via alpha Cen A; and (3) provide accurate infrared measurements for the 11 (of 15) highest priority stars with no such data but with accurate interferometrically measured diameters, allowing us to optimize determinations of effective temperatures using the infrared flux method and thus to extend the accurate absolute calibration spectrally. This program is integral to plans for an accurate absolute calibration of JWST and will also provide a valuable Spitzer legacy.

  6. Absolute radiometric calibration of advanced remote sensing systems

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1982-01-01

    The distinction between the uses of relative and absolute spectroradiometric calibration of remote sensing systems is discussed. The advantages of detector-based absolute calibration are described, and the categories of relative and absolute system calibrations are listed. The limitations and problems associated with three common methods used for the absolute calibration of remote sensing systems are addressed. Two methods are proposed for the in-flight absolute calibration of advanced multispectral linear array systems. One makes use of a sun-illuminated panel in front of the sensor, the radiance of which is monitored by a spectrally flat pyroelectric radiometer. The other uses a large, uniform, high-radiance reference ground surface. The ground and atmospheric measurements required as input to a radiative transfer program to predict the radiance level at the entrance pupil of the orbital sensor are discussed, and the ground instrumentation is described.

  7. aCNViewer: Comprehensive genome-wide visualization of absolute copy number and copy neutral variations

    PubMed Central

    Wang-Renault, Shu-Fang; Letouzé, Eric; Imbeaud, Sandrine; Zucman-Rossi, Jessica; Deleuze, Jean-François; How-Kit, Alexandre

    2017-01-01

    Motivation: Copy number variations (CNV) include net gains or losses of part or whole chromosomal regions. They differ from copy neutral loss of heterozygosity (cn-LOH) events which do not induce any net change in the copy number and are often associated with uniparental disomy. These phenomena have long been reported to be associated with diseases, and particularly with cancer. Losses/gains of genomic regions are often correlated with lower/higher gene expression. On the other hand, loss of heterozygosity (LOH) and cn-LOH are common events in cancer and may be associated with the loss of a functional tumor suppressor gene. Therefore, identifying recurrent CNV and cn-LOH events can be important as they may highlight common biological components and give insights into the development or mechanisms of a disease. However, no currently available tools allow a comprehensive whole-genome visualization of recurrent CNVs and cn-LOH in groups of samples providing absolute quantification of the aberrations leading to the loss of potentially important information. Results: To overcome these limitations, we developed aCNViewer (Absolute CNV Viewer), a visualization tool for absolute CNVs and cn-LOH across a group of samples. aCNViewer proposes three graphical representations: dendrograms, bi-dimensional heatmaps showing chromosomal regions sharing similar abnormality patterns, and quantitative stacked histograms facilitating the identification of recurrent absolute CNVs and cn-LOH. We illustrated aCNViewer using publicly available hepatocellular carcinomas (HCCs) Affymetrix SNP Array data (Fig 1A). Regions 1q and 8q present a similar percentage of total gains but significantly different copy number gain categories (p-value of 0.0103 with a Fisher exact test), validated by another cohort of HCCs (p-value of 5.6e-7) (Fig 2B). Availability and implementation: aCNViewer is implemented in python and R and is available with a GNU GPLv3 license on GitHub https

  8. aCNViewer: Comprehensive genome-wide visualization of absolute copy number and copy neutral variations.

    PubMed

    Renault, Victor; Tost, Jörg; Pichon, Fabien; Wang-Renault, Shu-Fang; Letouzé, Eric; Imbeaud, Sandrine; Zucman-Rossi, Jessica; Deleuze, Jean-François; How-Kit, Alexandre

    2017-01-01

    Copy number variations (CNV) include net gains or losses of part or whole chromosomal regions. They differ from copy neutral loss of heterozygosity (cn-LOH) events which do not induce any net change in the copy number and are often associated with uniparental disomy. These phenomena have long been reported to be associated with diseases, and particularly with cancer. Losses/gains of genomic regions are often correlated with lower/higher gene expression. On the other hand, loss of heterozygosity (LOH) and cn-LOH are common events in cancer and may be associated with the loss of a functional tumor suppressor gene. Therefore, identifying recurrent CNV and cn-LOH events can be important as they may highlight common biological components and give insights into the development or mechanisms of a disease. However, no currently available tools allow a comprehensive whole-genome visualization of recurrent CNVs and cn-LOH in groups of samples providing absolute quantification of the aberrations leading to the loss of potentially important information. To overcome these limitations, we developed aCNViewer (Absolute CNV Viewer), a visualization tool for absolute CNVs and cn-LOH across a group of samples. aCNViewer proposes three graphical representations: dendrograms, bi-dimensional heatmaps showing chromosomal regions sharing similar abnormality patterns, and quantitative stacked histograms facilitating the identification of recurrent absolute CNVs and cn-LOH. We illustrated aCNViewer using publicly available hepatocellular carcinomas (HCCs) Affymetrix SNP Array data (Fig 1A). Regions 1q and 8q present a similar percentage of total gains but significantly different copy number gain categories (p-value of 0.0103 with a Fisher exact test), validated by another cohort of HCCs (p-value of 5.6e-7) (Fig 2B). aCNViewer is implemented in python and R and is available with a GNU GPLv3 license on GitHub https://github.com/FJD-CEPH/aCNViewer and Docker https

  9. Neural network versus classical time series forecasting models

    NASA Astrophysics Data System (ADS)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

    An artificial neural network (ANN) has an advantage in time series forecasting as it has the potential to solve complex forecasting problems. This is because ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using the mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when the Box-Cox transformation was used for data preprocessing.
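
    A small sketch of the Box-Cox preprocessing step mentioned above, using SciPy; the price series is hypothetical and the forecasting model itself is omitted.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    # Hypothetical positive-valued series standing in for gold prices.
    prices = np.array([1200.0, 1215.0, 1230.0, 1228.0, 1250.0, 1262.0, 1275.0, 1290.0])

    # Box-Cox requires strictly positive data; lambda is estimated by maximum likelihood.
    transformed, lam = stats.boxcox(prices)

    # ... a neural network or SARIMA model would be fitted on `transformed` here ...

    # Forecasts made on the transformed scale must be back-transformed before
    # computing MAD, RMSE or MAPE on the original scale.
    back = inv_boxcox(transformed, lam)
    print(np.allclose(back, prices))  # True
    ```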

  10. Comparison of INAR(1)-Poisson model and Markov prediction model in forecasting the number of DHF patients in west java Indonesia

    NASA Astrophysics Data System (ADS)

    Ahdika, Atina; Lusiyana, Novyan

    2017-02-01

    The World Health Organization (WHO) has noted Indonesia as the country with the highest number of dengue hemorrhagic fever (DHF) cases in Southeast Asia. There is no vaccine or specific treatment for DHF. One of the efforts that can be undertaken by both the government and residents is prevention. In statistics, there are several methods to predict the number of DHF cases that can be used as a reference for preventing DHF. In this paper, a discrete time series model, specifically the INAR(1)-Poisson model, and a Markov prediction model (MPM) are used to predict the number of DHF patients in West Java, Indonesia. The result shows that MPM is the best model since it has the smallest values of MAE (mean absolute error) and MAPE (mean absolute percentage error).

  11. Error-related brain activity and error awareness in an error classification paradigm.

    PubMed

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear what role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing between partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ε^-(d-1) error correction cycles. Here ε << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.

  13. Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ming; Cygler,

    The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors and to eventually help coach patients’ breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials determined using an orthogonal x-ray imaging system and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, for the number of treatment fractions and the number of patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile for the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in the superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.

  14. Absolute pitch among students at the Shanghai Conservatory of Music: a large-scale direct-test study.

    PubMed

    Deutsch, Diana; Li, Xiaonuo; Shen, Jing

    2013-11-01

    This paper reports a large-scale direct-test study of absolute pitch (AP) in students at the Shanghai Conservatory of Music. Overall note-naming scores were very high, with high scores correlating positively with early onset of musical training. Students who had begun training at age ≤5 yr scored 83% correct not allowing for semitone errors and 90% correct allowing for semitone errors. Performance levels were higher for white key pitches than for black key pitches. This effect was greater for orchestral performers than for pianists, indicating that it cannot be attributed to early training on the piano. Rather, accuracy in identifying notes of different names (C, C#, D, etc.) correlated with their frequency of occurrence in a large sample of music taken from the Western tonal repertoire. There was also an effect of pitch range, so that performance on tones in the two-octave range beginning on Middle C was higher than on tones in the octave below Middle C. In addition, semitone errors tended to be on the sharp side. The evidence also ran counter to the hypothesis, previously advanced by others, that the note A plays a special role in pitch identification judgments.

  15. Validity of near-infrared interactance (FUTREX 6100/XL) for estimating body fat percentage in elite rowers.

    PubMed

    Fukuda, David H; Wray, Mandy E; Kendall, Kristina L; Smith-Ryan, Abbie E; Stout, Jeffrey R

    2017-07-01

    This investigation aimed to compare hydrostatic weighing (HW) with near-infrared interactance (NIR) and skinfold measurements (SKF) in estimating body fat percentage (FAT%) in rowing athletes. FAT% was estimated in 20 elite male rowers (mean ± SD: age = 24·8 ± 2·2 years, height = 191·0 ± 6·8 cm, weight = 86·8 ± 11·3 kg, HW FAT% = 11·50 ± 3·16%) using HW with residual volume, 3-site SKF and NIR on the biceps brachii. Predicted FAT% values for NIR and SKF were validated against the criterion method of HW. Constant error was not significant for NIR (-0·06, P = 0·955) or SKF (-0·20, P = 0·813). Neither NIR (r = 0·045) nor SKF (r = 0·229) demonstrated significant validity coefficients when compared to HW. The standard error of the estimate values for NIR and SKF were both less than 3·5%, while total error was 4·34% and 3·60%, respectively. When compared to HW, SKF and NIR provide similar mean values, but the lack of apparent relationships between individual values and borderline unacceptable total error may limit their application in this population. © 2015 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  16. 26 CFR 31.3402(b)-1 - Percentage method of withholding.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 15 2012-04-01 2012-04-01 false Percentage method of withholding. 31.3402(b)-1... SOURCE Collection of Income Tax at Source § 31.3402(b)-1 Percentage method of withholding. With respect... percentage method of withholding shall be determined under the applicable percentage method withholding table...

  17. A practical method of estimating standard error of age in the fission track dating method

    USGS Publications Warehouse

    Johnson, N.M.; McGee, V.E.; Naeser, C.W.

    1979-01-01

    A first-order approximation formula for the propagation of error in the fission track age equation is given by P_A = C[P_s² + P_i² + P_φ² − 2rP_sP_i]^(1/2), where P_A, P_s, P_i and P_φ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method. © 1979.
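
    The propagation formula can be evaluated directly; in the sketch below the constant C is left as a placeholder (1.0) and the percentage errors are invented values, chosen only to show how a positive correlation r reduces the propagated age error.

    ```python
    import math

    def percent_age_error(p_s, p_i, p_phi, r, c=1.0):
        """First-order percentage error of a fission-track age.

        p_s, p_i, p_phi: percentage errors of spontaneous track density, induced
        track density and neutron dose; r: correlation between spontaneous and
        induced densities. c stands for the constant C in the formula above;
        1.0 here is only a placeholder assumption.
        """
        return c * math.sqrt(p_s ** 2 + p_i ** 2 + p_phi ** 2 - 2.0 * r * p_s * p_i)

    # A positive correlation between track densities lowers the propagated error:
    print(percent_age_error(8.0, 7.0, 3.0, r=0.0))  # ~11.0
    print(percent_age_error(8.0, 7.0, 3.0, r=0.6))  # ~7.4
    ```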

  18. Microionization chamber for reference dosimetry in IMRT verification: clinical implications on OAR dosimetric errors

    NASA Astrophysics Data System (ADS)

    Sánchez-Doblado, Francisco; Capote, Roberto; Leal, Antonio; Roselló, Joan V.; Lagares, Juan I.; Arráns, Rafael; Hartmann, Günther H.

    2005-03-01

    Intensity modulated radiotherapy (IMRT) has become a treatment of choice in many oncological institutions. Small fields or beamlets with sizes of 1 to 5 cm2 are now routinely used in IMRT delivery. Therefore, small ionization chambers (IC) with sensitive volumes <=0.1 cm3 are generally used for dose verification of an IMRT treatment. The measurement conditions during verification may be quite different from reference conditions normally encountered in clinical beam calibration, so dosimetry of these narrow photon beams pertains to the so-called non-reference conditions for beam calibration. This work aims at estimating the error made when measuring the organ at risk's (OAR) absolute dose by a micro ion chamber (μIC) in a typical IMRT treatment. The dose error comes from the assumption that the dosimetric parameters determining the absolute dose are the same as for the reference conditions. We have selected two clinical cases, treated by IMRT, for our dose error evaluations. Detailed geometrical simulation of the μIC and the dose verification set-up was performed. The Monte Carlo (MC) simulation allows us to calculate the dose measured by the chamber as a dose averaged over the air cavity within the ion-chamber active volume (Dair). The absorbed dose to water (Dwater) is derived as the dose deposited inside the same volume, in the same geometrical position, filled and surrounded by water in the absence of the ion chamber. Therefore, the Dwater/Dair dose ratio is the MC estimator of the total correction factor needed to convert the absorbed dose in air into the absorbed dose in water. The dose ratio was calculated for the μIC located at the isocentre within the OARs for both clinical cases. The clinical impact of the calculated dose error was found to be negligible for the studied IMRT treatments.

  19. Linking Comparisons of Absolute Gravimeters: A Proof of Concept for a new Global Absolute Gravity Reference System.

    NASA Astrophysics Data System (ADS)

    Wziontek, H.; Palinkas, V.; Falk, R.; Vaľko, M.

    2016-12-01

    For decades, absolute gravimeters have been compared on a regular basis at an international level, starting at the International Bureau for Weights and Measures (BIPM) in 1981. Usually, these comparisons are based on constant reference values deduced from all accepted measurements acquired during the comparison period. Temporal changes between comparison epochs are usually not considered. Resolution No. 2, adopted by IAG during the IUGG General Assembly in Prague 2015, initiates the establishment of a Global Absolute Gravity Reference System based on key comparisons of absolute gravimeters (AG) under the International Committee for Weights and Measures (CIPM) in order to establish a common level in the microGal range. A stable and unique reference frame can only be achieved if different AG take part in different kinds of comparisons. Systematic deviations between the respective comparison reference values can be detected if the AG can be considered stable over time. The continuous operation of superconducting gravimeters (SG) on selected stations further supports the temporal link of comparison reference values by establishing a reference function over time. By a homogeneous reprocessing of different comparison epochs and including AG and SG time series at selected stations, links between several comparisons will be established and temporal comparison reference functions will be derived. By this, comparisons on a regional level can be traced back to the level of key comparisons, providing a reference for other absolute gravimeters. It will be demonstrated and discussed how such a concept can be used to support the future absolute gravity reference system.

  20. 27 CFR 5.40 - Statements of age and percentage.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Statements of age and... Distilled Spirits § 5.40 Statements of age and percentage. (a) Statements of age and percentage for whisky... more, statements of age and percentage are optional. As to all other whiskies there shall be stated the...

  1. 27 CFR 5.40 - Statements of age and percentage.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Statements of age and... Distilled Spirits § 5.40 Statements of age and percentage. (a) Statements of age and percentage for whisky... more, statements of age and percentage are optional. As to all other whiskies there shall be stated the...

  2. 27 CFR 5.40 - Statements of age and percentage.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Statements of age and... Distilled Spirits § 5.40 Statements of age and percentage. (a) Statements of age and percentage for whisky... more, statements of age and percentage are optional. As to all other whiskies there shall be stated the...

  3. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    NASA Technical Reports Server (NTRS)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a

  4. Zero tolerance prescribing: a strategy to reduce prescribing errors on the paediatric intensive care unit.

    PubMed

    Booth, Rachelle; Sturgess, Emma; Taberner-Stokes, Alison; Peters, Mark

    2012-11-01

    To establish the baseline prescribing error rate in a tertiary paediatric intensive care unit (PICU) and to determine the impact of a zero tolerance prescribing (ZTP) policy incorporating a dedicated prescribing area and daily feedback of prescribing errors. A prospective, non-blinded, observational study was undertaken in a 12-bed tertiary PICU over a period of 134 weeks. Baseline prescribing error data were collected on weekdays for all patients for a period of 32 weeks, following which the ZTP policy was introduced. Daily error feedback was introduced after a further 12 months. Errors were sub-classified as 'clinical', 'non-clinical' and 'infusion prescription' errors and the effects of interventions considered separately. The baseline combined prescribing error rate was 892 (95 % confidence interval (CI) 765-1,019) errors per 1,000 PICU occupied bed days (OBDs), comprising 25.6 % clinical, 44 % non-clinical and 30.4 % infusion prescription errors. The combined interventions of ZTP plus daily error feedback were associated with a reduction in the combined prescribing error rate to 447 (95 % CI 389-504) errors per 1,000 OBDs (p < 0.0001), an absolute risk reduction of 44.5 % (95 % CI 40.8-48.0 %). Introduction of the ZTP policy was associated with a significant decrease in clinical and infusion prescription errors, while the introduction of daily error feedback was associated with a significant reduction in non-clinical prescribing errors. The combined interventions of ZTP and daily error feedback were associated with a significant reduction in prescribing errors in the PICU, in line with Department of Health requirements of a 40 % reduction within 5 years.

  5. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    PubMed

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.

  6. On the sensitivity of TG-119 and IROC credentialing to TPS commissioning errors.

    PubMed

    McVicker, Drew; Yin, Fang-Fang; Adamson, Justus D

    2016-01-08

    We investigate the sensitivity of IMRT commissioning using the TG-119 C-shape phantom and credentialing with the IROC head and neck phantom to treatment planning system commissioning errors. We introduced errors into the various aspects of the commissioning process for a 6X photon energy modeled using the analytical anisotropic algorithm within a commercial treatment planning system. Errors were implemented into the various components of the dose calculation algorithm including primary photons, secondary photons, electron contamination, and MLC parameters. For each error we evaluated the probability that it could be committed unknowingly during the dose algorithm commissioning stage, and the probability of it being identified during the verification stage. The clinical impact of each commissioning error was evaluated using representative IMRT plans including low and intermediate risk prostate, head and neck, mesothelioma, and scalp; the sensitivity of the TG-119 and IROC phantoms was evaluated by comparing dosimetric changes to the dose planes where film measurements occur and change in point doses where dosimeter measurements occur. No commissioning errors were found to have both a low probability of detection and high clinical severity. When errors do occur, the IROC credentialing and TG 119 commissioning criteria are generally effective at detecting them; however, for the IROC phantom, OAR point-dose measurements are the most sensitive despite being currently excluded from IROC analysis. Point-dose measurements with an absolute dose constraint were the most effective at detecting errors, while film analysis using a gamma comparison and the IROC film distance to agreement criteria were less effective at detecting the specific commissioning errors implemented here.

  7. High variability in strain estimation errors when using a commercial ultrasound speckle tracking algorithm on tendon tissue.

    PubMed

    Fröberg, Åsa; Mårtensson, Mattias; Larsson, Matilda; Janerot-Sjöberg, Birgitta; D'Hooge, Jan; Arndt, Anton

    2016-10-01

    Ultrasound speckle tracking offers a non-invasive way of studying strain in the free Achilles tendon where no anatomical landmarks are available for tracking. This provides new possibilities for studying injury mechanisms during sport activity and the effects of shoes, orthotic devices, and rehabilitation protocols on tendon biomechanics. To investigate the feasibility of using a commercial ultrasound speckle tracking algorithm for assessing strain in tendon tissue. A polyvinyl alcohol (PVA) phantom, three porcine tendons, and a human Achilles tendon were mounted in a materials testing machine and loaded to 4% peak strain. Ultrasound long-axis cine-loops of the samples were recorded. Speckle tracking analysis of axial strain was performed using a commercial speckle tracking software. Estimated strain was then compared to reference strain known from the materials testing machine. Two frame rates and two region of interest (ROI) sizes were evaluated. Best agreement between estimated strain and reference strain was found in the PVA phantom (absolute error in peak strain: 0.21 ± 0.08%). The absolute error in peak strain varied between 0.72 ± 0.65% and 10.64 ± 3.40% in the different tendon samples. Strain determined with a frame rate of 39.4 Hz had lower errors than 78.6 Hz as was the case with a 22 mm compared to an 11 mm ROI. Errors in peak strain estimation showed high variability between tendon samples and were large in relation to strain levels previously described in the Achilles tendon. © The Foundation Acta Radiologica 2016.

  8. Investigating Absolute Value: A Real World Application

    ERIC Educational Resources Information Center

    Kidd, Margaret; Pagni, David

    2009-01-01

    Making connections between various representations is important in mathematics. In this article, the authors discuss the numeric, algebraic, and graphical representations of sums of absolute values of linear functions. The initial explanations are accessible to all students who have experience graphing and who understand that absolute value simply…

  9. Improving Papanicolaou test quality and reducing medical errors by using Toyota production system methods.

    PubMed

    Raab, Stephen S; Andrew-Jaja, Carey; Condel, Jennifer L; Dabbs, David J

    2006-01-01

    The objective of the study was to determine whether the Toyota production system process improves Papanicolaou test quality and patient safety. An 8-month nonconcurrent cohort study that included 464 case and 639 control women who had a Papanicolaou test was performed. Office workflow was redesigned using Toyota production system methods by introducing a 1-by-1 continuous flow process. We measured the frequency of Papanicolaou tests without a transformation zone component, follow-up and Bethesda System diagnostic frequency of atypical squamous cells of undetermined significance, and diagnostic error frequency. After the intervention, the percentage of Papanicolaou tests lacking a transformation zone component decreased from 9.9% to 4.7% (P = .001). The percentage of Papanicolaou tests with a diagnosis of atypical squamous cells of undetermined significance decreased from 7.8% to 3.9% (P = .007). The frequency of error per correlating cytologic-histologic specimen pair decreased from 9.52% to 7.84%. The introduction of the Toyota production system process resulted in improved Papanicolaou test quality.

  10. Stimulus probability effects in absolute identification.

    PubMed

    Kent, Christopher; Lamberts, Koen

    2016-05-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of presentation probability on both proportion correct and response times. The effects were moderated by the ubiquitous stimulus position effect. The accuracy and response time data were predicted by an exemplar-based model of perceptual cognition (Kent & Lamberts, 2005). The bow in discriminability was also attenuated when presentation probability for middle items was relatively high, an effect that will constrain future model development. The study provides evidence for item-specific learning in absolute identification. Implications for other theories of absolute identification are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Absolute Position of Targets Measured Through a Chamber Window Using Lidar Metrology Systems

    NASA Technical Reports Server (NTRS)

    Kubalak, David; Hadjimichael, Theodore; Ohl, Raymond; Slotwinski, Anthony; Telfer, Randal; Hayden, Joseph

    2012-01-01

    Lidar is a useful tool for taking metrology measurements without the need for physical contact with the parts under test. Lidar instruments are aimed at a target using azimuth and elevation stages, then focus a beam of coherent, frequency modulated laser energy onto the target, such as the surface of a mechanical structure. Energy from the reflected beam is mixed with an optical reference signal that travels in a fiber path internal to the instrument, and the range to the target is calculated based on the difference in the frequency of the returned and reference signals. In cases when the parts are in extreme environments, additional steps need to be taken to separate the operator and lidar from that environment. A model has been developed that accurately reduces the lidar data to an absolute position and accounts for the three media in the testbed (air, fused silica, and vacuum), but the approach can be adapted for any environment or material. The accuracy of laser metrology measurements depends upon knowing the parameters of the media through which the measurement beam travels. Under normal conditions, this means knowledge of the temperature, pressure, and humidity of the air in the measurement volume. In the past, chamber windows have been used to separate the measuring device from the extreme environment within the chamber and still permit optical measurement, but, so far, only relative changes have been diagnosed. The ability to make accurate measurements through a window presents a challenge as there are a number of factors to consider. In the case of the lidar, the window will increase the time-of-flight of the laser beam, causing a ranging error, and refract the direction of the beam, causing angular positioning errors. In addition, differences in pressure, temperature, and humidity on each side of the window will cause slight atmospheric index changes and induce deformation and a refractive index gradient within the window. Also, since the window is a
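
    As a rough sketch of the ranging error discussed above, the extra apparent range introduced by a plane window at normal incidence can be approximated from its thickness and refractive index; this ignores beam refraction offsets and the full air/fused-silica/vacuum model described in the record, and the index values are assumed.

    ```python
    def window_range_bias(thickness_m, n_window, n_outside=1.000273):
        """Approximate one-way ranging bias from a plane window at normal incidence:
        the beam accumulates extra optical path because the window's refractive
        index exceeds that of the surrounding medium (phase index used here as a
        simplification in place of the group index)."""
        return thickness_m * (n_window - n_outside)

    # 25 mm fused-silica window (n ~ 1.45 near typical lidar wavelengths, assumed value)
    print(window_range_bias(0.025, 1.45) * 1e3, "mm of apparent extra range")
    ```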

  12. A Conceptual Approach to Absolute Value Equations and Inequalities

    ERIC Educational Resources Information Center

    Ellis, Mark W.; Bryson, Janet L.

    2011-01-01

    The absolute value learning objective in high school mathematics requires students to solve far more complex absolute value equations and inequalities. When absolute value problems become more complex, students often do not have sufficient conceptual understanding to make any sense of what is happening mathematically. The authors suggest that the…

  13. Regional biases in absolute sea-level estimates from tide gauge data due to residual unmodeled vertical land movement

    NASA Astrophysics Data System (ADS)

    King, Matt A.; Keshin, Maxim; Whitehouse, Pippa L.; Thomas, Ian D.; Milne, Glenn; Riva, Riccardo E. M.

    2012-07-01

    The only vertical land movement signal routinely corrected for when estimating absolute sea-level change from tide gauge data is that due to glacial isostatic adjustment (GIA). We compare modeled GIA uplift (ICE-5G + VM2) with vertical land movement at ˜300 GPS stations located near to a global set of tide gauges, and find regionally coherent differences of commonly ±0.5-2 mm/yr. Reference frame differences and signal due to present-day mass trends cannot reconcile these differences. We examine sensitivity to the GIA Earth model by fitting to a subset of the GPS velocities and find substantial regional sensitivity, but no single Earth model is able to reduce the disagreement in all regions. We suggest errors in ice history and neglected lateral Earth structure dominate model-data differences, and urge caution in the use of modeled GIA uplift alone when interpreting regional- and global- scale absolute (geocentric) sea level from tide gauge data.

  14. [Absolute and relative strength-endurance of the knee flexor and extensor muscles: a reliability study using the IsoMed 2000-dynamometer].

    PubMed

    Dirnberger, J; Wiesinger, H P; Stöggl, T; Kösters, A; Müller, E

    2012-09-01

    Isokinetic devices are highly rated in strength-related performance diagnosis. A few years ago, the broad variety of existing products was extended by the IsoMed 2000-dynamometer. In order for an isokinetic device to be clinically useful, the reliability of specific applications must be established. Although there have already been single studies on this topic for the IsoMed 2000 concerning maximum strength measurements, there has been no study regarding the assessment of strength-endurance so far. The aim of the present study was to establish the reliability for various methods of quantification of strength-endurance using the IsoMed 2000. A sample of 33 healthy young subjects (age: 23.8 ± 2.6 years) participated in one familiarisation and two testing sessions, 3-4 days apart. Testing consisted of a series of 30 full-effort concentric extension-flexion cycles of the right knee muscles at an angular velocity of 180 °/s. Based on the parameters Peak Torque and Work for each repetition, indices of absolute (KADabs) and relative (KADrel) strength-endurance were derived. KADabs was calculated as the mean value of all testing repetitions; KADrel was determined in two ways: on the one hand, as the percentage decrease between the first and the last 5 repetitions (KADrelA) and, on the other, as the negative slope derived from the linear regression equation over all repetitions (KADrelB). Detection of systematic errors was performed using paired-sample t-tests; relative and absolute reliability were examined using the intraclass correlation coefficient (ICC 2.1) and the standard error of measurement (SEM%), respectively. In general, for extension measurements concerning KADabs and - in a weakened form - KADrel, high ICC values of 0.76-0.89 combined with clinically acceptable values of SEM% of 1.2-5.9 % could be found. For flexion measurements this only applies to KADabs, whereas results for KADrel turned out to be clearly weaker with ICC and SEM% values of 0.42-0.62 and 9
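
    A rough sketch of how the three indices reduce to operations on the per-repetition series; variable names and the synthetic data are assumptions for illustration, not the study's code.

        import numpy as np

        def endurance_indices(work_per_rep):
            """Strength-endurance indices from a series of 30 repetition values.

            KADabs  : mean of all repetitions (absolute endurance).
            KADrelA : percentage decrease from the first 5 to the last 5 repetitions.
            KADrelB : negative slope of a linear regression over all repetitions.
            """
            w = np.asarray(work_per_rep, dtype=float)
            kad_abs = w.mean()
            kad_rel_a = 100.0 * (w[:5].mean() - w[-5:].mean()) / w[:5].mean()
            slope, _ = np.polyfit(np.arange(len(w)), w, 1)
            kad_rel_b = -slope
            return kad_abs, kad_rel_a, kad_rel_b

        # Example with a synthetic, gradually fatiguing series of 30 repetitions.
        reps = 100.0 - 0.8 * np.arange(30) + np.random.default_rng(0).normal(0, 2, 30)
        print(endurance_indices(reps))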

  15. Absolute pitch in a four-year-old boy with autism.

    PubMed

    Brenton, James N; Devries, Seth P; Barton, Christine; Minnich, Heike; Sokol, Deborah K

    2008-08-01

    Absolute pitch is the ability to identify the pitch of an isolated tone. We report on a 4-year-old boy with autism and absolute pitch, one of the youngest reported in the literature. Absolute pitch is thought to be attributable to a single gene, transmitted in an autosomal-dominant fashion. The association of absolute pitch with autism raises the speculation that this talent could be linked to a genetically distinct subset of children with autism. Further, the identification of absolute pitch in even young children with autism may lead to a lifelong skill.

  16. Reducing errors benefits the field-based learning of a fundamental movement skill in children.

    PubMed

    Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W

    2013-03-01

    Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was ES. The ER program reduced errors by incrementally raising task difficulty, while the ES program incrementally lowered task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reduced performance errors in FMS training resulted in greater learning than a program that did not restrict errors. The reduced cognitive processing costs (effective dual-task performance) associated with such an approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.

  17. 7 CFR 982.41 - Free and restricted percentages.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... percentages in effect at the end of the previous marketing year shall be applicable. [51 FR 29548, Aug. 19... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing... WASHINGTON Order Regulating Handling Marketing Policy § 982.41 Free and restricted percentages. The free and...

  18. Identification and absolute quantification of enzymes in laundry detergents by liquid chromatography tandem mass spectrometry.

    PubMed

    Gaubert, Alexandra; Jeudy, Jérémy; Rougemont, Blandine; Bordes, Claire; Lemoine, Jérôme; Casabianca, Hervé; Salvador, Arnaud

    2016-07-01

    In a stricter legislative context, greener detergent formulations are developed. In this way, synthetic surfactants are frequently replaced by bio-sourced surfactants and/or used at lower concentrations in combination with enzymes. In this paper, a LC-MS/MS method was developed for the identification and quantification of enzymes in laundry detergents. Prior to the LC-MS/MS analyses, a specific sample preparation protocol was developed due to matrix complexity (high surfactant percentages). Then for each enzyme family mainly used in detergent formulations (protease, amylase, cellulase, and lipase), specific peptides were identified on a high resolution platform. A LC-MS/MS method was then developed in selected reaction monitoring (SRM) MS mode for the light and corresponding heavy peptides. The method was linear on the peptide concentration ranges 25-1000 ng/mL for protease, lipase, and cellulase; 50-1000 ng/mL for amylase; and 5-1000 ng/mL for cellulase in both water and laundry detergent matrices. The application of the developed analytical strategy to real commercial laundry detergents enabled enzyme identification and absolute quantification. For the first time, identification and absolute quantification of enzymes in laundry detergent was realized by LC-MS/MS in a single run. Graphical Abstract Identification and quantification of enzymes by LC-MS/MS.

  19. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations

    PubMed Central

    Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of single frequency, Sun's prediction equations, at population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error value between the single frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such slight improvement in accuracy of BIS methods is suggested insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489

  20. The Absolute Spectrum Polarimeter (ASP)

    NASA Technical Reports Server (NTRS)

    Kogut, A. J.

    2010-01-01

    The Absolute Spectrum Polarimeter (ASP) is an Explorer-class mission to map the absolute intensity and linear polarization of the cosmic microwave background and diffuse astrophysical foregrounds over the full sky from 30 GHz to 5 THz. The principal science goal is the detection and characterization of linear polarization from an inflationary epoch in the early universe, with tensor-to-scalar ratio r much greater than 10^-3 and Compton distortion y < 10^-6. We describe the ASP instrument and mission architecture needed to detect the signature of an inflationary epoch in the early universe using only 4 semiconductor bolometers.

  1. The absolute disparity anomaly and the mechanism of relative disparities.

    PubMed

    Chopin, Adrien; Levi, Dennis; Knill, David; Bavelier, Daphne

    2016-06-01

    There has been a long-standing debate about the mechanisms underlying the perception of stereoscopic depth and the computation of the relative disparities that it relies on. Relative disparities between visual objects could be computed in two ways: (a) using the difference in the object's absolute disparities (Hypothesis 1) or (b) using relative disparities based on the differences in the monocular separations between objects (Hypothesis 2). To differentiate between these hypotheses, we measured stereoscopic discrimination thresholds for lines with different absolute and relative disparities. Participants were asked to judge the depth of two lines presented at the same distance from the fixation plane (absolute disparity) or the depth between two lines presented at different distances (relative disparity). We used a single stimulus method involving a unique memory component for both conditions, and no extraneous references were available. We also measured vergence noise using Nonius lines. Stereo thresholds were substantially worse for absolute disparities than for relative disparities, and the difference could not be explained by vergence noise. We attribute this difference to an absence of conscious readout of absolute disparities, termed the absolute disparity anomaly. We further show that the pattern of correlations between vergence noise and absolute and relative disparity acuities can be explained jointly by the existence of the absolute disparity anomaly and by the assumption that relative disparity information is computed from absolute disparities (Hypothesis 1).

  2. The absolute disparity anomaly and the mechanism of relative disparities

    PubMed Central

    Chopin, Adrien; Levi, Dennis; Knill, David; Bavelier, Daphne

    2016-01-01

    There has been a long-standing debate about the mechanisms underlying the perception of stereoscopic depth and the computation of the relative disparities that it relies on. Relative disparities between visual objects could be computed in two ways: (a) using the difference in the object's absolute disparities (Hypothesis 1) or (b) using relative disparities based on the differences in the monocular separations between objects (Hypothesis 2). To differentiate between these hypotheses, we measured stereoscopic discrimination thresholds for lines with different absolute and relative disparities. Participants were asked to judge the depth of two lines presented at the same distance from the fixation plane (absolute disparity) or the depth between two lines presented at different distances (relative disparity). We used a single stimulus method involving a unique memory component for both conditions, and no extraneous references were available. We also measured vergence noise using Nonius lines. Stereo thresholds were substantially worse for absolute disparities than for relative disparities, and the difference could not be explained by vergence noise. We attribute this difference to an absence of conscious readout of absolute disparities, termed the absolute disparity anomaly. We further show that the pattern of correlations between vergence noise and absolute and relative disparity acuities can be explained jointly by the existence of the absolute disparity anomaly and by the assumption that relative disparity information is computed from absolute disparities (Hypothesis 1). PMID:27248566

  3. Using absolute gravimeter data to determine vertical gravity gradients

    USGS Publications Warehouse

    Robertson, D.S.

    2001-01-01

    The position versus time data from a free-fall absolute gravimeter can be used to estimate the vertical gravity gradient in addition to the gravity value itself. Hipkin has reported success in estimating the vertical gradient value using a data set of unusually good quality. This paper explores techniques that may be applicable to a broader class of data that may be contaminated with "system response" errors of larger magnitude than were evident in the data used by Hipkin. This system response function is usually modelled as a sum of exponentially decaying sinusoidal components. The technique employed here involves combining the x0, v0 and g parameters from all the drops made during a site occupation into a single least-squares solution, and including the value of the vertical gradient and the coefficients of system response function in the same solution. The resulting non-linear equations must be solved iteratively and convergence presents some difficulties. Sparse matrix techniques are used to make the least-squares problem computationally tractable.
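
    As background for the x0, v0 and g parameters mentioned above, the per-drop portion of such a fit reduces to a quadratic free-fall model. The sketch below uses synthetic drop data and omits the vertical gradient and system-response terms that the paper's combined solution estimates; it is an illustration, not the author's method.

        import numpy as np

        # Synthetic free-fall drop: x(t) = x0 + v0*t + 0.5*g*t^2 (illustration only).
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 0.2, 700)                       # time samples, s
        x_true = 0.001 + 0.05 * t + 0.5 * 9.8123456 * t**2   # position, m
        x_meas = x_true + rng.normal(0, 1e-9, t.size)        # add measurement noise

        # Least-squares fit of the quadratic model; g is twice the t^2 coefficient.
        coeffs = np.polyfit(t, x_meas, 2)
        g_est, v0_est, x0_est = 2 * coeffs[0], coeffs[1], coeffs[2]
        print(f"g = {g_est:.7f} m/s^2")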

  4. Rapid rotators revisited: absolute dimensions of KOI-13

    NASA Astrophysics Data System (ADS)

    Howarth, Ian D.; Morello, Giuseppe

    2017-09-01

    We analyse Kepler light-curves of the exoplanet Kepler Object of Interest no. 13b (KOI-13b) transiting its moderately rapidly rotating (gravity-darkened) parent star. A physical model, with minimal ad hoc free parameters, reproduces the time-averaged light-curve at the ˜10 parts per million level. We demonstrate that this Roche-model solution allows the absolute dimensions of the system to be determined from the star's projected equatorial rotation speed, ve sin I*, without any additional assumptions; we find a planetary radius RP = (1.33 ± 0.05) R♃, stellar polar radius Rp★ = (1.55 ± 0.06) R⊙, combined mass M* + MP( ≃ M*) = (1.47 ± 0.17) M⊙ and distance d ≃ (370 ± 25) pc, where the errors are dominated by uncertainties in relative flux contribution of the visual-binary companion KOI-13B. The implied stellar rotation period is within ˜5 per cent of the non-orbital, 25.43-hr signal found in the Kepler photometry. We show that the model accurately reproduces independent tomographic observations, and yields an offset between orbital and stellar-rotation angular-momentum vectors of 60.25° ± 0.05°.

  5. Preliminary Evidence for Reduced Post-Error Reaction Time Slowing in Hyperactive/Inattentive Preschool Children

    PubMed Central

    Berwid, Olga G.; Halperin, Jeffrey M.; Johnson, Ray E.; Marks, David J.

    2013-01-01

    Background Attention-Deficit/Hyperactivity Disorder has been associated with deficits in self-regulatory cognitive processes, some of which are thought to lie at the heart of the disorder. Slowing of reaction times (RTs) for correct responses following errors made during decision tasks has been interpreted as an indication of intact self-regulatory functioning and has been shown to be attenuated in school-aged children with ADHD. This study attempted to examine whether ADHD symptoms are associated with an early-emerging deficit in post-error slowing. Method A computerized two-choice RT task was administered to an ethnically diverse sample of preschool-aged children classified as either ‘control’ (n = 120) or ‘hyperactive/inattentive’ (HI; n = 148) using parent- and teacher-rated ADHD symptoms. Analyses were conducted to determine whether HI preschoolers exhibit a deficit in this self-regulatory ability. Results HI children exhibited reduced post-error slowing relative to controls on the trials selected for analysis. Supplementary analyses indicated that this may have been due to a reduced proportion of trials following errors on which HI children slowed rather than to a reduction in the absolute magnitude of slowing on all trials following errors. Conclusions High levels of ADHD symptoms in preschoolers may be associated with a deficit in error processing as indicated by post-error slowing. The results of supplementary analyses suggest that this deficit is perhaps more a result of failures to perceive errors than of difficulties with executive control. PMID:23387525

  6. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  7. Absolute Density Calibration Cell for Laser Induced Fluorescence Erosion Rate Measurements

    NASA Technical Reports Server (NTRS)

    Domonkos, Matthew T.; Stevens, Richard E.

    2001-01-01

    Flight qualification of ion thrusters typically requires testing on the order of 10,000 hours. Extensive knowledge of wear mechanisms and rates is necessary to establish design confidence prior to long duration tests. Consequently, real-time erosion rate measurements offer the potential both to reduce development costs and to enhance knowledge of the dependency of component wear on operating conditions. Several previous studies have used laser-induced fluorescence (LIF) to measure real-time, in situ erosion rates of ion thruster accelerator grids. Those studies provided only relative measurements of the erosion rate. In the present investigation, a molybdenum tube was resistively heated such that the evaporation rate yielded densities within the tube on the order of those expected from accelerator grid erosion. This work examines the suitability of the density cell as an absolute calibration source for LIF measurements, and the intrinsic error was evaluated.

  8. Introducing the Mean Absolute Deviation "Effect" Size

    ERIC Educational Resources Information Center

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…

  9. Monolithically integrated absolute frequency comb laser system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wanke, Michael C.

    2016-07-12

    Rather than down-convert optical frequencies, a QCL laser system directly generates a THz frequency comb in a compact monolithically integrated chip that can be locked to an absolute frequency without the need of a frequency-comb synthesizer. The monolithic, absolute frequency comb can provide a THz frequency reference and tool for high-resolution broad band spectroscopy.

  10. Modeling coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^-(dn-1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  11. Resolution-enhancement and sampling error correction based on molecular absorption line in frequency scanning interferometry

    NASA Astrophysics Data System (ADS)

    Pan, Hao; Qu, Xinghua; Shi, Chunzhao; Zhang, Fumin; Li, Yating

    2018-06-01

    The non-uniform interval resampling method has been widely used in frequency modulated continuous wave (FMCW) laser ranging. In large-bandwidth and long-distance measurements, the range peak is deteriorated due to the fiber dispersion mismatch. In this study, we analyze the frequency-sampling error caused by the mismatch and measure it using the spectroscopy of a molecular frequency reference line. By using an adjacent-point replacement and spline interpolation technique, the sampling errors could be eliminated. The results demonstrate that the proposed method is suitable for resolution enhancement and high-precision measurement. Moreover, using the proposed method, we achieved an absolute distance precision of better than 45 μm within 8 m.
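
    A minimal sketch of the equal-frequency resampling step that such corrections build on: the beat signal, sampled uniformly in time, is re-interpolated onto a uniform optical-frequency grid (here with a cubic spline) before the range FFT; an absorption-line reference would pin the absolute frequency scale of that grid. The sweep, constants, and signal below are synthetic assumptions, not the authors' implementation.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Synthetic FMCW beat signal with a slightly nonlinear optical-frequency sweep.
        t = np.linspace(0.0, 1.0, 4000)
        nu = 1.0e12 * (t + 0.02 * t**2)         # monotonic nonlinear sweep, Hz (assumed)
        phase_per_hz = 2 * np.pi * 1.0e-10      # phase slope set by the target range
        beat = np.cos(phase_per_hz * nu)

        # Resample the beat signal onto a uniform frequency grid so the FFT yields
        # a sharp range peak despite the nonlinear sweep.
        nu_uniform = np.linspace(nu.min(), nu.max(), nu.size)
        beat_resampled = CubicSpline(nu, beat)(nu_uniform)
        spectrum = np.abs(np.fft.rfft(beat_resampled * np.hanning(beat.size)))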

  12. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Electronic Absolute Cartesian Autocollimator

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    2006-01-01

    An electronic absolute Cartesian autocollimator performs the same basic optical function as does a conventional all-optical or a conventional electronic autocollimator but differs in the nature of its optical target and the manner in which the position of the image of the target is measured. The term absolute in the name of this apparatus reflects the nature of the position measurement, which, unlike in a conventional electronic autocollimator, is based absolutely on the position of the image rather than on an assumed proportionality between the position and the levels of processed analog electronic signals. The term Cartesian in the name of this apparatus reflects the nature of its optical target. Figure 1 depicts the electronic functional blocks of an electronic absolute Cartesian autocollimator along with its basic optical layout, which is the same as that of a conventional autocollimator. Referring first to the optical layout and functions only, this or any autocollimator is used to measure the compound angular deviation of a flat datum mirror with respect to the optical axis of the autocollimator itself. The optical components include an illuminated target, a beam splitter, an objective or collimating lens, and a viewer or detector (described in more detail below) at a viewing plane. The target and the viewing planes are focal planes of the lens. Target light reflected by the datum mirror is imaged on the viewing plane at unit magnification by the collimating lens. If the normal to the datum mirror is parallel to the optical axis of the autocollimator, then the target image is centered on the viewing plane. Any angular deviation of the normal from the optical axis manifests itself as a lateral displacement of the target image from the center. The magnitude of the displacement is proportional to the focal length and to the magnitude (assumed to be small) of the angular deviation. The direction of the displacement is perpendicular to the axis about which the

  14. Holt-Winters Forecasting: A Study of Practical Applications for Healthcare Managers

    DTIC Science & Technology

    2006-05-25

    List of Tables: Table 1. Holt-Winters smoothing parameters and Mean Absolute Percentage Errors: pseudoephedrine prescriptions; Table 2... confidence intervals. List of Figures: Figure 1. Line plot of pseudoephedrine prescriptions forecast using smoothing parameters... The first represents monthly prescriptions of pseudoephedrine. Pseudoephedrine is a drug commonly prescribed to relieve nasal congestion and other
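
    For readers unfamiliar with the technique, an additive Holt-Winters recursion with a MAPE-style hold-out evaluation can be sketched as below. The smoothing parameters and the synthetic monthly series are assumptions for illustration, not the report's data or settings.

        import numpy as np

        def holt_winters_additive(y, season, alpha, beta, gamma, horizon):
            """Additive Holt-Winters smoothing; returns in-sample fit and forecasts.

            alpha, beta, gamma are the level, trend, and seasonal smoothing
            parameters (normally chosen to minimize an error metric such as MAPE).
            """
            y = np.asarray(y, dtype=float)
            level = y[:season].mean()
            trend = (y[season:2 * season].mean() - y[:season].mean()) / season
            seasonal = list(y[:season] - level)
            fitted = []
            for t, obs in enumerate(y):
                s = seasonal[t % season]
                fitted.append(level + trend + s)          # one-step-ahead fit
                new_level = alpha * (obs - s) + (1 - alpha) * (level + trend)
                trend = beta * (new_level - level) + (1 - beta) * trend
                seasonal[t % season] = gamma * (obs - new_level) + (1 - gamma) * s
                level = new_level
            forecasts = [level + (h + 1) * trend + seasonal[(len(y) + h) % season]
                         for h in range(horizon)]
            return np.array(fitted), np.array(forecasts)

        def mape(actual, predicted):
            actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
            return 100.0 * np.mean(np.abs((actual - predicted) / actual))

        # Example on a synthetic monthly series with yearly seasonality.
        rng = np.random.default_rng(1)
        months = np.arange(48)
        series = 200 + 2 * months + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, 48)
        fit, fc = holt_winters_additive(series[:36], season=12,
                                        alpha=0.3, beta=0.05, gamma=0.2, horizon=12)
        print(f"hold-out MAPE: {mape(series[36:], fc):.1f}%")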

  15. Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data

    PubMed Central

    Young, Alistair A.; Li, Xiaosong

    2014-01-01

    Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely, two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic disease proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVMs outperform the ARIMA model and decomposition methods in most cases. PMID:24505382
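
    The three evaluation metrics named above have direct definitions; a generic sketch (not the study's code) is:

        import numpy as np

        def forecast_errors(actual, predicted):
            """Mean absolute error, mean absolute percentage error, mean square error."""
            a = np.asarray(actual, dtype=float)
            p = np.asarray(predicted, dtype=float)
            err = a - p
            mae = np.mean(np.abs(err))
            mape = 100.0 * np.mean(np.abs(err / a))   # undefined if any actual value is 0
            mse = np.mean(err ** 2)
            return mae, mape, mse

        # Example: comparing two candidate models on the same hold-out data.
        actual = [120, 135, 148, 160, 152]
        print(forecast_errors(actual, [118, 140, 150, 155, 149]))   # model A
        print(forecast_errors(actual, [110, 150, 160, 170, 140]))   # model B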

  16. Artificial neural network modelling of a large-scale wastewater treatment plant operation.

    PubMed

    Güçlü, Dünyamin; Dursun, Sükrü

    2010-11-01

    Artificial Neural Networks (ANNs), an artificial intelligence method, provide effective predictive models for complex processes. Three independent ANN models trained with the back-propagation algorithm were developed to predict effluent chemical oxygen demand (COD), suspended solids (SS) and aeration tank mixed liquor suspended solids (MLSS) concentrations of the Ankara central wastewater treatment plant. The appropriate architecture of the ANN models was determined through several steps of training and testing of the models. The ANN models yielded satisfactory predictions. Results of the root mean square error, mean absolute error and mean absolute percentage error were 3.23, 2.41 mg/L and 5.03% for COD; 1.59, 1.21 mg/L and 17.10% for SS; 52.51, 44.91 mg/L and 3.77% for MLSS, respectively, indicating that the developed model could be efficiently used. The results overall also confirm that the ANN modelling approach may have a great implementation potential for simulation, precise performance prediction and process control of wastewater treatment plants.
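
    The workflow of fitting a small feed-forward network and scoring it with RMSE, MAE, and MAPE can be sketched as follows; this is a generic illustration with synthetic data and an arbitrary network size, not the plant data or architecture from the study.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Synthetic influent features -> effluent COD example (illustration only).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 4))                                 # e.g. influent COD, SS, flow, pH
        y = 50 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 3, 200)   # effluent COD, mg/L

        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        model.fit(X[:150], y[:150])          # train on the first 150 samples
        pred = model.predict(X[150:])        # evaluate on the held-out 50 samples

        err = y[150:] - pred
        print("RMSE:", np.sqrt(np.mean(err ** 2)))
        print("MAE :", np.mean(np.abs(err)))
        print("MAPE:", 100 * np.mean(np.abs(err / y[150:])), "%")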

  17. Method of excess fractions with application to absolute distance metrology: wavelength selection and the effects of common error sources.

    PubMed

    Falaggis, Konstantinos; Towers, David P; Towers, Catherine E

    2012-09-20

    Multiwavelength interferometry (MWI) is a well established technique in the field of optical metrology. Previously, we have reported a theoretical analysis of the method of excess fractions that describes the mutual dependence of unambiguous measurement range, reliability, and the measurement wavelengths. In this paper, wavelength selection strategies are introduced that are built on the theoretical description and maximize the reliability in the calculated fringe order for a given measurement range, number of wavelengths, and level of phase noise. Practical implementation issues for an MWI interferometer are analyzed theoretically. It is shown that dispersion compensation is best implemented by use of reference measurements around absolute zero in the interferometer. Furthermore, the effects of wavelength uncertainty allow the ultimate performance of an MWI interferometer to be estimated.

  18. Percentage extremity fat, but not percentage trunk fat, is lower in adolescent boys with anorexia nervosa than in healthy adolescents

    PubMed Central

    Misra, Madhusmita; Katzman, Debra K; Cord, Jennalee; Manning, Stephanie J; Mickley, Diane; Herzog, David B; Miller, Karen K; Klibanski, Anne

    2013-01-01

    Background Anorexia nervosa (AN) is a condition of severe undernutrition associated with altered regional fat distribution in females. Although primarily a disease of females, AN is increasingly being recognized in males and is associated with hypogonadism. Testosterone is a major regulator of body composition in males, and testosterone administration in adults decreases visceral fat. However, the effect of low testosterone and other hormonal alterations on body composition in boys with AN is not known. Objective We hypothesized that testosterone deficiency in boys with AN is associated with higher trunk fat, as opposed to extremity fat, compared with control subjects. Design We assessed body composition using dual-energy X-ray absorptiometry and measured fasting testosterone, estradiol, insulin-like growth factor-1, leptin, and active ghrelin concentrations in 15 boys with AN and in 15 control subjects of comparable maturity aged 12–19 y. Results Fat and lean mass in AN boys were 69% and 86% of those in control subjects. Percentage extremity fat and extremity lean mass were lower in boys with AN (P = 0.003 and 0.0008); however, percentage trunk fat and the trunk to extremity fat ratio were higher after weight was adjusted for (P = 0.005 and 0.003). Testosterone concentrations were lower in boys with AN, and, on regression modeling, positively predicted percentage extremity lean mass and inversely predicted percentage trunk fat and trunk to extremity fat ratio. Other independent predictors of regional body composition were bone age and weight. Conclusions In adolescent boys with AN, higher percentage trunk fat, higher trunk to extremity fat ratio, lower percentage extremity fat, and lower extremity lean mass (adjusted for weight) are related to the hypogonadal state. PMID:19064506

  19. Geographic origin of publications in radiological journals as a function of GDP and percentage of GDP spent on research.

    PubMed

    Halpenny, Darragh; Burke, John; McNeill, Graeme; Snow, Aisling; Torreggiani, William C

    2010-06-01

    The aim of this study was to examine the geographic origin of publications in the highest impacting radiology journals and to examine the link between the percentage of gross domestic product (GDP) spent on research by a country and the output of radiology publications. The five highest impacting general radiology journals (according to the ISI Web of Knowledge database) were selected over a 6-year period from January 2002 to December 2007. Publications were totaled according to the country of the corresponding author. Publications (total and corrected for population size) were assessed according to the GDP of a given country and the percentage of GDP spent on research in that country. Correlation was determined using Spearman's rank. In total, 10,925 papers were identified. The top 10 nations produced 83.9% of the total number of papers. The United States was the most prolific country, with 41.7% of the total. The second-ranked and third-ranked countries were Germany (11.6%) and Japan (6.7%). Corrected for GDP, smaller European countries outperformed larger nations. Switzerland (0.925 publications per billion of GDP), Austria (0.694 publications per billion of GDP), and Belgium (0.648 publications per billion of GDP) produced the most papers per billion of GDP. When corrected for percentage of GDP spent on research, European countries again ranked highest, with Greece, Turkey, and Belgium having the best ratios. The percentage of GDP spent on research was positively correlated with the number of publications in high-ranking radiology journals (r = 0.603, P < .001). The United States is the most productive country in absolute number of publications. The flaws of using population size to compare publication output are clear, and a comparison using GDP and the percentage of GDP spent on research may give more meaningful results. When GDP is taken into consideration, smaller European countries are more productive. The importance of investment in radiologic research is

  20. Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis

    NASA Technical Reports Server (NTRS)

    Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl

    2009-01-01

    The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.

  1. Evaluation of causes and frequency of medication errors during information technology downtime.

    PubMed

    Hanuscak, Tara L; Szeinbach, Sheryl L; Seoane-Vazquez, Enrique; Reichert, Brendan J; McCluskey, Charles F

    2009-06-15

    The causes and frequency of medication errors occurring during information technology downtime were evaluated. Individuals from a convenience sample of 78 hospitals who were directly responsible for supporting and maintaining clinical information systems (CISs) and automated dispensing systems (ADSs) were surveyed using an online tool between February 2007 and May 2007 to determine if medication errors were reported during periods of system downtime. The errors were classified using the National Coordinating Council for Medication Error Reporting and Prevention severity scoring index. The percentage of respondents reporting downtime was estimated. Of the 78 eligible hospitals, 32 respondents with CIS and ADS responsibilities completed the online survey for a response rate of 41%. For computerized prescriber order entry, patch installations and system upgrades caused an average downtime of 57% over a 12-month period. Lost interface and interface malfunction were reported for centralized and decentralized ADSs, with an average downtime response of 34% and 29%, respectively. The average downtime response was 31% for software malfunctions linked to clinical decision-support systems. Although patient harm did not result from 30 (54%) medication errors, the potential for harm was present for 9 (16%) of these errors. Medication errors occurred during CIS and ADS downtime despite the availability of backup systems and standard protocols to handle periods of system downtime. Efforts should be directed to reduce the frequency and length of downtime in order to minimize medication errors during such downtime.

  2. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    PubMed Central

    Clark, Kevin B.

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in

  3. 39 CFR 3010.23 - Calculation of percentage change in rates.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... class of mail, the percentage change in rates is calculated in three steps. First, the volume of each... 39 Postal Service 1 2011-07-01 2011-07-01 false Calculation of percentage change in rates. 3010.23... DOMINANT PRODUCTS Rules for Applying the Price Cap § 3010.23 Calculation of percentage change in rates. (a...

  4. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Mark; Tuen Mun Hospital, Hong Kong; Grehn, Melanie

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  5. Influence of different rotation angles in assessment of lung volumes by 3-dimensional sonography in comparison to magnetic resonance imaging in healthy fetuses.

    PubMed

    Kehl, Sven; Eckert, Sven; Sütterlin, Marc; Neff, K Wolfgang; Siemer, Jörn

    2011-06-01

    Three-dimensional (3D) sonographic volumetry is established in gynecology and obstetrics. Assessment of the fetal lung volume by magnetic resonance imaging (MRI) in congenital diaphragmatic hernias has become a routine examination. In vitro studies have shown a good correlation between 3D sonographic measurements and MRI. The aim of this study was to compare the lung volumes of healthy fetuses assessed by 3D sonography to MRI measurements and to investigate the impact of different rotation angles. A total of 126 fetuses between 20 and 40 weeks' gestation were measured by 3D sonography, and 27 of them were also assessed by MRI. The sonographic volumes were calculated by the rotational technique (virtual organ computer-aided analysis) with rotation angles of 6° and 30°. To evaluate the accuracy of 3D sonographic volumetry, percentage error and absolute percentage error values were calculated using MRI volumes as reference points. Formulas to calculate total, right, and left fetal lung volumes according to gestational age and biometric parameters were derived by stepwise regression analysis. Three-dimensional sonographic volumetry showed a high correlation compared to MRI (6° angle, R(2) = 0.971; 30° angle, R(2) = 0.917) with no systematic error for the 6° angle. Moreover, using the 6° rotation angle, the median absolute percentage error was significantly lower compared to the 30° angle (P < .001). The new formulas to calculate total lung volume in healthy fetuses only included gestational age and no biometric parameters (R(2) = 0.853). Three-dimensional sonographic volumetry of lung volumes in healthy fetuses showed a good correlation with MRI. We recommend using an angle of 6° because it assessed the lung volume more accurately. The specifically designed equations help estimate lung volumes in healthy fetuses.
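
    The percentage error and absolute percentage error statistics quoted above follow from the usual definitions against the MRI reference; a brief sketch with hypothetical lung volumes (not the study's data):

        import numpy as np

        def percentage_errors(us_volumes, mri_volumes):
            """Median signed and absolute percentage errors of 3D-sonography volumes
            relative to MRI reference volumes (illustrative values only)."""
            us = np.asarray(us_volumes, dtype=float)
            mri = np.asarray(mri_volumes, dtype=float)
            pe = 100.0 * (us - mri) / mri
            return np.median(pe), np.median(np.abs(pe))

        # Hypothetical lung volumes (mL) for a 6° rotation angle versus MRI.
        print(percentage_errors([31.0, 44.5, 52.0], [30.2, 45.1, 53.4]))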

  6. Is a shift from research on individual medical error to research on health information technology underway? A 40-year analysis of publication trends in medical journals.

    PubMed

    Erlewein, Daniel; Bruni, Tommaso; Gadebusch Bondio, Mariacarla

    2018-06-07

    In 1983, McIntyre and Popper underscored the need for more openness in dealing with errors in medicine. Since then, much has been written on individual medical errors. Furthermore, at the beginning of the 21st century, researchers and medical practitioners increasingly approached individual medical errors through health information technology. Hence, the question arises whether the attention of biomedical researchers shifted from individual medical errors to health information technology. We ran a study to determine publication trends concerning individual medical errors and health information technology in medical journals over the last 40 years. We used the Medical Subject Headings (MeSH) taxonomy in the database MEDLINE. Each year, we analyzed the percentage of relevant publications to the total number of publications in MEDLINE. The trends identified were tested for statistical significance. Our analysis showed that the percentage of publications dealing with individual medical errors increased from 1976 until the beginning of the 21st century but began to drop in 2003. Both the upward and the downward trends were statistically significant (P < 0.001). A breakdown by country revealed that it was the weight of the US and British publications that determined the overall downward trend after 2003. On the other hand, the percentage of publications dealing with health information technology doubled between 2003 and 2015. The upward trend was statistically significant (P < 0.001). The identified trends suggest that the attention of biomedical researchers partially shifted from individual medical errors to health information technology in the USA and the UK. © 2018 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.

  7. Absolute tracer dye concentration using airborne laser-induced water Raman backscatter

    NASA Technical Reports Server (NTRS)

    Hoge, F. E.; Swift, R. N.

    1981-01-01

    The use of simultaneous airborne-laser-induced dye fluorescence and water Raman backscatter to measure the absolute concentration of an ocean-dispersed tracer dye is discussed. Theoretical considerations of the calculation of dye concentration by the numerical comparison of airborne laser-induced fluorescence spectra with laboratory spectra for known dye concentrations using the 3400/cm OH-stretch water Raman scatter as a calibration signal are presented which show that minimum errors are obtained and no data concerning water mass transmission properties are required when the laser wavelength is chosen to yield a Raman signal near the dye emission band. Results of field experiments conducted with an airborne conical scan lidar over a site in New York Bight into which rhodamine dye had been injected in a study of oil spill dispersion are then indicated which resulted in a contour map of dye concentrations, with a minimum detectable dye concentration of approximately 2 ppb by weight.

  8. Dynamic Neural Correlates of Motor Error Monitoring and Adaptation during Trial-to-Trial Learning

    PubMed Central

    Tan, Huiling; Jenkinson, Ned

    2014-01-01

    A basic EEG feature upon voluntary movements in healthy human subjects is a β (13–30 Hz) band desynchronization followed by a postmovement event-related synchronization (ERS) over contralateral sensorimotor cortex. The functional implications of these changes remain unclear. We hypothesized that, because β ERS follows movement, it may reflect the degree of error in that movement, and the salience of that error to the task at hand. As such, the signal might underpin trial-to-trial modifications of the internal model that informs future movements. To test this hypothesis, EEG was recorded in healthy subjects while they moved a joystick-controlled cursor to visual targets on a computer screen, with different rotational perturbations applied between the joystick and cursor. We observed consistently lower β ERS in trials with large error, even when other possible motor confounds, such as reaction time, movement duration, and path length, were controlled, regardless of whether the perturbation was random or constant. There was a negative trial-to-trial correlation between the size of the absolute initial angular error and the amplitude of the β ERS, and this negative correlation was enhanced when other contextual information about the behavioral salience of the angular error, namely, the bias and variance of errors in previous trials, was additionally considered. These same features also had an impact on the behavioral performance. The findings suggest that the β ERS reflects neural processes that evaluate motor error and do so in the context of the prior history of errors. PMID:24741058

  9. Performance Evaluation of Three Blood Glucose Monitoring Systems Using ISO 15197: 2013 Accuracy Criteria, Consensus and Surveillance Error Grid Analyses, and Insulin Dosing Error Modeling in a Hospital Setting.

    PubMed

    Bedini, José Luis; Wallace, Jane F; Pardo, Scott; Petruschke, Thorsten

    2015-10-07

    Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients' health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs relative to reference values than the other 2 BGMS. Insulin dosing errors were lower for the Contour Next USB than for the other systems. All BGMS fulfilled ISO 15197:2013 accuracy limit criteria and the CEG criterion. However, taking all analyses together, differences in performance of potential clinical relevance may be observed. Results showed that the Contour Next USB had the lowest MARD values across the tested glucose range, as compared with the 2 other BGMS. CEG and SEG analyses as well as calculation of the hypothetical bolus insulin dosing error suggest a high accuracy of the Contour Next USB. © 2015 Diabetes Technology Society.

  10. Error begat error: design error analysis and prevention in social infrastructure projects.

    PubMed

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors still remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  11. Using, Seeing, Feeling, and Doing Absolute Value for Deeper Understanding

    ERIC Educational Resources Information Center

    Ponce, Gregorio A.

    2008-01-01

    Using sticky notes and number lines, a hands-on activity is shared that anchors initial student thinking about absolute value. The initial point of reference should help students successfully evaluate numeric problems involving absolute value. They should also be able to solve absolute value equations and inequalities that are typically found in…

  12. Development of an Adaptive Filter to Estimate the Percentage of Body Fat Based on Anthropometric Measures

    NASA Astrophysics Data System (ADS)

    do Lago, Naydson Emmerson S. P.; Kardec Barros, Allan; Sousa, Nilviane Pires S.; Junior, Carlos Magno S.; Oliveira, Guilherme; Guimares Polisel, Camila; Eder Carvalho Santana, Ewaldo

    2018-01-01

    This study aims to develop an adaptive filter algorithm to determine the percentage of body fat based on anthropometric indicators in adolescents. Measurements such as body mass, height and waist circumference were collected for a better analysis. The development of this filter was based on the Wiener filter, used to produce an estimate of a random process. The Wiener filter minimizes the mean square error between the estimated random process and the desired process. The LMS algorithm was also studied for the development of the filter because of its simplicity and ease of computation. Excellent results were obtained with the developed filter; these results were analyzed and compared with the collected data.
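
    The LMS update at the core of such a filter is compact enough to show in full; the sketch below is a generic LMS regression on hypothetical anthropometric features, not the authors' tuned model.

        import numpy as np

        def lms_fit(X, d, mu=0.01, epochs=50):
            """Least-mean-squares adaptation of linear weights w so that X @ w
            approximates the desired signal d (here, percentage body fat).

            mu is the step size; the update w += mu * e * x descends the
            instantaneous squared-error surface, approximating the Wiener solution.
            """
            X = np.asarray(X, dtype=float)
            d = np.asarray(d, dtype=float)
            w = np.zeros(X.shape[1])
            for _ in range(epochs):
                for x, target in zip(X, d):
                    e = target - np.dot(w, x)   # instantaneous error
                    w = w + mu * e * x          # LMS weight update
            return w

        # Hypothetical standardized inputs: [1 (bias), body mass, height, waist].
        X = np.array([[1, 0.2, -0.1, 0.3], [1, -0.5, 0.4, -0.6], [1, 1.1, 0.2, 0.9]])
        body_fat_percent = np.array([24.0, 15.0, 31.0])
        print(lms_fit(X, body_fat_percent, mu=0.05, epochs=200))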

  13. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  14. Planck absolute entropy of a rotating BTZ black hole

    NASA Astrophysics Data System (ADS)

    Riaz, S. M. Jawwad

    2018-04-01

    In this paper, the Planck absolute entropy and the Bekenstein-Smarr formula of the rotating Banados-Teitelboim-Zanelli (BTZ) black hole are presented via a complex thermodynamical system contributed by its inner and outer horizons. The redefined entropy approaches zero as the temperature of the rotating BTZ black hole tends to absolute zero, satisfying the Nernst formulation of a black hole. Hence, it can be regarded as the Planck absolute entropy of the rotating BTZ black hole.

  15. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  16. Error Analysis Of Students Working About Word Problem Of Linear Program With NEA Procedure

    NASA Astrophysics Data System (ADS)

    Santoso, D. A.; Farid, A.; Ulum, B.

    2017-06-01

    Evaluation and assessment are an important part of learning. In the evaluation of learning, written tests are still commonly used. However, these tests are usually not followed up by further evaluation. The process often stops at the grading stage and does not examine the processes and errors of individual students, whereas if a student's error patterns and process errors are known, corrective actions can be focused on the fault and on why it happened. The NEA procedure provides a way for educators to evaluate student progress more comprehensively. In this study, students' mistakes in working on word problems about linear programming have been analyzed. As a result, the mistakes students make most often occur in the modeling (transformation) phase and in process skills, with overall percentage distributions of 20% and 15%, respectively. According to the observations, these errors occur most commonly due to students' lack of precision in modeling and to hasty calculation. Through this error analysis, educators are expected to be able to determine or use the right approach to address these problems in subsequent lessons.

  17. On the correct representation of bending and axial deformation in the absolute nodal coordinate formulation with an elastic line approach

    NASA Astrophysics Data System (ADS)

    Gerstmayr, Johannes; Irschik, Hans

    2008-12-01

    In finite element methods that are based on position and slope coordinates, a representation of axial and bending deformation by means of an elastic line approach has become popular. Such beam and plate formulations based on the so-called absolute nodal coordinate formulation have not yet been sufficiently verified with respect to analytical results or classical nonlinear rod theories. Examining the existing planar absolute nodal coordinate element, which uses a curvature-proportional bending strain expression, it turns out that the deformation does not fully agree with the solution of the geometrically exact theory and, even more serious, the normal force is incorrect. A correction based on the classical ideas of the extensible elastica and geometrically exact theories is applied, and consistent strain energy and bending moment relations are derived. The strain energy of the solid finite element formulation of the absolute nodal coordinate beam is based on the St. Venant-Kirchhoff material; therefore, the strain energy is derived for the latter case and compared to classical nonlinear rod theories. The error in the original absolute nodal coordinate formulation is documented by numerical examples. The numerical example of a large deformation cantilever beam shows that the normal force is incorrect when using the previous approach, while perfect agreement between the absolute nodal coordinate formulation and the extensible elastica is obtained when applying the proposed modifications. The numerical examples show very good agreement of reference analytical and numerical solutions with the solutions of the proposed beam formulation for the case of large-deformation pre-curved static and dynamic problems, including buckling and eigenvalue analysis. The resulting beam formulation does not employ rotational degrees of freedom and therefore has advantages compared to classical beam elements regarding energy-momentum conservation.

  18. The rate of cis-trans conformation errors is increasing in low-resolution crystal structures.

    PubMed

    Croll, Tristan Ian

    2015-03-01

    Cis-peptide bonds (with the exception of X-Pro) are exceedingly rare in native protein structures, yet a check for these is not currently included in the standard workflow for some common crystallography packages nor in the automated quality checks that are applied during submission to the Protein Data Bank. This appears to be leading to a growing rate of inclusion of spurious cis-peptide bonds in low-resolution structures both in absolute terms and as a fraction of solved residues. Most concerningly, it is possible for structures to contain very large numbers (>1%) of spurious cis-peptide bonds while still achieving excellent quality reports from MolProbity, leading to concerns that ignoring such errors is allowing software to overfit maps without producing telltale errors in, for example, the Ramachandran plot.

  19. 7 CFR 930.8 - Free market tonnage percentage cherries.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Free market tonnage percentage cherries. 930.8 Section 930.8 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING... Order Regulating Handling Definitions § 930.8 Free market tonnage percentage cherries. Free market...

  20. 26 CFR 1.613-1 - Percentage depletion; general rule.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 7 2011-04-01 2009-04-01 true Percentage depletion; general rule. 1.613-1 Section 1.613-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Natural Resources § 1.613-1 Percentage depletion; general...

  1. Reduction in specimen labeling errors after implementation of a positive patient identification system in phlebotomy.

    PubMed

    Morrison, Aileen P; Tanasijevic, Milenko J; Goonan, Ellen M; Lobo, Margaret M; Bates, Michael M; Lipsitz, Stuart R; Bates, David W; Melanson, Stacy E F

    2010-06-01

    Ensuring accurate patient identification is central to preventing medical errors, but it can be challenging. We implemented a bar code-based positive patient identification system for use in inpatient phlebotomy. A before-after design was used to evaluate the impact of the identification system on the frequency of mislabeled and unlabeled samples reported in our laboratory. Labeling errors fell from 5.45 in 10,000 before implementation to 3.2 in 10,000 afterward (P = .0013). An estimated 108 mislabeling events were prevented by the identification system in 1 year. Furthermore, a workflow step requiring manual preprinting of labels, which was accompanied by potential labeling errors in about one quarter of blood "draws," was removed as a result of the new system. After implementation, a higher percentage of patients reported having their wristband checked before phlebotomy. Bar code technology significantly reduced the rate of specimen identification errors.

  2. Potential errors in optical density measurements due to scanning side in EBT and EBT2 Gafchromic film dosimetry.

    PubMed

    Desroches, Joannie; Bouchard, Hugo; Lacroix, Frédéric

    2010-04-01

    The purpose of this study is to determine the effect on the measured optical density of scanning on either side of a Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat bed scanner. Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate potential errors of 17.8% by inverting the film scanning side on the gamma index for 3%-3 mm criteria on a head and neck intensity modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.

  3. Novalis' Poetic Uncertainty: A "Bildung" with the Absolute

    ERIC Educational Resources Information Center

    Mika, Carl

    2016-01-01

    Novalis, the Early German Romantic poet and philosopher, had at the core of his work a mysterious depiction of the "absolute." The absolute is Novalis' name for a substance that defies precise knowledge yet calls for a tentative and sensitive speculation. How one asserts a truth, represents an object, and sets about encountering things…

  4. Population-based absolute risk estimation with survey data

    PubMed Central

    Kovalchik, Stephanie A.; Pfeiffer, Ruth M.

    2013-01-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614
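
    A minimal numerical sketch of the absolute-risk quantity defined above, assuming piecewise-constant cause-specific hazards with one competing event; the hazard values and interval breaks are hypothetical, and covariates, survey weights, and variance estimation are omitted:

    ```python
    import numpy as np

    def absolute_risk(cause_hazards, competing_hazards, breaks):
        """Absolute risk of the cause-specific event over (0, breaks[-1]] under a
        piecewise-exponential model with a single competing event.

        cause_hazards, competing_hazards : per-interval constant hazards.
        breaks : interval endpoints, starting at 0.
        """
        risk, cum_hazard = 0.0, 0.0
        for lam1, lam2, t0, t1 in zip(cause_hazards, competing_hazards,
                                      breaks[:-1], breaks[1:]):
            total = lam1 + lam2
            # P(cause-1 event in (t0, t1] and event-free up to t0)
            risk += np.exp(-cum_hazard) * (lam1 / total) * (1.0 - np.exp(-total * (t1 - t0)))
            cum_hazard += total * (t1 - t0)
        return risk

    # Hypothetical yearly hazards for cardiovascular vs. cancer death over 10 years.
    print(absolute_risk([0.002, 0.004], [0.003, 0.005], [0.0, 5.0, 10.0]))
    ```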

  5. Absolute marine gravimetry with matter-wave interferometry.

    PubMed

    Bidel, Y; Zahzam, N; Blanchard, C; Bonnin, A; Cadoret, M; Bresson, A; Rouxel, D; Lequentrec-Lalancette, M F

    2018-02-12

    Measuring gravity from an aircraft or a ship is essential in geodesy, geophysics, mineral and hydrocarbon exploration, and navigation. Today, only relative sensors are available for onboard gravimetry. This is a major drawback because of the calibration and drift estimation procedures which lead to important operational constraints. Atom interferometry is a promising technology to obtain onboard absolute gravimeter. But, despite high performances obtained in static condition, no precise measurements were reported in dynamic. Here, we present absolute gravity measurements from a ship with a sensor based on atom interferometry. Despite rough sea conditions, we obtained precision below 10 -5  m s -2 . The atom gravimeter was also compared with a commercial spring gravimeter and showed better performances. This demonstration opens the way to the next generation of inertial sensors (accelerometer, gyroscope) based on atom interferometry which should provide high-precision absolute measurements from a moving platform.

  6. Automated documentation error detection and notification improves anesthesia billing performance.

    PubMed

    Spring, Stephen F; Sandberg, Warren S; Anupama, Shaji; Walsh, John L; Driscoll, William D; Raines, Douglas E

    2007-01-01

    Documentation of key times and events is required to obtain reimbursement for anesthesia services. The authors installed an information management system to improve record keeping and billing performance but found that a significant number of their records still could not be billed in a timely manner, and some records were never billed at all because they contained documentation errors. Computer software was developed that automatically examines electronic anesthetic records and alerts clinicians to documentation errors by alphanumeric page and e-mail. The software's efficacy was determined retrospectively by comparing billing performance before and after its implementation. Staff satisfaction with the software was assessed by survey. After implementation of this software, the percentage of anesthetic records that could never be billed declined from 1.31% to 0.04%, and the median time to correct documentation errors decreased from 33 days to 3 days. The average time to release an anesthetic record to the billing service decreased from 3.0+/-0.1 days to 1.1+/-0.2 days. More than 90% of staff found the system to be helpful and easier to use than the previous manual process for error detection and notification. This system allowed the authors to reduce the median time to correct documentation errors and the number of anesthetic records that were never billed by at least an order of magnitude. The authors estimate that these improvements increased their department's revenue by approximately $400,000 per year.

  7. Absolute emission cross sections for electron capture reactions of C2+, N3+, N4+ and O3+ ions in collisions with Li(2s) atoms

    NASA Astrophysics Data System (ADS)

    Rieger, G.; Pinnington, E. H.; Ciubotariu, C.

    2000-12-01

    Absolute photon emission cross sections following electron capture reactions have been measured for C2+, N3+, N4+ and O3+ ions colliding with Li(2s) atoms at keV energies. The results are compared with calculations using the extended classical over-the-barrier model by Niehaus. We explore the limits of our experimental method and present a detailed discussion of experimental errors.

  8. Spelling errors among children with ADHD symptoms: the role of working memory.

    PubMed

    Re, Anna Maria; Mirandola, Chiara; Esposito, Stefania Sara; Capodieci, Agnese

    2014-09-01

    Research has shown that children with attention deficit/hyperactivity disorder (ADHD) may present a series of academic difficulties, including spelling errors. Given that correct spelling is supported by the phonological component of working memory (PWM), the present study examined whether or not the spelling difficulties of children with ADHD are emphasized when children's PWM is overloaded. A group of 19 children with ADHD symptoms (between 8 and 11 years of age), and a group of typically developing children matched for age, schooling, gender, rated intellectual abilities, and socioeconomic status, were administered two dictation texts: one under typical conditions and one under a pre-load condition that required the participants to remember a series of digits while writing. The results confirmed that children with ADHD symptoms have spelling difficulties, produce a higher percentages of errors compared to the control group children, and that these difficulties are enhanced under a higher load of PWM. An analysis of errors showed that this holds true, especially for phonological errors. The increased errors in the PWM condition was not due to a tradeoff between working memory and writing, as children with ADHD also performed more poorly in the PWM task. The theoretical and practical implications are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. A nonlinear model of gold production in Malaysia

    NASA Astrophysics Data System (ADS)

    Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi

    2014-06-01

    Malaysia is a country which is rich in natural resources, and one of them is gold. Gold has already become an important national commodity. This study is conducted to determine a model that fits well the gold production in Malaysia from 1995 to 2010. Five nonlinear models are presented in this study: the Logistic, Gompertz, Richards, Weibull and Chapman-Richards models. These models are used to fit the cumulative gold production in Malaysia. The best model is then selected based on model performance. The performance of the fitted model is measured by the sum of squared errors, root mean square error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. This study has found that the Weibull model significantly outperforms the other models. To confirm that Weibull is the best model, the latest data are fitted to the model. Once again, the Weibull model gives the lowest readings for all types of measurement error. We can conclude that future gold production in Malaysia can be predicted according to the Weibull model, and this could be an important finding for Malaysia in planning its economic activities.
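
    A minimal sketch of this kind of growth-curve fitting and error scoring, assuming one common Weibull-type parameterization and synthetic data in place of the Malaysian production series:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_growth(t, alpha, beta, gamma):
        """One common Weibull-type growth curve: alpha * (1 - exp(-(t/beta)**gamma))."""
        return alpha * (1.0 - np.exp(-((t / beta) ** gamma)))

    def fit_errors(y, y_hat):
        """RMSE, MAE and MAPE (%) of a fitted curve, as used to rank the models."""
        resid = y - y_hat
        return (np.sqrt(np.mean(resid ** 2)),
                np.mean(np.abs(resid)),
                100.0 * np.mean(np.abs(resid / y)))

    # Synthetic cumulative production series standing in for the 1995-2010 data.
    t = np.arange(1, 17, dtype=float)
    y = 120.0 * (1.0 - np.exp(-((t / 8.0) ** 1.6))) + np.random.default_rng(1).normal(0.0, 1.5, t.size)

    params, _ = curve_fit(weibull_growth, t, y, p0=[100.0, 5.0, 1.0])
    rmse, mae, mape = fit_errors(y, weibull_growth(t, *params))
    print(f"RMSE={rmse:.2f}  MAE={mae:.2f}  MAPE={mape:.2f}%")
    ```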

  10. The absolute dynamic ocean topography (ADOT)

    NASA Astrophysics Data System (ADS)

    Bosch, Wolfgang; Savcenko, Roman

    The sea surface slopes relative to the geoid (an equipotential surface) basically carry the information on the absolute velocity field of the surface circulation. Pure oceanographic models may remain unspecific with respect to the absolute level of the ocean topography. In contrast, the geodetic approach to estimate the ocean topography as the difference between sea level and the geoid gives by definition an absolute dynamic ocean topography (ADOT). This approach requires, however, a consistent treatment of geoid and sea surface heights, the first being usually derived from a band-limited spherical harmonic series of the Earth gravity field and the second observed with much higher spectral resolution by satellite altimetry. The present contribution shows a procedure for estimating the ADOT along the altimeter profiles, preserving as much sea surface height detail as the consistency with the geoid heights will allow. The consistent treatment at data gaps and at the coast is particularly demanding and is solved by a filter correction. The ADOT profiles are inspected for their properties towards the coast and compared to external estimates of the ocean topography or the velocity field of the surface circulation as derived, for example, by ARGO floats.

  11. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Absolute configuration of a chiral CHD group via neutron diffraction: confirmation of the absolute stereochemistry of the enzymatic formation of malic acid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bau, R.; Brewer, I.; Chiang, M.Y.

    Neutron diffraction has been used to monitor the absolute stereochemistry of an enzymatic reaction. (-)(2S)malic-3-d acid was prepared by the action of fumarase on fumaric acid in D₂O. After a large number of cations were screened, it was found that (+)(R)-α-phenylethylamine forms the large crystals necessary for a neutron diffraction analysis. The subsequent structure determination showed that (+)(R)-α-phenylethylammonium (-)(2S)malate-3-d has an absolute configuration of R at the CHD site. This result confirms the absolute stereochemistry of fumarate-to-malate transformation as catalyzed by the enzyme fumarase.

  13. 26 CFR 1.613-1 - Percentage depletion; general rule.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 7 2010-04-01 2010-04-01 true Percentage depletion; general rule. 1.613-1... TAX (CONTINUED) INCOME TAXES (CONTINUED) Natural Resources § 1.613-1 Percentage depletion; general rule. (a) In general. In the case of a taxpayer computing the deduction for depletion under section 611...

  14. Absolute calibration of sniffer probes on Wendelstein 7-X

    NASA Astrophysics Data System (ADS)

    Moseev, D.; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.; Oosterbeek, J. W.

    2016-08-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m2 per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m2 per MW injected beam power is measured.

  15. Absolute calibration of sniffer probes on Wendelstein 7-X.

    PubMed

    Moseev, D; Laqua, H P; Marsen, S; Stange, T; Braune, H; Erckmann, V; Gellert, F; Oosterbeek, J W

    2016-08-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m(2) per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m(2) per MW injected beam power is measured.

  16. Preparation of an oakmoss absolute with reduced allergenic potential.

    PubMed

    Ehret, C; Maupetit, P; Petrzilka, M; Klecak, G

    1992-06-01

    Oakmoss absolute, an extract of the lichen Evernia prunastri, is known to cause allergenic skin reactions due to the presence of certain aromatic aldehydes such as atranorin, chloratranorin, ethyl hematommate and ethyl chlorohematommate. In this paper it is shown that treatment of Oakmoss absolute with amino acids such as lysine and/or leucine, lowers considerably the content of these allergenic constituents including atranol and chloratranol. The resulting Oakmoss absolute, which exhibits an excellent olfactive quality, was tested extensively in comparative studies on guinea pigs and on man. The results of the Guinea Pig Maximization Test (GPMT) and Human Repeated Insult Patch Test (HRIPT) indicate that, in comparison with the commercial test sample, the allergenicity of this new quality of Oakmoss absolute was considerably reduced, and consequently better skin tolerance of this fragrance for man was achieved.

  17. Physics of negative absolute temperatures.

    PubMed

    Abraham, Eitan; Penrose, Oliver

    2017-01-01

    Negative absolute temperatures were introduced into experimental physics by Purcell and Pound, who successfully applied this concept to nuclear spins; nevertheless, the concept has proved controversial: a recent article aroused considerable interest by its claim, based on a classical entropy formula (the "volume entropy") due to Gibbs, that negative temperatures violated basic principles of statistical thermodynamics. Here we give a thermodynamic analysis that confirms the negative-temperature interpretation of the Purcell-Pound experiments. We also examine the principal arguments that have been advanced against the negative temperature concept; we find that these arguments are not logically compelling, and moreover that the underlying "volume" entropy formula leads to predictions inconsistent with existing experimental results on nuclear spins. We conclude that, despite the counterarguments, negative absolute temperatures make good theoretical sense and did occur in the experiments designed to produce them.

  18. Noncircular features in Saturn's rings IV: Absolute radius scale and Saturn's pole direction

    NASA Astrophysics Data System (ADS)

    French, Richard G.; McGhee-French, Colleen A.; Lonergan, Katherine; Sepersky, Talia; Jacobson, Robert A.; Nicholson, Philip D.; Hedman, Mathew M.; Marouf, Essam A.; Colwell, Joshua E.

    2017-07-01

    We present a comprehensive solution for the geometry of Saturn's ring system, based on orbital fits to an extensive set of occultation observations of 122 individual ring edges and gaps. We begin with a restricted set of very high quality Cassini VIMS, UVIS, and RSS measurements for quasi-circular features in the C and B rings and the Cassini Division, and then successively add suitably weighted additional Cassini and historical occultation measurements (from Voyager, HST and the widely-observed 28 Sgr occultation of 3 Jul 1989) for additional non-circular features, to derive an absolute radius scale applicable across the entire classical ring system. As part of our adopted solution, we determine first-order corrections to the spacecraft trajectories used to determine the geometry of individual occultation chords. We adopt a simple linear model for Saturn's precession, and our favored solution yields a precession rate on the sky of ṅ_P = 0.207 ± 0.006″ yr⁻¹, equivalent to an angular rate of polar motion Ω_P = 0.451 ± 0.014″ yr⁻¹. The 3% formal uncertainty in the fitted precession rate is approaching the point where it can provide a useful constraint on models of Saturn's interior, although realistic errors are likely to be larger, given the linear approximation of the precession model and possible unmodeled systematic errors in the spacecraft ephemerides. Our results are largely consistent with independent estimates of the precession rate based on historical RPX times (Nicholson et al., 1999 AAS/Division for Planetary Sciences Meeting Abstracts #31 31, 44.01) and from theoretical expectations that account for Titan's 700-yr precession period (Vienne and Duriez 1992, Astronomy and Astrophysics 257, 331-352). The fitted precession rate based on Cassini data only is somewhat lower, which may be an indication of unmodeled shorter term contributions to Saturn's polar motion from other satellites, or perhaps the result of inconsistencies in the assumed

  19. [Value of the tritium test for determining the fat content in the body of rats].

    PubMed

    Pisarchuk, K L

    1990-01-01

    An indirect method for estimation of the fat percentage in the animal organism, the tritium test, was studied on laboratory male rats aged 4 and 12 months. Results obtained from the tritium test and direct chemical analysis were compared. With age, the mean absolute error of the tritium test increased (from 1 to 8%) relative to the actual values of the water and fat percentage in the organism obtained by direct chemical analysis. The data obtained attest to the limited reliability of the tritium test, as well as the need to carry out additional investigations in order to obtain adequate data.

  20. 12 CFR 226.14 - Determination of annual percentage rate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... percentage point above or below the annual percentage rate determined in accordance with this section. 31a... finance charge for the billing cycle by the sum of the balances to which the periodic rates were applied... of the balance(s) to which it is applicable 32 and multiplying the quotient (expressed as a...

  1. Using Modeling Tasks to Facilitate the Development of Percentages

    ERIC Educational Resources Information Center

    Shahbari, Juhaina Awawdeh; Peled, Irit

    2016-01-01

    This study analyzes the development of percentages knowledge by seventh graders given a sequence of activities starting with a realistic modeling task, in which students were expected to create a model that would facilitate the reinvention of percentages. In the first two activities, students constructed their own pricing model using fractions and…

  2. 14 CFR 1300.14 - Guarantee percentage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Guarantee percentage. 1300.14 Section 1300.14 Aeronautics and Space AIR TRANSPORTATION SYSTEM STABILIZATION OFFICE OF MANAGEMENT AND BUDGET AVIATION DISASTER RELIEF-AIR CARRIER GUARANTEE LOAN PROGRAM Minimum Requirements and Application Procedures...

  3. Combating omission errors through task analysis and good reminders.

    PubMed

    Reason, J

    2002-03-01

    Leaving out necessary task steps is the single most common human error type. Certain task steps possess characteristics that are more likely to provoke omissions than others, and can be identified in advance. The paper reports two studies. The first, involving a simple photocopier, established that failing to remove the last page of the original is the commonest omission. This step possesses four distinct error-provoking features that combine their effects in an additive fashion. The second study examined the degree to which everyday memory aids satisfy five features of a good reminder: conspicuity, contiguity, content, context, and countability. A close correspondence was found between the percentage use of strategies and the degree to which they satisfied these five criteria. A three stage omission management programme was outlined: task analysis (identifying discrete task steps) of some safety critical activity; assessing the omission likelihood of each step; and the choice and application of a suitable reminder. Such a programme is applicable to a variety of healthcare procedures.

  4. Identification of Hierarchies of Student Learning about Percentages Using Rasch Analysis

    ERIC Educational Resources Information Center

    Burfitt, Joan

    2013-01-01

    A review of the research literature indicated that there were probable orders in which students develop understandings and skills for calculating with percentages. Such calculations might include using models to represent percentages, knowing fraction equivalents, selection of strategies to solve problems and determination of percentage change. To…

  5. Coordinated joint motion control system with position error correction

    DOEpatents

    Danko, George L.

    2016-04-05

    Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

  6. Coordinated joint motion control system with position error correction

    DOEpatents

    Danko, George [Reno, NV

    2011-11-22

    Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

  7. Reader variability in breast density estimation from full-field digital mammograms: the effect of image postprocessing on relative and absolute measures.

    PubMed

    Keller, Brad M; Nathan, Diane L; Gavenonis, Sara C; Chen, Jinbo; Conant, Emily F; Kontos, Despina

    2013-05-01

    Mammographic breast density, a strong risk factor for breast cancer, may be measured as either a relative percentage of dense (ie, radiopaque) breast tissue or as an absolute area from either raw (ie, "for processing") or vendor postprocessed (ie, "for presentation") digital mammograms. Given the increasing interest in the incorporation of mammographic density in breast cancer risk assessment, the purpose of this study is to determine the inherent reader variability in breast density assessment from raw and vendor-processed digital mammograms, because inconsistent estimates could lead to misclassification of an individual woman's risk for breast cancer. Bilateral, mediolateral-oblique view, raw, and processed digital mammograms of 81 women were retrospectively collected for this study (N = 324 images). Mammographic percent density and absolute dense tissue area estimates for each image were obtained from two radiologists using a validated, interactive software tool. The variability of interreader agreement was not found to be affected by the image presentation style (ie, raw or processed, F-test: P > .5). Interreader estimates of relative and absolute breast density are strongly correlated (Pearson r > 0.84, P < .001) but systematically different (t-test, P < .001) between the two readers. Our results show that mammographic density may be assessed with equal reliability from either raw or vendor postprocessed images. Furthermore, our results suggest that the primary source of density variability comes from the subjectivity of the individual reader in assessing the absolute amount of dense tissue present in the breast, indicating the need to use standardized tools to mitigate this effect. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.

  8. Percentage entrainment of constituent loads in urban runoff, south Florida

    USGS Publications Warehouse

    Miller, R.A.

    1985-01-01

    Runoff quantity and quality data from four urban basins in south Florida were analyzed to determine the entrainment of total nitrogen, total phosphorus, total carbon, chemical oxygen demand, suspended solids, and total lead within the stormwater runoff. Land uses of the homogeneously developed basins are residential (single family), highway, commercial, and apartment (multifamily). A computational procedure was used to calculate, for all storms that had water-quality data, the percentage of constituent load entrainment in specified depths of runoff. The plot of percentage of constituent load entrained as a function of runoff is termed the percentage-entrainment curve. Percentage-entrainment curves were developed for three different source areas of basin runoff: (1) the hydraulically effective impervious area, (2) the contributing area, and (3) the drainage area. With basin runoff expressed in inches over the contributing area, the depth of runoff required to remove 90 percent of the constituent load ranged from about 0.4 inch to about 1.4 inches; and to remove 80 percent, from about 0.3 to 0.9 inch. Analysis of variance, using depth of runoff from the contributing area as the response variable, showed that the factor 'basin' is statistically significant, but that the factor 'constituent' is not statistically significant in the forming of the percentage-entrainment curve. Evidently the sewerage design, whether elongated or concise in plan, dictates the shape of the percentage-entrainment curve. The percentage-entrainment curves for all constituents were averaged for each basin and plotted against basin runoff for three source areas of runoff-the hydraulically effective impervious area, the contributing area, and the drainage area. The relative positions of the three curves are directly related to the relative sizes of the three source areas considered. One general percentage-entrainment curve based on runoff from the contributing area was formed by averaging across

  9. Absolute gravimetry for monitoring geodynamics in Greenland.

    NASA Astrophysics Data System (ADS)

    Nielsen, E.; Strykowski, G.; Forsberg, R.

    2015-12-01

    Here we present the preliminary results of the absolute gravity measurements done in Greenland by DTU Space with their A10 absolute gravimeter (the A10-019). The purpose, besides establishing and maintaining a national gravity network, is to study geodynamics. The absolute gravity measurements are juxtaposed with the permanent GNET GNSS stations. The first measurements were conducted in 2009 and a few sites have been re-visited. At present there is a gravity value at 18 GNET sites. There are challenges in interpreting the measurements from Greenland and several signals have to be taken into account: besides the geodynamical signals originating from the changing load of the ice, there is also a clear signal of direct attraction from different masses. Here we present the preliminary results of our measurements in Greenland and attempt to explain them through modelling of the geodynamical signals and the direct attraction from the ocean and ice.

  10. 27 CFR 5.40 - Statements of age and percentage.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Distilled Spirits § 5.40 Statements of age and percentage. (a) Statements of age and percentage for whisky. In the case of straight whisky bottled in conformity with the bottled in bond labeling requirements and of domestic or foreign whisky, whether or not mixed or blended, all of which is 4 years old or...

  11. 27 CFR 5.40 - Statements of age and percentage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Distilled Spirits § 5.40 Statements of age and percentage. (a) Statements of age and percentage for whisky. In the case of straight whisky bottled in conformity with the bottled in bond labeling requirements and of domestic or foreign whisky, whether or not mixed or blended, all of which is 4 years old or...

  12. 7 CFR 4280.126 - Guarantee/annual renewal fee percentages.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Guarantee/annual renewal fee percentages. 4280.126... renewal fee percentages. (a) Fee ceilings. The maximum guarantee fee that may be charged is 1 percent. The maximum annual renewal fee that may be charged is 0.5 percent. The Agency will establish each year the...

  13. ERROR REDUCTION IN DUCT LEAKAGE TESTING THROUGH DATA CROSS-CHECKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ANDREWS, J.W.

    1998-12-31

    One way to reduce uncertainty in scientific measurement is to devise a protocol in which more quantities are measured than are absolutely required, so that the result is over-constrained. This report develops a method for so combining data from two different tests for air leakage in residential duct systems. An algorithm, which depends on the uncertainty estimates for the measured quantities, optimizes the use of the excess data. In many cases it can significantly reduce the error bar on at least one of the two measured duct leakage rates (supply or return), and it provides a rational method of reconciling any conflicting results from the two leakage tests.
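
    The report's own algorithm is not reproduced here; as a generic illustration of exploiting an over-constrained measurement with stated uncertainties, the sketch below combines two independent estimates of the same leakage rate by inverse-variance weighting (all numbers are hypothetical):

    ```python
    def combine_estimates(x1, sigma1, x2, sigma2):
        """Inverse-variance weighted combination of two measurements of the same
        quantity; returns the combined value and its standard error."""
        w1, w2 = 1.0 / sigma1 ** 2, 1.0 / sigma2 ** 2
        combined = (w1 * x1 + w2 * x2) / (w1 + w2)
        return combined, (w1 + w2) ** -0.5

    # Hypothetical supply-leakage estimates (cfm) from two different test protocols.
    value, err = combine_estimates(120.0, 25.0, 105.0, 15.0)
    print(f"combined leakage estimate: {value:.0f} +/- {err:.0f} cfm")
    ```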

  14. Correcting electrode modelling errors in EIT on realistic 3D head models.

    PubMed

    Jehl, Markus; Avery, James; Malone, Emma; Holder, David; Betcke, Timo

    2015-12-01

    Electrical impedance tomography (EIT) is a promising medical imaging technique which could aid differentiation of haemorrhagic from ischaemic stroke in an ambulance. One challenge in EIT is the ill-posed nature of the image reconstruction, i.e., that small measurement or modelling errors can result in large image artefacts. It is therefore important that reconstruction algorithms are improved with regard to stability to modelling errors. We identify that wrongly modelled electrode positions constitute one of the biggest sources of image artefacts in head EIT. Therefore, the use of the Fréchet derivative on the electrode boundaries in a realistic three-dimensional head model is investigated, in order to reconstruct electrode movements simultaneously to conductivity changes. We show a fast implementation and analyse the performance of electrode position reconstructions in time-difference and absolute imaging for simulated and experimental voltages. Reconstructing the electrode positions and conductivities simultaneously increased the image quality significantly in the presence of electrode movement.

  15. Strongly nonlinear theory of rapid solidification near absolute stability

    NASA Astrophysics Data System (ADS)

    Kowal, Katarzyna N.; Altieri, Anthony L.; Davis, Stephen H.

    2017-10-01

    We investigate the nonlinear evolution of the morphological deformation of a solid-liquid interface of a binary melt under rapid solidification conditions near two absolute stability limits. The first of these involves the complete stabilization of the system to cellular instabilities as a result of large enough surface energy. We derive nonlinear evolution equations in several limits in this scenario and investigate the effect of interfacial disequilibrium on the nonlinear deformations that arise. In contrast to the morphological stability problem in equilibrium, in which only cellular instabilities appear and only one absolute stability boundary exists, in disequilibrium the system is prone to oscillatory instabilities and a second absolute stability boundary involving attachment kinetics arises. Large enough attachment kinetics stabilize the oscillatory instabilities. We derive a nonlinear evolution equation to describe the nonlinear development of the solid-liquid interface near this oscillatory absolute stability limit. We find that strong asymmetries develop with time. For uniform oscillations, the evolution equation for the interface reduces to the simple form f'' + (βf')² + f = 0, where β is the disequilibrium parameter. Lastly, we investigate a distinguished limit near both absolute stability limits in which the system is prone to both cellular and oscillatory instabilities and derive a nonlinear evolution equation that captures the nonlinear deformations in this limit. Common to all these scenarios is the emergence of larger asymmetries in the resulting shapes of the solid-liquid interface with greater departures from equilibrium and larger morphological numbers. The disturbances additionally sharpen near the oscillatory absolute stability boundary, where the interface becomes deep-rooted. The oscillations are time-periodic only for small-enough initial amplitudes and their frequency depends on a single combination of physical parameters, including the

  16. Automated body weight prediction of dairy cows using 3-dimensional vision.

    PubMed

    Song, X; Bokkers, E A M; van der Tol, P P J; Groot Koerkamp, P W G; van Mourik, S

    2018-05-01

    The objectives of this study were to quantify the error of body weight prediction using automatically measured morphological traits in a 3-dimensional (3-D) vision system and to assess the influence of various sources of uncertainty on body weight prediction. In this case study, an image acquisition setup was created in a cow selection box equipped with a top-view 3-D camera. Morphological traits of hip height, hip width, and rump length were automatically extracted from the raw 3-D images taken of the rump area of dairy cows (n = 30). These traits combined with days in milk, age, and parity were used in multiple linear regression models to predict body weight. To find the best prediction model, an exhaustive feature selection algorithm was used to build intermediate models (n = 63). Each model was validated by leave-one-out cross-validation, giving the root mean square error and mean absolute percentage error. The model consisting of hip width (measurement variability of 0.006 m), days in milk, and parity was the best model, with the lowest errors of 41.2 kg of root mean square error and 5.2% mean absolute percentage error. Our integrated system, including the image acquisition setup, image analysis, and the best prediction model, predicted the body weights with a performance similar to that achieved using semi-automated or manual methods. Moreover, the variability of our simplified morphological trait measurement showed a negligible contribution to the uncertainty of body weight prediction. We suggest that dairy cow body weight prediction can be improved by incorporating more predictive morphological traits and by improving the prediction model structure. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
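
    A minimal sketch of the modelling step described above (multiple linear regression scored by leave-one-out cross-validation with RMSE and MAPE); the simulated traits and coefficients are assumptions, not the study's data:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    # Hypothetical data for 30 cows: hip width (m), days in milk, parity -> weight (kg).
    rng = np.random.default_rng(2)
    n = 30
    X = np.column_stack([rng.normal(0.55, 0.03, n),      # hip width
                         rng.integers(5, 300, n),        # days in milk
                         rng.integers(1, 5, n)])         # parity
    y = 900.0 * X[:, 0] + 0.1 * X[:, 1] + 20.0 * X[:, 2] + rng.normal(0.0, 30.0, n)

    # Leave-one-out cross-validated predictions, scored by RMSE and MAPE.
    y_hat = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    mape = 100.0 * np.mean(np.abs((y - y_hat) / y))
    print(f"RMSE = {rmse:.1f} kg, MAPE = {mape:.1f}%")
    ```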

  17. Absolute far-ultraviolet spectrophotometry of hot subluminous stars from Voyager

    NASA Technical Reports Server (NTRS)

    Holberg, J. B.; Ali, B.; Carone, T. E.; Polidan, R. S.

    1991-01-01

    Observations, obtained with the Voyager ultraviolet spectrometers, are presented of absolute fluxes for two well-known hot subluminous stars: BD + 28 deg 4211, an sdO, and G191 - B2B, a hot DA white dwarf. Complete absolute energy distributions for these two stars, from the Lyman limit at 912 A to 1 micron, are given. For BD + 28 deg 4211, a single power law closely represents the entire observed energy distribution. For G191 - B2B, a pure hydrogen model atmosphere provides an excellent match to the entire absolute energy distribution. Voyager absolute fluxes are discussed in relation to those reported from various sounding rocket experiments, including a recent rocket observation of BD + 28 deg 4211.

  18. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. surface code. A Gaussian processes algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
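
    As a rough illustration of tracking a drifting error rate with Gaussian-process regression, a sketch on synthetic data; the kernel choice, units, and scikit-learn estimator are assumptions, not the protocol's actual implementation:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical history of estimated error rates (in units of 1e-3) drifting in time.
    rng = np.random.default_rng(4)
    t = np.linspace(0.0, 10.0, 40)[:, None]
    rate = 1.0 + 0.3 * np.sin(t.ravel()) + rng.normal(0.0, 0.05, t.shape[0])

    # Fit a GP to the past estimates and extrapolate the error rate with uncertainty.
    kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.01)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, rate)
    t_future = np.linspace(10.0, 12.0, 5)[:, None]
    mean, std = gp.predict(t_future, return_std=True)
    print(np.round(mean, 3), np.round(std, 3))
    ```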

  19. From Hubble's NGSL to Absolute Fluxes

    NASA Technical Reports Server (NTRS)

    Heap, Sara R.; Lindler, Don

    2012-01-01

    Hubble's Next Generation Spectral Library (NGSL) consists of R ~ 1000 spectra of 374 stars of assorted temperature, gravity, and metallicity. Each spectrum covers the wavelength range, 0.18-1.00 microns. The library can be viewed and/or downloaded from the website, http://archive.stsci.edu/prepds/stisngsll. Stars in the NGSL are now being used as absolute flux standards at ground-based observatories. However, the uncertainty in the absolute flux is about 2%, which does not meet the requirements of dark-energy surveys. We are therefore developing an observing procedure that should yield fluxes with uncertainties less than 1% and will take part in an HST proposal to observe up to 15 stars using this new procedure.

  20. Error and Error Mitigation in Low-Coverage Genome Assemblies

    PubMed Central

    Hubisz, Melissa J.; Lin, Michael F.; Kellis, Manolis; Siepel, Adam

    2011-01-01

    The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ∼2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1–4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download. PMID:21340033

  1. A NEW METHOD TO QUANTIFY AND REDUCE THE NET PROJECTION ERROR IN WHOLE-SOLAR-ACTIVE-REGION PARAMETERS MEASURED FROM VECTOR MAGNETOGRAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L.

    Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter's absolute values, measured from the disk passage of a large number of ARs and normalized to each AR's absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10^22 Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important for both the study of the evolution of ARs and for improving the accuracy of forecasts of an AR's major flare/coronal mass ejection productivity.
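
    A minimal sketch of the correction idea (fit a Chebyshev polynomial to normalized center-to-limb measurements, then multiply measured values by the reciprocal of the fitted curve); the simulated curve, noise level, and polynomial degree are assumptions for illustration:

    ```python
    import numpy as np

    # Hypothetical normalized whole-AR flux vs. radial distance from disk center
    # (in units of the solar radius); 1.0 would mean no projection error.
    rng = np.random.default_rng(3)
    r = rng.uniform(0.0, 0.87, 2000)
    ratio = 1.0 - 0.25 * r ** 2 + rng.normal(0.0, 0.05, r.size)

    # Chebyshev fit to the center-to-limb curve of the normalized measurements.
    cheb = np.polynomial.Chebyshev.fit(r, ratio, deg=4)

    def corrected_flux(measured_flux, radial_distance):
        """Multiply by the correction factor (the reciprocal of the fitted curve)."""
        return measured_flux * (1.0 / cheb(radial_distance))

    print(f"correction factor at r = 0.8: {1.0 / cheb(0.8):.2f}")
    ```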

  2. 13 CFR 400.203 - Guarantee percentage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Section 400.203 Business Credit and Assistance EMERGENCY STEEL GUARANTEE LOAN BOARD EMERGENCY STEEL GUARANTEE LOAN PROGRAM Steel Guarantee Loans § 400.203 Guarantee percentage. A guarantee issued by the Board may not exceed 85 percent of the amount of the principal of a loan to a Qualified Steel Company...

  3. 13 CFR 400.203 - Guarantee percentage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Section 400.203 Business Credit and Assistance EMERGENCY STEEL GUARANTEE LOAN BOARD EMERGENCY STEEL GUARANTEE LOAN PROGRAM Steel Guarantee Loans § 400.203 Guarantee percentage. A guarantee issued by the Board may not exceed 85 percent of the amount of the principal of a loan to a Qualified Steel Company...

  4. 13 CFR 400.203 - Guarantee percentage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Section 400.203 Business Credit and Assistance EMERGENCY STEEL GUARANTEE LOAN BOARD EMERGENCY STEEL GUARANTEE LOAN PROGRAM Steel Guarantee Loans § 400.203 Guarantee percentage. A guarantee issued by the Board may not exceed 85 percent of the amount of the principal of a loan to a Qualified Steel Company...

  5. 13 CFR 400.203 - Guarantee percentage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Section 400.203 Business Credit and Assistance EMERGENCY STEEL GUARANTEE LOAN BOARD EMERGENCY STEEL GUARANTEE LOAN PROGRAM Steel Guarantee Loans § 400.203 Guarantee percentage. A guarantee issued by the Board may not exceed 85 percent of the amount of the principal of a loan to a Qualified Steel Company...

  6. 13 CFR 400.203 - Guarantee percentage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Section 400.203 Business Credit and Assistance EMERGENCY STEEL GUARANTEE LOAN BOARD EMERGENCY STEEL GUARANTEE LOAN PROGRAM Steel Guarantee Loans § 400.203 Guarantee percentage. A guarantee issued by the Board may not exceed 85 percent of the amount of the principal of a loan to a Qualified Steel Company...

  7. Serum osteopontin concentration is decreased by exercise-induced fat loss but is not correlated with body fat percentage in obese humans.

    PubMed

    You, Jeong Soon; Ji, Hye-In; Chang, Kyung Ja; Yoo, Myung Chul; Yang, Hyung-In; Jeong, In-Kyung; Kim, Kyoung Soo

    2013-08-01

    To evaluate the extent to which fat mass contributes to serum osteopontin (OPN) concentration, we investigated whether serum OPN levels are decreased by exercise-induced fat mass loss and whether they are associated with body fat percentage in obese humans. Twenty‑three female college students were recruited to participate in an 8‑week body weight control program. Body composition [body weight, soft lean mass, body fat mass, body fat percentage, waist-hip ratio and body mass index (BMI)] was assessed prior to and following the program. Serum lipid profiles and serum adiponectin, leptin and osteopontin levels were measured from serum collected prior to and following the program. To understand the effect of fat mass loss on the serum levels of adipokines, which are mainly produced in adipose tissue, the leptin and adiponectin levels were also measured prior to and following the program. Serum leptin levels (mean ± standard error of the mean) decreased significantly following the program (from 9.82±0.98 to 7.23±0.67 ng/ml) and were closely correlated with body fat percentage. In addition, serum adiponectin levels were negatively correlated with body fat percentage, while serum adiponectin levels were not significantly altered. By contrast, serum OPN levels decreased significantly following the program (from 16.03±2.34 to 10.65±1.22 ng/ml). However, serum OPN levels were not correlated with body fat percentage, suggesting that serum OPN levels are controlled by several other factors in humans. In conclusion, a high expression of OPN in adipose tissues may not be correlated with serum OPN levels in obese humans. Thus, tissues or physiological factors other than fat mass may make a greater contribution to serum OPN levels.

  8. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  9. Probative value of absolute and relative judgments in eyewitness identification.

    PubMed

    Clark, Steven E; Erickson, Michael A; Breneman, Jesse

    2011-10-01

    It is well-accepted that eyewitness identification decisions based on relative judgments are less accurate than identification decisions based on absolute judgments. However, the theoretical foundation for this view has not been established. In this study relative and absolute judgments were compared through simulations of the WITNESS model (Clark, Appl Cogn Psychol 17:629-654, 2003) to address the question: Do suspect identifications based on absolute judgments have higher probative value than suspect identifications based on relative judgments? Simulations of the WITNESS model showed a consistent advantage for absolute judgments over relative judgments for suspect-matched lineups. However, simulations of same-foils lineups showed a complex interaction based on the accuracy of memory and the similarity relationships among lineup members.

  10. Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans.

    PubMed

    Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa

    2016-03-23

    We previously proposed a hybrid model combining both the autoregressive integrated moving average (ARIMA) and the nonlinear autoregressive neural network (NARNN) models in forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County using our ARIMA-NARNN model, thereby further confirming the reliability of our hybrid model. We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling period covered the annual prevalence from 1956 to 2008, while the testing period covered 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure model performance. We reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or NARNN models. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. The ARIMA-NARNN model can be well applied to analyze surveillance data for early warning systems for the control and elimination of schistosomiasis.
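
    A hedged, minimal sketch of the residual-correction idea behind such a hybrid: an ARIMA model captures the linear structure and a small neural network models its residuals from their own lags (standing in here for the NARNN). The order, lag count, and network size below are illustrative assumptions, not the authors' choices.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from statsmodels.tsa.arima.model import ARIMA

    def fit_hybrid(series, order=(1, 1, 1), n_lags=3):
        series = np.asarray(series, dtype=float)
        arima_res = ARIMA(series, order=order).fit()       # linear component
        resid = arima_res.resid                            # what ARIMA could not explain
        # Nonlinear component: predict each residual from its n_lags predecessors.
        X = np.column_stack([resid[i:len(resid) - n_lags + i] for i in range(n_lags)])
        y = resid[n_lags:]
        nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
        return arima_res, nn, resid

    def forecast_next(arima_res, nn, resid, n_lags=3):
        # One-step-ahead hybrid forecast = ARIMA forecast + predicted residual.
        return arima_res.forecast(steps=1)[0] + nn.predict(resid[-n_lags:].reshape(1, -1))[0]
    ```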

  11. Determination of Absolute Zero Using a Computer-Based Laboratory

    ERIC Educational Resources Information Center

    Amrani, D.

    2007-01-01

    We present a simple computer-based laboratory experiment for evaluating absolute zero in degrees Celsius, which can be performed in college and undergraduate physical sciences laboratory courses. With a computer, the absolute zero apparatus can help demonstrators or students observe the relationship between temperature and pressure and use…
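
    The underlying calculation is a straight-line extrapolation of the pressure-temperature data to zero pressure. A minimal sketch with hypothetical readings (the values and units are assumptions, not from the article):

    ```python
    import numpy as np

    temps_c = np.array([0.0, 20.0, 40.0, 60.0, 80.0])            # hypothetical readings, °C
    pressures_kpa = np.array([93.1, 99.9, 106.7, 113.6, 120.4])  # hypothetical readings, kPa

    slope, intercept = np.polyfit(temps_c, pressures_kpa, 1)
    absolute_zero_c = -intercept / slope    # temperature at which the fitted pressure reaches zero
    print(f"Estimated absolute zero: {absolute_zero_c:.1f} °C")  # ≈ -273 °C for ideal-gas-like data
    ```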

  12. Computationally Aided Absolute Stereochemical Determination of Enantioenriched Amines.

    PubMed

    Zhang, Jun; Gholami, Hadi; Ding, Xinliang; Chun, Minji; Vasileiou, Chrysoula; Nehira, Tatsuo; Borhan, Babak

    2017-03-17

    A simple and efficient protocol for sensing the absolute stereochemistry and enantiomeric excess of chiral monoamines is reported. Preparation of the sample requires a single-step reaction of the 1,1'-(bromomethylene)dinaphthalene (BDN) with the chiral amine. Analysis of the exciton coupled circular dichroism generated from the BDN-derivatized chiral amine sample, along with comparison to conformational analysis performed computationally, yields the absolute stereochemistry of the parent chiral monoamine.

  13. Absolute wavelength calibration of a Doppler spectrometer with a custom Fabry-Perot optical system

    NASA Astrophysics Data System (ADS)

    Baltzer, M. M.; Craig, D.; Den Hartog, D. J.; Nishizawa, T.; Nornberg, M. D.

    2016-11-01

    An Ion Doppler Spectrometer (IDS) is used for fast measurements of C VI line emission (343.4 nm) in the Madison Symmetric Torus. Absolutely calibrated flow measurements are difficult because the IDS records data within 0.25 nm of the line. Commercial calibration lamps do not produce lines in this narrow range. A light source using an ultraviolet LED and etalon was designed to provide a fiducial marker 0.08 nm wide. The light is coupled into the IDS at f/4, and a holographic diffuser increases homogeneity of the final image. Random and systematic errors in data analysis were assessed. The calibration is accurate to 0.003 nm, allowing for flow measurements accurate to 3 km/s. This calibration is superior to the previous method which used a time-averaged measurement along a chord believed to have zero net Doppler shift.
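
    As a quick consistency check of the quoted numbers, the nonrelativistic Doppler relation v = c·Δλ/λ converts the stated 0.003 nm calibration accuracy at 343.4 nm into a line-of-sight velocity uncertainty of roughly 3 km/s:

    ```python
    c_km_s = 299_792.458          # speed of light, km/s
    delta_lambda_nm = 0.003       # calibration accuracy quoted above
    lambda_nm = 343.4             # C VI line used by the IDS
    print(c_km_s * delta_lambda_nm / lambda_nm)   # ≈ 2.6 km/s, consistent with "accurate to 3 km/s"
    ```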

  14. Absolute wavelength calibration of a Doppler spectrometer with a custom Fabry-Perot optical system.

    PubMed

    Baltzer, M M; Craig, D; Den Hartog, D J; Nishizawa, T; Nornberg, M D

    2016-11-01

    An Ion Doppler Spectrometer (IDS) is used for fast measurements of C VI line emission (343.4 nm) in the Madison Symmetric Torus. Absolutely calibrated flow measurements are difficult because the IDS records data within 0.25 nm of the line. Commercial calibration lamps do not produce lines in this narrow range. A light source using an ultraviolet LED and etalon was designed to provide a fiducial marker 0.08 nm wide. The light is coupled into the IDS at f/4, and a holographic diffuser increases homogeneity of the final image. Random and systematic errors in data analysis were assessed. The calibration is accurate to 0.003 nm, allowing for flow measurements accurate to 3 km/s. This calibration is superior to the previous method which used a time-averaged measurement along a chord believed to have zero net Doppler shift.

  15. Lunch-time food choices in preschoolers: relationships between absolute and relative intake of different food categories, and appetitive characteristics and weight

    PubMed Central

    Carnell, S; Pryor, K; Mais, LA; Warkentin, S; Benson, L; Cheng, R

    2016-01-01

    Children’s appetitive characteristics measured by parent-report questionnaires are reliably associated with body weight, as well as behavioral tests of appetite, but relatively little is known about relationships with food choice. As part of a larger preloading study, we served 4-5y olds from primary school classes five school lunches at which they were presented with the same standardized multi-item meal. Parents completed Child Eating Behavior Questionnaire (CEBQ) sub-scales assessing satiety responsiveness (CEBQ-SR), food responsiveness (CEBQ-FR) and enjoyment of food (CEBQ-EF), and children were weighed and measured. Despite differing preload conditions, children showed remarkable consistency of intake patterns across all five meals with day-to-day intra-class correlations in absolute and percentage intake of each food category ranging from .78 to .91. Higher CEBQ-SR was associated with lower mean intake of all food categories across all five meals, with the weakest association apparent for snack foods. Higher CEBQ-FR was associated with higher intake of white bread and fruits and vegetables, and higher CEBQ-EF was associated with greater intake of all categories, with the strongest association apparent for white bread. Analyses of intake of each food group as a percentage of total intake, treated here as an index of the child’s choice to consume relatively more or relatively less of each different food category when composing their total lunch-time meal, further suggested that children who were higher in CEBQ-SR ate relatively more snack foods and relatively less fruits and vegetables, while children with higher CEBQ-EF ate relatively less snack foods and relatively more white bread. Higher absolute intakes of white bread and snack foods were associated with higher BMI z score. CEBQ sub-scale associations with food intake variables were largely unchanged by controlling for daily metabolic needs. However, descriptive comparisons of lunch intakes with

  16. Use of Absolute and Comparative Performance Feedback in Absolute and Comparative Judgments and Decisions

    ERIC Educational Resources Information Center

    Moore, Don A.; Klein, William M. P.

    2008-01-01

    Which matters more--beliefs about absolute ability or ability relative to others? This study set out to compare the effects of such beliefs on satisfaction with performance, self-evaluations, and bets on future performance. In Experiment 1, undergraduate participants were told they had answered 20% correct, 80% correct, or were not given their…

  17. Comparative Study of Four Time Series Methods in Forecasting Typhoid Fever Incidence in China

    PubMed Central

    Zhang, Xingyu; Liu, Yuanyuan; Yang, Min; Zhang, Tao; Young, Alistair A.; Li, Xiaosong

    2013-01-01

    Accurate incidence forecasting of infectious disease is critical for early prevention and for better government strategic planning. In this paper, we present a comprehensive study of different forecasting methods based on the monthly incidence of typhoid fever. The seasonal autoregressive integrated moving average (SARIMA) model and three different models inspired by neural networks, namely, back propagation neural networks (BPNN), radial basis function neural networks (RBFNN), and Elman recurrent neural networks (ERNN), were compared. The differences between the SARIMA model and the neural networks, as well as their advantages and disadvantages, were summarized and discussed. The data obtained for 2005 to 2009 and for 2010 from the Chinese Center for Disease Control and Prevention were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results showed that RBFNN obtained the smallest MAE, MAPE and MSE in both the modeling and forecasting processes. The performances of the four models ranked in descending order were: RBFNN, ERNN, BPNN and the SARIMA model. PMID:23650546
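
    The three metrics used here are simple to compute; a minimal sketch, assuming `y_true` and `y_pred` are 1-D NumPy arrays of observed and forecast monthly incidence (with `y_true` nonzero for MAPE):

    ```python
    import numpy as np

    def mae(y_true, y_pred):
        return np.mean(np.abs(y_true - y_pred))

    def mse(y_true, y_pred):
        return np.mean((y_true - y_pred) ** 2)

    def mape(y_true, y_pred):
        # Expressed as a percentage; undefined when any observed value is zero.
        return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
    ```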

  18. Comparative study of four time series methods in forecasting typhoid fever incidence in China.

    PubMed

    Zhang, Xingyu; Liu, Yuanyuan; Yang, Min; Zhang, Tao; Young, Alistair A; Li, Xiaosong

    2013-01-01

    Accurate incidence forecasting of infectious disease is critical for early prevention and for better government strategic planning. In this paper, we present a comprehensive study of different forecasting methods based on the monthly incidence of typhoid fever. The seasonal autoregressive integrated moving average (SARIMA) model and three different models inspired by neural networks, namely, back propagation neural networks (BPNN), radial basis function neural networks (RBFNN), and Elman recurrent neural networks (ERNN), were compared. The differences between the SARIMA model and the neural networks, as well as their advantages and disadvantages, were summarized and discussed. The data obtained for 2005 to 2009 and for 2010 from the Chinese Center for Disease Control and Prevention were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results showed that RBFNN obtained the smallest MAE, MAPE and MSE in both the modeling and forecasting processes. The performances of the four models ranked in descending order were: RBFNN, ERNN, BPNN and the SARIMA model.

  19. Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi

    2017-08-01

    The aim of this paper was to study the correlation between crude palm oil (CPO) price, selected vegetable oil prices (soybean oil, coconut oil, olive oil, rapeseed oil and sunflower oil), crude oil and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and soybean oil price and also between the CPO price and crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and directional accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results compared to the multi-layer perceptron and Holt-Winters exponential smoothing methods.
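
    A hedged sketch of one such experiment: support vector regression on lagged monthly prices, scored with RMSE, MAPE, and directional accuracy. The lag window, kernel, and variable names are illustrative assumptions, and scikit-learn's SVR (built on libsvm's SMO-style solver) stands in for whatever implementation the authors used.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    def make_lagged(series, n_lags=12):
        # Each row holds the previous n_lags prices; the target is the next price.
        X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
        return X, series[n_lags:]

    def directional_accuracy(y_true, y_pred, y_prev):
        # Fraction of months where the forecast moves in the same direction as the data.
        return np.mean(np.sign(y_true - y_prev) == np.sign(y_pred - y_prev))

    # Hypothetical usage with `cpo_prices`, a 1-D array of monthly prices:
    # X, y = make_lagged(cpo_prices)
    # model = SVR(kernel="rbf", C=10.0).fit(X[:-36], y[:-36])
    # y_hat = model.predict(X[-36:])
    # rmse = np.sqrt(np.mean((y[-36:] - y_hat) ** 2))
    # mape = 100.0 * np.mean(np.abs((y[-36:] - y_hat) / y[-36:]))
    # da = directional_accuracy(y[-36:], y_hat, y[-37:-1])
    ```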

  20. A Special Application of Absolute Value Techniques in Authentic Problem Solving

    ERIC Educational Resources Information Center

    Stupel, Moshe

    2013-01-01

    There are at least five different equivalent definitions of the absolute value concept. In instances where the task is an equation or inequality with only one or two absolute value expressions, it is a worthy educational experience for learners to solve the task using each one of the definitions. On the other hand, if more than two absolute value…
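
    For reference, a few of the commonly cited equivalent definitions of the absolute value (the article itself may use a different selection):

    ```latex
    \[
    |x| =
    \begin{cases} x, & x \ge 0,\\ -x, & x < 0, \end{cases}
    \qquad
    |x| = \sqrt{x^{2}},
    \qquad
    |x| = \max(x,\,-x),
    \qquad
    |x| = x\,\operatorname{sgn}(x),
    \qquad
    |x| = d(x,0),
    \]
    ```

    where d denotes distance on the real number line.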

  1. Structure elucidation and absolute stereochemistry of isomeric monoterpene chromane esters.

    PubMed

    Batista, João M; Batista, Andrea N L; Mota, Jonas S; Cass, Quezia B; Kato, Massuo J; Bolzani, Vanderlan S; Freedman, Teresa B; López, Silvia N; Furlan, Maysa; Nafie, Laurence A

    2011-04-15

    Six novel monoterpene chromane esters were isolated from the aerial parts of Peperomia obtusifolia (Piperaceae) using chiral chromatography. This is the first time that chiral chromane esters of this kind, ones with a tethered chiral terpene, have been isolated in nature. Due to their structural features, it is not currently possible to assess directly their absolute stereochemistry using any of the standard classical approaches, such as X-ray crystallography, NMR, optical rotation, or electronic circular dichroism (ECD). Herein we report the absolute configuration of these molecules, involving four chiral centers, using vibrational circular dichroism (VCD) and density functional theory (DFT) (B3LYP/6-31G*) calculations. This work further reinforces the capability of VCD to determine unambiguously the absolute configuration of structurally complex molecules in solution, without crystallization or derivatization, and demonstrates the sensitivity of VCD to specify the absolute configuration for just one among a number of chiral centers. We also demonstrate the sufficiency of using the so-called inexpensive basis set 6-31G* compared to the triple-ζ basis set TZVP for absolute configuration analysis of larger molecules using VCD. Overall, this work extends our knowledge of secondary metabolites in plants and provides a straightforward way to determine the absolute configuration of complex natural products involving a chiral parent moiety combined with a chiral terpene adduct.

  2. Bio-Inspired Stretchable Absolute Pressure Sensor Network

    PubMed Central

    Guo, Yue; Li, Yu-Hung; Guo, Zhiqiang; Kim, Kyunglok; Chang, Fu-Kuo; Wang, Shan X.

    2016-01-01

    A bio-inspired absolute pressure sensor network has been developed. Absolute pressure sensors, distributed on multiple silicon islands, are connected as a network by stretchable polyimide wires. This sensor network, made on a 4’’ wafer, has 77 nodes and can be mounted on various curved surfaces to cover an area up to 0.64 m × 0.64 m, which is 100 times larger than its original size. Due to Micro Electro-Mechanical system (MEMS) surface micromachining technology, ultrathin sensing nodes can be realized with thicknesses of less than 100 µm. Additionally, good linearity and high sensitivity (~14 mV/V/bar) have been achieved. Since the MEMS sensor process has also been well integrated with a flexible polymer substrate process, the entire sensor network can be fabricated in a time-efficient and cost-effective manner. Moreover, an accurate pressure contour can be obtained from the sensor network. Therefore, this absolute pressure sensor network holds significant promise for smart vehicle applications, especially for unmanned aerial vehicles. PMID:26729134

  3. On the Perceptual Subprocess of Absolute Pitch.

    PubMed

    Kim, Seung-Goo; Knösche, Thomas R

    2017-01-01

    Absolute pitch (AP) is the rare ability of musicians to identify the pitch of tonal sound without external reference. While there have been behavioral and neuroimaging studies on the characteristics of AP, how AP is implemented in human brains remains largely unknown. AP can be viewed as comprising two subprocesses: perceptual (processing auditory input to extract a pitch chroma) and associative (linking an auditory representation of pitch chroma with a verbal/non-verbal label). In this review, we focus on the nature of the perceptual subprocess of AP. Two different models of how the perceptual subprocess works have been proposed: either via absolute pitch categorization (APC) or based on absolute pitch memory (APM). A major distinction between the two views is whether AP uses unique auditory processing (i.e., APC) that exists only in musicians with AP or is rooted in a common phenomenon (i.e., APM), only with heightened efficiency. We review relevant behavioral and neuroimaging evidence that supports each notion. Lastly, we list open questions and potential ideas to address them.

  4. On the Perceptual Subprocess of Absolute Pitch

    PubMed Central

    Kim, Seung-Goo; Knösche, Thomas R.

    2017-01-01

    Absolute pitch (AP) is the rare ability of musicians to identify the pitch of tonal sound without external reference. While there have been behavioral and neuroimaging studies on the characteristics of AP, how AP is implemented in human brains remains largely unknown. AP can be viewed as comprising two subprocesses: perceptual (processing auditory input to extract a pitch chroma) and associative (linking an auditory representation of pitch chroma with a verbal/non-verbal label). In this review, we focus on the nature of the perceptual subprocess of AP. Two different models of how the perceptual subprocess works have been proposed: either via absolute pitch categorization (APC) or based on absolute pitch memory (APM). A major distinction between the two views is whether AP uses unique auditory processing (i.e., APC) that exists only in musicians with AP or is rooted in a common phenomenon (i.e., APM), only with heightened efficiency. We review relevant behavioral and neuroimaging evidence that supports each notion. Lastly, we list open questions and potential ideas to address them. PMID:29085275

  5. The DiskMass Survey. II. Error Budget

    NASA Astrophysics Data System (ADS)

    Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas

    2010-06-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface-brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ_*^{disk}), and disk maximality (F_{*,max}^{disk} ≡ V_{*,max}^{disk} / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  6. Moral absolutism and ectopic pregnancy.

    PubMed

    Kaczor, C

    2001-02-01

    If one accepts a version of absolutism that excludes the intentional killing of any innocent human person from conception to natural death, ectopic pregnancy poses vexing difficulties. Given that the embryonic life almost certainly will die anyway, how can one retain one's moral principle and yet adequately respond to a situation that gravely threatens the life of the mother and her future fertility? The four options of treatment most often discussed in the literature are non-intervention, salpingectomy (removal of tube with embryo), salpingostomy (removal of embryo alone), and use of methotrexate (MXT). In this essay, I review these four options and introduce a fifth (the milking technique). In order to assess these options in terms of the absolutism mentioned, it will also be necessary to discuss various accounts of the intention/foresight distinction. I conclude that salpingectomy, salpingostomy, and the milking technique are compatible with absolutist presuppositions, but not the use of methotrexate.

  7. 13 CFR 126.701 - Can these subcontracting percentages requirements change?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Can these subcontracting percentages requirements change? 126.701 Section 126.701 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION HUBZONE PROGRAM Contract Performance Requirements § 126.701 Can these subcontracting percentages...

  8. 75 FR 35098 - Federal Employees' Retirement System; Normal Cost Percentages

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-21

    ... OFFICE OF PERSONNEL MANAGEMENT Federal Employees' Retirement System; Normal Cost Percentages...' Retirement System (FERS) Act of 1986. DATES: The revised normal cost percentages are effective at the... retirement system intended to cover most Federal employees hired after 1983. Most Federal employees hired...

  9. Absolute irradiance of the Moon for on-orbit calibration

    USGS Publications Warehouse

    Stone, T.C.; Kieffer, H.H.; ,

    2002-01-01

    The recognized need for on-orbit calibration of remote sensing imaging instruments drives the ROLO project effort to characterize the Moon for use as an absolute radiance source. For over 5 years the ground-based ROLO telescopes have acquired spatially-resolved lunar images in 23 VNIR (Moon diameter ~500 pixels) and 9 SWIR (~250 pixels) passbands at phase angles within ±90 degrees. A numerical model for lunar irradiance has been developed which fits hundreds of ROLO images in each band, corrected for atmospheric extinction and calibrated to absolute radiance, then integrated to irradiance. The band-coupled extinction algorithm uses absorption spectra of several gases and aerosols derived from MODTRAN to fit time-dependent component abundances to nightly observations of standard stars. The absolute radiance scale is based upon independent telescopic measurements of the star Vega. The fitting process yields uncertainties in lunar relative irradiance over small ranges of phase angle and the full range of lunar libration well under 0.5%. A larger source of uncertainty enters in the absolute solar spectral irradiance, especially in the SWIR, where solar models disagree by up to 6%. Results of ROLO model direct comparisons to spacecraft observations demonstrate the ability of the technique to track sensor responsivity drifts to sub-percent precision. Intercomparisons among instruments provide key insights into both calibration issues and the absolute scale for lunar irradiance.

  10. High-resolution absolute position detection using a multiple grating

    NASA Astrophysics Data System (ADS)

    Schilling, Ulrich; Drabarek, Pawel; Kuehnle, Goetz; Tiziani, Hans J.

    1996-08-01

    To control electro-mechanical engines, high-resolution linear and rotary encoders are needed. Interferometric methods (grating interferometers) promise a resolution of a few nanometers, but have an ambiguity range of some microns. Incremental encoders increase the absolute measurement range by counting the signal periods starting from a defined initial point. In many applications, however, it is not possible to move to this initial point, so that absolute encoders have to be used. Absolute encoders generally have a scale with two or more tracks placed next to each other. Therefore, they use a two-dimensional grating structure to measure a one-dimensional position. We present a new method, which uses a one-dimensional structure to determine the position in one dimension. It is based on a grating with a large grating period up to some millimeters, having the same diffraction efficiency in several predefined diffraction orders (multiple grating). By combining the phase signals of the different diffraction orders, it is possible to establish the position in an absolute range of the grating period with a resolution like incremental grating interferometers. The principal functionality was demonstrated by applying the multiple grating in a heterodyne grating interferometer. The heterodyne frequency was generated by a frequency modulated laser in an unbalanced interferometer. In experimental measurements an absolute range of 8 mm was obtained while achieving a resolution of 10 nm.
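
    An illustrative two-period phase-combination sketch (not the authors' exact algorithm, and with hypothetical periods): the phase difference between two effective grating periods locates the position coarsely over the long synthetic period, and that coarse value then selects the integer fringe order of the fine phase.

    ```python
    import numpy as np

    def absolute_position(phi1, phi2, p1, p2):
        """phi1, phi2: phases in radians measured at effective periods p1 < p2 (same length unit)."""
        p_synth = p1 * p2 / (p2 - p1)                              # unambiguous synthetic period
        x_coarse = ((phi1 - phi2) / (2 * np.pi) % 1.0) * p_synth   # coarse absolute position
        m = np.round((x_coarse - phi1 / (2 * np.pi) * p1) / p1)    # integer fringe order of the fine phase
        return (m + phi1 / (2 * np.pi)) * p1                       # fine absolute position

    # Self-check with a synthetic position (hypothetical periods in mm):
    x, p1, p2 = 3.21754, 0.5, 0.55
    phi1, phi2 = 2 * np.pi * (x / p1 % 1), 2 * np.pi * (x / p2 % 1)
    print(absolute_position(phi1, phi2, p1, p2))   # ≈ 3.21754, unambiguous over p_synth = 5.5 mm
    ```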

  11. Percentage compensation arrangements: suspect, but not illegal.

    PubMed

    Fedor, F P

    2001-01-01

    Percentage compensation arrangements, in which a service is outsourced to a contractor that is paid in accordance with the level of its performance, are widely used in many business sectors. The HHS Office of Inspector General (OIG) has shown concern that these arrangements in the healthcare industry may offer incentives for the performance of unnecessary services or cause false claims to be made to Federal healthcare programs in violation of the antikickback statute and the False Claims Act. Percentage compensation arrangements can work and need not run afoul of the law as long as the healthcare organization carefully oversees the arrangement and sets specific safeguards in place. These safeguards include screening contractors, carefully evaluating their compliance programs, and obligating them contractually to perform within the limits of the law.

  12. Statistical variability comparison in MODIS and AERONET derived aerosol optical depth over Indo-Gangetic Plains using time series modeling.

    PubMed

    Soni, Kirti; Parmar, Kulwinder Singh; Kapoor, Sangeeta; Kumar, Nishant

    2016-05-15

    Many studies of Aerosol Optical Depth (AOD) have been based on data derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), but the accuracy of satellite data in comparison to ground data derived from the AErosol RObotic NETwork (AERONET) has always been questionable. To address this, a comparative study of comprehensive ground-based and satellite data for the period 2001-2012 is modeled. A time series model is used for the prediction of AOD, and statistical variability is compared to assess the performance of the model in both cases. Root mean square error (RMSE), mean absolute percentage error (MAPE), stationary R-squared, R-squared, maximum absolute percentage error (MaxAPE), normalized Bayesian information criterion (NBIC) and Ljung-Box methods are used to check the applicability and validity of the developed ARIMA models, revealing significant precision in the model performance. It was found that it is possible to predict AOD by statistical modeling using time series obtained from past MODIS and AERONET data as input. Moreover, the results show that MODIS data can be estimated from AERONET data by adding 0.251627 ± 0.133589, and vice versa by subtracting. From the forecast of AOD for the next four years (2013-2017) obtained with the developed ARIMA model, it is concluded that the forecasted ground AOD has an increasing trend. Copyright © 2016 Elsevier B.V. All rights reserved.
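
    A hedged sketch of the ARIMA workflow described above using statsmodels; the model order, Ljung-Box lag, and forecast horizon are illustrative, not the ones identified in the paper.

    ```python
    import numpy as np
    from statsmodels.stats.diagnostic import acorr_ljungbox
    from statsmodels.tsa.arima.model import ARIMA

    def fit_and_forecast(aod_series, order=(1, 1, 1), horizon=4):
        res = ARIMA(np.asarray(aod_series, dtype=float), order=order).fit()
        # Ljung-Box test on the residuals, one of the adequacy checks named above.
        ljung_box = acorr_ljungbox(res.resid, lags=[12])
        return res.forecast(steps=horizon), ljung_box
    ```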

  13. A modified technique to reduce tibial keel cutting errors during an Oxford unicompartmental knee arthroplasty.

    PubMed

    Inui, Hiroshi; Taketomi, Shuji; Tahara, Keitarou; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae

    2017-03-01

    Bone cutting errors can cause malalignment of unicompartmental knee arthroplasties (UKA). Although the extent of tibial malalignment due to horizontal cutting errors has been well reported, there is a lack of studies evaluating malalignment as a consequence of keel cutting errors, particularly in the Oxford UKA. The purpose of this study was to examine keel cutting errors during Oxford UKA placement using a navigation system and to clarify whether two different tibial keel cutting techniques would have different error rates. The alignment of the tibial cut surface after a horizontal osteotomy and the surface of the tibial trial component was measured with a navigation system. Cutting error was defined as the angular difference between these measurements. The following two techniques were used: the standard "pushing" technique in 83 patients (group P) and a modified "dolphin" technique in 41 patients (group D). In all 123 patients studied, the mean absolute keel cutting error was 1.7° and 1.4° in the coronal and sagittal planes, respectively. In group P, there were 22 outlier patients (27 %) in the coronal plane and 13 (16 %) in the sagittal plane. Group D had three outlier patients (8 %) in the coronal plane and none (0 %) in the sagittal plane. Significant differences were observed in the outlier ratio of these techniques in both the sagittal (P = 0.014) and coronal (P = 0.008) planes. Our study demonstrated overall keel cutting errors of 1.7° in the coronal plane and 1.4° in the sagittal plane. The "dolphin" technique was found to significantly reduce keel cutting errors on the tibial side. This technique will be useful for accurate component positioning and therefore improve the longevity of Oxford UKAs. Retrospective comparative study, Level III.

  14. Prevalence and cost of hospital medical errors in the general and elderly United States populations.

    PubMed

    Mallow, Peter J; Pandya, Bhavik; Horblyuk, Ruslan; Kaplan, Harold S

    2013-12-01

    The primary objective of this study was to quantify the differences in the prevalence rate and costs of hospital medical errors between the general population and an elderly population aged ≥65 years. Methods from an actuarial study of medical errors were modified to identify medical errors in the Premier Hospital Database using data from 2009. Visits with more than four medical errors were removed from the population to avoid over-estimation of cost. Prevalence rates were calculated based on the total number of inpatient visits. There were 3,466,596 total inpatient visits in 2009. Of these, 1,230,836 (36%) occurred in people aged ≥ 65. The prevalence rate was 49 medical errors per 1000 inpatient visits in the general cohort and 79 medical errors per 1000 inpatient visits for the elderly cohort. The top 10 medical errors accounted for more than 80% of the total in the general cohort and the 65+ cohort. The most costly medical error for the general population was postoperative infection ($569,287,000). Pressure ulcers were most costly ($347,166,257) in the elderly population. This study was conducted with a hospital administrative database, and assumptions were necessary to identify medical errors in the database. Further, there was no method to identify errors of omission or misdiagnoses within the database. This study indicates that prevalence of hospital medical errors for the elderly is greater than the general population and the associated cost of medical errors in the elderly population is quite substantial. Hospitals which further focus their attention on medical errors in the elderly population may see a significant reduction in costs due to medical errors as a disproportionate percentage of medical errors occur in this age group.

  15. 76 FR 32242 - Federal Employees' Retirement System; Normal Cost Percentages

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-03

    ... OFFICE OF PERSONNEL MANAGEMENT Federal Employees' Retirement System; Normal Cost Percentages...' Retirement System (FERS) Act of 1986. DATES: The revised normal cost percentages are effective at the..., Public Law 99-335, created a new retirement system intended to cover most Federal employees hired after...

  16. Absolute Distance Measurement with the MSTAR Sensor

    NASA Technical Reports Server (NTRS)

    Lay, Oliver P.; Dubovitsky, Serge; Peters, Robert; Burger, Johan; Ahn, Seh-Won; Steier, William H.; Fetterman, Harrold R.; Chang, Yian

    2003-01-01

    The MSTAR sensor (Modulation Sideband Technology for Absolute Ranging) is a new system for measuring absolute distance, capable of resolving the integer cycle ambiguity of standard interferometers, and making it possible to measure distance with sub-nanometer accuracy. The sensor uses a single laser in conjunction with fast phase modulators and low frequency detectors. We describe the design of the system - the principle of operation, the metrology source, beamlaunching optics, and signal processing - and show results for target distances up to 1 meter. We then demonstrate how the system can be scaled to kilometer-scale distances.

  17. Absolute and Relative Socioeconomic Health Inequalities across Age Groups

    PubMed Central

    van Zon, Sander K. R.; Bültmann, Ute; Mendes de Leon, Carlos F.; Reijneveld, Sijmen A.

    2015-01-01

    Background The magnitude of socioeconomic health inequalities differs across age groups. It is less clear whether socioeconomic health inequalities differ across age groups by other factors that are known to affect the relation between socioeconomic position and health, like the indicator of socioeconomic position, the health outcome, gender, and as to whether socioeconomic health inequalities are measured in absolute or in relative terms. The aim is to investigate whether absolute and relative socioeconomic health inequalities differ across age groups by indicator of socioeconomic position, health outcome and gender. Methods The study sample was derived from the baseline measurement of the LifeLines Cohort Study and consisted of 95,432 participants. Socioeconomic position was measured as educational level and household income. Physical and mental health were measured with the RAND-36. Age concerned eleven 5-years age groups. Absolute inequalities were examined by comparing means. Relative inequalities were examined by comparing Gini-coefficients. Analyses were performed for both health outcomes by both educational level and household income. Analyses were performed for all age groups, and stratified by gender. Results Absolute and relative socioeconomic health inequalities differed across age groups by indicator of socioeconomic position, health outcome, and gender. Absolute inequalities were most pronounced for mental health by household income. They were larger in younger than older age groups. Relative inequalities were most pronounced for physical health by educational level. Gini-coefficients were largest in young age groups and smallest in older age groups. Conclusions Absolute and relative socioeconomic health inequalities differed cross-sectionally across age groups by indicator of socioeconomic position, health outcome and gender. Researchers should critically consider the implications of choosing a specific age group, in addition to the indicator of
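
    Relative inequalities here are summarized with Gini coefficients; a minimal sketch of that calculation, assuming `scores` is a 1-D array of individual RAND-36 scores (the study's exact, SES-stratified computation may differ):

    ```python
    import numpy as np

    def gini(scores):
        x = np.sort(np.asarray(scores, dtype=float))   # ascending order
        n = x.size
        cum = np.cumsum(x)
        # Equivalent to the mean absolute difference divided by twice the mean.
        return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

    # e.g. gini([50, 60, 70, 80]) ≈ 0.096 (hypothetical scores)
    ```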

  18. Some things ought never be done: moral absolutes in clinical ethics.

    PubMed

    Pellegrino, Edmund D

    2005-01-01

    Moral absolutes have little or no moral standing in our morally diverse modern society. Moral relativism is far more palatable for most ethicists and to the public at large. Yet, when pressed, every moral relativist will finally admit that there are some things which ought never be done. It is the rarest of moral relativists that will take rape, murder, theft, child sacrifice as morally neutral choices. In general ethics, the list of those things that must never be done will vary from person to person. In clinical ethics, however, the nature of the physician-patient relationship is such that certain moral absolutes are essential to the attainment of the good of the patient - the end of the relationship itself. These are all derivatives of the first moral absolute of all morality: Do good and avoid evil. In the clinical encounter, this absolute entails several subsidiary absolutes - act for the good of the patient, do not kill, keep promises, protect the dignity of the patient, do not lie, avoid complicity with evil. Each absolute is intrinsic to the healing and helping ends of the clinical encounter.

  19. Absolute and relative educational inequalities in depression in Europe.

    PubMed

    Dudal, Pieter; Bracke, Piet

    2016-09-01

    To investigate (1) the size of absolute and relative educational inequalities in depression, (2) their variation between European countries, and (3) their relationship with underlying prevalence rates. Analyses are based on the European Social Survey, rounds three and six (N = 57,419). Depression is measured using the shortened Centre of Epidemiologic Studies Depression Scale. Education is coded by use of the International Standard Classification of Education. Country-specific logistic regressions are applied. Results point to an elevated risk of depressive symptoms among the lower educated. The cross-national patterns differ between absolute and relative measurements. For men, large relative inequalities are found for countries including Denmark and Sweden, but are accompanied by small absolute inequalities. For women, large relative and absolute inequalities are found in Belgium, Bulgaria, and Hungary. Results point to an empirical association between inequalities and the underlying prevalence rates. However, the strength of the association is only moderate. This research stresses the importance of including both measurements for comparative research and suggests the inclusion of the level of population health in research into inequalities in health.

  20. Relativistic Absolutism in Moral Education.

    ERIC Educational Resources Information Center

    Vogt, W. Paul

    1982-01-01

    Discusses Emile Durkheim's "Moral Education: A Study in the Theory and Application of the Sociology of Education," which holds that morally healthy societies may vary in culture and organization but must possess absolute rules of moral behavior. Compares this moral theory with current theory and practice of American educators. (MJL)

  1. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ⇄ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  2. Determining absolute protein numbers by quantitative fluorescence microscopy.

    PubMed

    Verdaasdonk, Jolien Suzanne; Lawrimore, Josh; Bloom, Kerry

    2014-01-01

    Biological questions are increasingly being addressed using a wide range of quantitative analytical tools to examine protein complex composition. Knowledge of the absolute number of proteins present provides insights into organization, function, and maintenance and is used in mathematical modeling of complex cellular dynamics. In this chapter, we outline and describe three microscopy-based methods for determining absolute protein numbers--fluorescence correlation spectroscopy, stepwise photobleaching, and ratiometric comparison of fluorescence intensity to known standards. In addition, we discuss the various fluorescently labeled proteins that have been used as standards for both stepwise photobleaching and ratiometric comparison analysis. A detailed procedure for determining absolute protein number by ratiometric comparison is outlined in the second half of this chapter. Counting proteins by quantitative microscopy is a relatively simple yet very powerful analytical tool that will increase our understanding of protein complex composition. © 2014 Elsevier Inc. All rights reserved.
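
    The ratiometric approach reduces to a single proportion: a minimal sketch, with hypothetical intensities and a hypothetical in-cell standard of known copy number.

    ```python
    def copy_number(intensity_unknown, intensity_standard, copies_standard, background=0.0):
        """Background-corrected ratiometric estimate of absolute protein number."""
        ratio = (intensity_unknown - background) / (intensity_standard - background)
        return copies_standard * ratio

    # e.g. copy_number(5.2e5, 1.3e5, 32) -> 128.0 molecules (all values hypothetical)
    ```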

  3. Comparison of disagreement and error rates for three types of interdepartmental consultations.

    PubMed

    Renshaw, Andrew A; Gould, Edwin W

    2005-12-01

    Previous studies have documented a relatively high rate of disagreement for interdepartmental consultations, but follow-up is limited. We reviewed the results of 3 types of interdepartmental consultations in our hospital during a 2-year period, including 328 incoming, 928 pathologist-generated outgoing, and 227 patient- or clinician-generated outgoing consults. The disagreement rate was significantly higher for incoming consults (10.7%) than for outgoing pathologist-generated consults (5.9%) (P = .06). Disagreement rates for outgoing patient- or clinician-generated consults were not significantly different from either other type (7.9%). Additional consultation, biopsy, or testing follow-up was available for 19 (54%) of 35, 14 (25%) of 55, and 6 (33%) of 18 incoming, outgoing pathologist-generated, and outgoing patient- or clinician-generated consults with disagreements, respectively; the percentage of errors varied widely (15/19 [79%], 8/14 [57%], and 2/6 [33%], respectively), but differences were not significant (P >.05 for each). Review of the individual errors revealed specific diagnostic areas in which improvement in performance might be made. Disagreement rates for interdepartmental consultation ranged from 5.9% to 10.7%, but only 33% to 79% represented errors. Additional consultation, tissue, and testing results can aid in distinguishing disagreements from errors.

  4. A Bargain Price for Teaching about Percentage

    ERIC Educational Resources Information Center

    Lo, Jane-Jane; Ko, Yi-Yin

    2013-01-01

    Middle school is a crucial transition period for students as they move from concrete to algebraic ways of thinking. This article describes a sequence of instruction geared toward helping prospective middle school instructors teach the topic of percentages.

  5. Comparison of technique errors of intraoral radiographs taken on film v photostimulable phosphor (PSP) plates.

    PubMed

    Zhang, Wenjian; Huynh, Carolyn P; Abramovitch, Kenneth; Leon, Inga-Lill K; Arvizu, Liliana

    2012-06-01

    The objective of this study was to compare the technical errors of intraoral radiographs exposed on film v photostimulable phosphor (PSP) plates. The intraoral radiographic images exposed on phantoms from preclinical practical exams of dental and dental hygiene students were used. Each exam consisted of 10 designated periapical and bitewing views. A total of 107 film sets and 122 PSP sets were evaluated for technique errors, including placement, elongation, foreshortening, overlapping, cone cut, receptor bending, density, mounting, dot in apical area, and others. Some errors were further subcategorized as minor, major, or remake depending on the severity. The percentages of radiographs with various errors were compared between film and PSP by the Fisher's Exact Test. Compared with film, there was significantly less PSP foreshortening, elongation, and bending errors, but significantly more placement and overlapping errors. Using a wrong sized receptor due to the similarity of the color of the package sleeves is a unique PSP error. Optimum image quality is attainable with PSP plates as well as film. When switching from film to a PSP digital environment, more emphasis is necessary for placing the PSP plates, especially those with excessive packet edge, and then correcting the corresponding angulation for the beam alignment. Better design for improving intraoral visibility and easy identification of different sized PSP will improve the clinician's technical performance with this receptor.

  6. Simulation of Dose to Surrounding Normal Structures in Tangential Breast Radiotherapy Due to Setup Error

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhakar, Ramachandran; Department of Nuclear Medicine, All India Institute of Medical Sciences, New Delhi; Department of Radiology, All India Institute of Medical Sciences, New Delhi

    Setup error plays a significant role in the final treatment outcome in radiotherapy. The effect of setup error on the planning target volume (PTV) and surrounding critical structures has been studied, and the maximum allowed tolerance in setup error with minimal complications to the surrounding critical structures and acceptable tumor control probability is determined. Twelve patients were selected for this study after breast conservation surgery, of whom 8 had right-sided and 4 had left-sided breast tumors. Tangential fields were placed on the 3-dimensional computed tomography (3D-CT) dataset by the isocentric technique, and the doses to the PTV, ipsilateral lung (IL), contralateral lung (CLL), contralateral breast (CLB), heart, and liver were then computed from dose-volume histograms (DVHs). The planning isocenter was shifted by 3 and 10 mm in all 3 directions (X, Y, Z) to simulate the setup error encountered during treatment. Dosimetric studies were performed for each patient for the PTV according to ICRU 50 guidelines: mean doses to the PTV, IL, CLL, heart, CLB, and liver; the percentage of lung volume that received a dose of 20 Gy or more (V20); the percentage of heart volume that received a dose of 30 Gy or more (V30); and the volume of liver that received a dose of 50 Gy or more (V50) were calculated for all of the above-mentioned isocenter shifts and compared to the results with zero isocenter shift. Simulation of different isocenter shifts in all 3 directions showed that the isocentric shifts along the posterior direction had a very significant effect on the dose to the heart, IL, CLL, and CLB, followed by the lateral direction. The setup error in the isocenter should be strictly kept below 3 mm. The study shows that isocenter verification in the case of tangential fields should be performed to reduce future complications to adjacent normal tissues.

  7. Establishing Ion Ratio Thresholds Based on Absolute Peak Area for Absolute Protein Quantification using Protein Cleavage Isotope Dilution Mass Spectrometry

    PubMed Central

    Loziuk, Philip L.; Sederoff, Ronald R.; Chiang, Vincent L.; Muddiman, David C.

    2014-01-01

    Quantitative mass spectrometry has become central to the field of proteomics and metabolomics. Selected reaction monitoring is a widely used method for the absolute quantification of proteins and metabolites. This method renders high specificity using several product ions measured simultaneously. With growing interest in quantification of molecular species in complex biological samples, confident identification and quantitation has been of particular concern. A method to confirm purity or contamination of product ion spectra has become necessary for achieving accurate and precise quantification. Ion abundance ratio assessments were introduced to alleviate some of these issues. Ion abundance ratios are based on the consistent relative abundance (RA) of specific product ions with respect to the total abundance of all product ions. To date, no standardized method of implementing ion abundance ratios has been established. Thresholds by which product ion contamination is confirmed vary widely and are often arbitrary. This study sought to establish criteria by which the relative abundance of product ions can be evaluated in an absolute quantification experiment. These findings suggest that evaluation of the absolute ion abundance for any given transition is necessary in order to effectively implement RA thresholds. Overall, the variation of the RA value was observed to be relatively constant beyond an absolute threshold ion abundance. Finally, these RA values were observed to fluctuate significantly over a 3 year period, suggesting that these values should be assessed as close as possible to the time at which data is collected for quantification. PMID:25154770
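
    A minimal sketch of the check described above: each product ion's relative abundance (its area over the summed area of all monitored product ions) is compared with a reference RA, but only for transitions whose absolute area clears a minimum threshold. The tolerance and threshold values are illustrative.

    ```python
    import numpy as np

    def check_transitions(areas, reference_ra, ra_tolerance=0.20, min_abs_area=1e4):
        areas = np.asarray(areas, dtype=float)
        ra = areas / areas.sum()                      # relative abundance of each transition
        usable = areas >= min_abs_area                # trust RA only above an absolute abundance floor
        within = np.abs(ra - reference_ra) <= ra_tolerance * np.asarray(reference_ra)
        return usable & within                        # True where the transition looks uncontaminated
    ```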

  8. 26 CFR 1.410(b)-5 - Average benefit percentage test.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... percentage of a group of employees for a testing period is the average of the employee benefit percentages... different definitions of average annual compensation; (C) Use of different testing ages; (D) Use of...) Restriction on use of separate testing group determination method. A plan does not satisfy the average benefit...

  9. Absolute empirical rate coefficient for the excitation of the 117.6 nm line in C III

    NASA Astrophysics Data System (ADS)

    Gardner, L. D.; Daw, A. N.; Janzen, P. H.; Atkins, N.; Kohl, J. L.

    2005-05-01

    We have measured the absolute cross sections for electron impact excitation (EIE) of C²⁺ (2s2p ³P° - 2p² ³P) for energies from below threshold to 17 eV above and derived EIE rate coefficients required for astrophysical applications. The uncertainty in the rate coefficient at a typical solar temperature of formation of C²⁺ is less than ±6%. Ions are produced in a 5 GHz Electron Cyclotron Resonance (ECR) ion source, extracted, formed into a beam, and transported to a collision chamber where they collide with electrons from an electron beam inclined at 45 degrees. The beams are modulated and the radiation from the decay of the excited ions at λ 117.6 nm is detected synchronously using an absolutely calibrated optical system that subtends slightly over π steradians. The fractional population of the C²⁺ metastable state in the incident ion beam has been determined experimentally to be 0.42 ± 0.03 (1.65σ). At the reported ±15% total experimental uncertainty level (1.65σ), the measured structure and absolute scale of the cross section are in fairly good agreement with 6-term close-coupling R-matrix calculations and 90-term R-matrix with pseudo-states calculations, although some minor differences are seen just above threshold. As density-sensitive line intensity ratios vary by only about a factor of 5 as the density changes by nearly a factor of 100, even a 30% uncertainty in the excitation rate can lead to a factor of 3 error in density. This work is supported by NASA Supporting Research and Technology grants NAG5-9516 and NAG5-12863 in Solar and Heliospheric Physics and by the Smithsonian Astrophysical Observatory.

  10. Comparison of Percentage of Syllables Stuttered With Parent-Reported Severity Ratings as a Primary Outcome Measure in Clinical Trials of Early Stuttering Treatment.

    PubMed

    Onslow, Mark; Jones, Mark; O'Brian, Sue; Packman, Ann; Menzies, Ross; Lowe, Robyn; Arnott, Simone; Bridgman, Kate; de Sonneville, Caroline; Franken, Marie-Christine

    2018-04-17

    This report investigates whether parent-reported stuttering severity ratings (SRs) provide similar estimates of effect size as percentage of syllables stuttered (%SS) for randomized trials of early stuttering treatment with preschool children. Data sets from 3 randomized controlled trials of an early stuttering intervention were selected for analyses. Analyses included median changes and 95% confidence intervals per treatment group, Bland-Altman plots, analysis of covariance, and Spearman rho correlations. Both SRs and %SS showed large effect sizes from pretreatment to follow-up, although correlations between the 2 measures were moderate at best. Absolute agreement between the 2 measures improved as percentage reduction of stuttering frequency and severity increased, probably due to innate measurement limitations for participants with low baseline severity. Analysis of covariance for the 3 trials showed consistent results. There is no statistical reason to favor %SS over parent-reported stuttering SRs as primary outcomes for clinical trials of early stuttering treatment. However, there are logistical reasons to favor parent-reported stuttering SRs. We conclude that parent-reported rating of the child's typical stuttering severity for the week or month prior to each assessment is a justifiable alternative to %SS as a primary outcome measure in clinical trials of early stuttering treatment.

  11. The Global Error Assessment (GEA) model for the selection of differentially expressed genes in microarray data.

    PubMed

    Mansourian, Robert; Mutch, David M; Antille, Nicolas; Aubert, Jerome; Fogel, Paul; Le Goff, Jean-Marc; Moulin, Julie; Petrov, Anton; Rytz, Andreas; Voegel, Johannes J; Roberts, Matthew-Alan

    2004-11-01

    Microarray technology has become a powerful research tool in many fields of study; however, the cost of microarrays often results in the use of a low number of replicates (k). Under circumstances where k is low, it becomes difficult to perform standard statistical tests to extract the most biologically significant experimental results. Other more advanced statistical tests have been developed; however, their use and interpretation often remain difficult to implement in routine biological research. The present work outlines a method that achieves sufficient statistical power for selecting differentially expressed genes under conditions of low k, while remaining as an intuitive and computationally efficient procedure. The present study describes a Global Error Assessment (GEA) methodology to select differentially expressed genes in microarray datasets, and was developed using an in vitro experiment that compared control and interferon-gamma treated skin cells. In this experiment, up to nine replicates were used to confidently estimate error, thereby enabling methods of different statistical power to be compared. Gene expression results of a similar absolute expression are binned, so as to enable a highly accurate local estimate of the mean squared error within conditions. The model then relates variability of gene expression in each bin to absolute expression levels and uses this in a test derived from the classical ANOVA. The GEA selection method is compared with both the classical and permutational ANOVA tests, and demonstrates an increased stability, robustness and confidence in gene selection. A subset of the selected genes were validated by real-time reverse transcription-polymerase chain reaction (RT-PCR). All these results suggest that GEA methodology is (i) suitable for selection of differentially expressed genes in microarray data, (ii) intuitive and computationally efficient and (iii) especially advantageous under conditions of low k. The GEA code for R
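
    A minimal sketch of the binning idea described above, assuming log-scale expression matrices with k replicates per condition: genes are grouped by absolute expression, the within-condition mean squared error is pooled per bin, and the pooled value replaces the gene-wise error in an ANOVA-like statistic. The function name, bin count and F-like statistic are illustrative, not the published GEA implementation.

        import numpy as np

        def gea_like_scores(control, treated, n_bins=20):
            """ANOVA-like scores using a bin-pooled error estimate (illustrative only).

            control, treated: arrays of shape (n_genes, k) with log expression values.
            """
            mean_ctrl = control.mean(axis=1)
            mean_trt = treated.mean(axis=1)
            overall = (mean_ctrl + mean_trt) / 2.0
            k = control.shape[1]

            # Within-condition mean squared error per gene (classical ANOVA residual).
            mse_gene = (control.var(axis=1, ddof=1) + treated.var(axis=1, ddof=1)) / 2.0

            # Bin genes by absolute expression and pool the MSE within each bin,
            # so the error estimate no longer relies on only k replicates per gene.
            order = np.argsort(overall)
            mse_pooled = np.empty_like(mse_gene)
            for idx in np.array_split(order, n_bins):
                mse_pooled[idx] = mse_gene[idx].mean()

            # F-like statistic: between-condition variation over the pooled local error.
            between = k * (mean_ctrl - overall) ** 2 + k * (mean_trt - overall) ** 2
            return between / mse_pooled

        # Tiny synthetic example: 1000 genes, 3 replicates per condition.
        rng = np.random.default_rng(0)
        ctrl = rng.normal(8.0, 0.4, size=(1000, 3))
        trt = ctrl + rng.normal(0.0, 0.4, size=(1000, 3))
        print(gea_like_scores(ctrl, trt)[:5])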

  12. Essential Oils, Part VI: Sandalwood Oil, Ylang-Ylang Oil, and Jasmine Absolute.

    PubMed

    de Groot, Anton C; Schmidt, Erich

    In this article, some aspects of sandalwood oil, ylang-ylang oil, and jasmine absolute are discussed including their botanical origin, uses of the plants and the oils and absolute, chemical composition, contact allergy to and allergic contact dermatitis from these essential oils and absolute, and their causative allergenic ingredients.

  13. Performance evaluations of continuous glucose monitoring systems: precision absolute relative deviation is part of the assessment.

    PubMed

    Obermaier, Karin; Schmelzeisen-Redeker, Günther; Schoemaker, Michael; Klötzer, Hans-Martin; Kirchsteiger, Harald; Eikmeier, Heino; del Re, Luigi

    2013-07-01

    Even though a Clinical and Laboratory Standards Institute proposal exists on the design of studies and performance criteria for continuous glucose monitoring (CGM) systems, it has not yet led to a consistent evaluation of different systems, as no consensus has been reached on the reference method to evaluate them or on acceptance levels. As a consequence, performance assessment of CGM systems tends to be inconclusive, and a comparison of the outcome of different studies is difficult. Published information and available data (as presented in this issue of Journal of Diabetes Science and Technology by Freckmann and coauthors) are used to assess the suitability of several frequently used methods [International Organization for Standardization, continuous glucose error grid analysis, mean absolute relative deviation (MARD), precision absolute relative deviation (PARD)] when assessing performance of CGM systems in terms of accuracy and precision. The combined use of MARD and PARD seems to allow for better characterization of sensor performance. The use of different quantities for calibration and evaluation, e.g., capillary blood using a blood glucose (BG) meter versus venous blood using a laboratory measurement, introduces an additional error source. Using BG values measured in more or less large intervals as the only reference leads to a significant loss of information in comparison with the continuous sensor signal and possibly to an erroneous estimation of sensor performance during swings. Both can be improved using data from two identical CGM sensors worn by the same patient in parallel. Evaluation of CGM performance studies should follow an identical study design, including sufficient swings in glycemia. At least a part of the study participants should wear two identical CGM sensors in parallel. All data available should be used for evaluation, both by MARD and PARD, a good PARD value being a precondition to trust a good MARD value. Results should be analyzed and
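
    A minimal sketch of the two metrics as they are described above: MARD compares CGM readings with a reference blood glucose measure, while PARD compares two identical CGM sensors worn in parallel. The exact denominator conventions are assumptions, and the readings are made up.

        import numpy as np

        def mard(cgm, reference):
            """Mean absolute relative deviation of CGM readings vs. reference BG (%)."""
            cgm, reference = np.asarray(cgm, float), np.asarray(reference, float)
            return 100.0 * np.mean(np.abs(cgm - reference) / reference)

        def pard(cgm_a, cgm_b):
            """Precision absolute relative deviation between two parallel CGM sensors (%).

            The mean of the two sensor readings is used as denominator here; the exact
            convention may differ from the cited studies.
            """
            cgm_a, cgm_b = np.asarray(cgm_a, float), np.asarray(cgm_b, float)
            mean_ab = (cgm_a + cgm_b) / 2.0
            return 100.0 * np.mean(np.abs(cgm_a - cgm_b) / mean_ab)

        # Example: paired readings in mg/dL.
        print(mard([110, 150, 92], [118, 140, 95]))   # accuracy vs. reference
        print(pard([110, 150, 92], [105, 158, 90]))   # precision between sensors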

  14. Achievable accuracy of hip screw holding power estimation by insertion torque measurement.

    PubMed

    Erani, Paolo; Baleani, Massimiliano

    2018-02-01

    To ensure stability of proximal femoral fractures, the hip screw must firmly engage into the femoral head. Some studies suggested that screw holding power into trabecular bone could be evaluated, intraoperatively, through measurement of screw insertion torque. However, those studies used synthetic bone, instead of trabecular bone, as host material or they did not evaluate accuracy of predictions. We determined prediction accuracy, also assessing the impact of screw design and host material. We measured, under highly-repeatable experimental conditions, disregarding clinical procedure complexities, insertion torque and pullout strength of four screw designs, both in 120 synthetic and 80 trabecular bone specimens of variable density. For both host materials, we calculated the root-mean-square error and the mean-absolute-percentage error of predictions based on the best fitting model of torque-pullout data, in both single-screw and merged dataset. Predictions based on screw-specific regression models were the most accurate. Host material impacts on prediction accuracy: the replacement of synthetic with trabecular bone decreased both root-mean-square errors, from 0.54-0.76 kN to 0.21-0.40 kN, and mean-absolute-percentage errors, from 14-21% to 10-12%. However, holding power predicted on low insertion torque remained inaccurate, with errors up to 40% for torques below 1 Nm. In poor-quality trabecular bone, tissue inhomogeneities likely affect pullout strength and insertion torque to different extents, limiting the predictive power of the latter. This bias decreases when the screw engages good-quality bone. Under this condition, predictions become more accurate although this result must be confirmed by close in-vitro simulation of the clinical procedure. Copyright © 2018 Elsevier Ltd. All rights reserved.
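
    A small sketch of the prediction-error figures reported above, assuming a screw-specific linear fit of pullout strength on insertion torque (the study's best-fitting model may be nonlinear) and entirely hypothetical data.

        import numpy as np

        # Hypothetical screw-specific data: insertion torque (Nm), pullout strength (kN).
        torque = np.array([0.6, 1.1, 1.8, 2.4, 3.0, 3.7, 4.5])
        pullout = np.array([0.9, 1.6, 2.4, 3.1, 3.8, 4.9, 5.6])

        # Simple linear fit of pullout strength on insertion torque.
        slope, intercept = np.polyfit(torque, pullout, 1)
        predicted = slope * torque + intercept

        rmse = np.sqrt(np.mean((predicted - pullout) ** 2))
        mape = 100.0 * np.mean(np.abs(predicted - pullout) / pullout)
        print(f"RMSE = {rmse:.2f} kN, MAPE = {mape:.1f}%")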

  15. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    PubMed

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  16. Temporal Dynamics of Microbial Rhodopsin Fluorescence Reports Absolute Membrane Voltage

    PubMed Central

    Hou, Jennifer H.; Venkatachalam, Veena; Cohen, Adam E.

    2014-01-01

    Plasma membrane voltage is a fundamentally important property of a living cell; its value is tightly coupled to membrane transport, the dynamics of transmembrane proteins, and to intercellular communication. Accurate measurement of the membrane voltage could elucidate subtle changes in cellular physiology, but existing genetically encoded fluorescent voltage reporters are better at reporting relative changes than absolute numbers. We developed an Archaerhodopsin-based fluorescent voltage sensor whose time-domain response to a stepwise change in illumination encodes the absolute membrane voltage. We validated this sensor in human embryonic kidney cells. Measurements were robust to variation in imaging parameters and in gene expression levels, and reported voltage with an absolute accuracy of 10 mV. With further improvements in membrane trafficking and signal amplitude, time-domain encoding of absolute voltage could be applied to investigate many important and previously intractable bioelectric phenomena. PMID:24507604

  17. Interobserver error involved in independent attempts to measure cusp base areas of Pan M1s

    PubMed Central

    Bailey, Shara E; Pilbrow, Varsha C; Wood, Bernard A

    2004-01-01

    Cusp base areas measured from digitized images increase the amount of detailed quantitative information one can collect from post-canine crown morphology. Although this method is gaining wide usage for taxonomic analyses of extant and extinct hominoids, the techniques for digitizing images and taking measurements differ between researchers. The aim of this study was to investigate interobserver error in order to help assess the reliability of cusp base area measurement within extant and extinct hominoid taxa. Two of the authors measured individual cusp base areas and total cusp base area of 23 maxillary first molars (M1) of Pan. From these, relative cusp base areas were calculated. No statistically significant interobserver differences were found for either absolute or relative cusp base areas. On average the hypocone and paracone showed the least interobserver error (< 1%) whereas the protocone and metacone showed the most (2.6–4.5%). We suggest that the larger measurement error in the metacone/protocone is due primarily to either weakly defined fissure patterns and/or the presence of accessory occlusal features. Overall, levels of interobserver error are similar to those found for intraobserver error. The results of our study suggest that if certain prescribed standards are employed then cusp and crown base areas measured by different individuals can be pooled into a single database. PMID:15447691

  18. 242Pu absolute neutron-capture cross section measurement

    NASA Astrophysics Data System (ADS)

    Buckner, M. Q.; Wu, C. Y.; Henderson, R. A.; Bucher, B.; Chyzh, A.; Bredeweg, T. A.; Baramsai, B.; Couture, A.; Jandel, M.; Mosby, S.; O'Donnell, J. M.; Ullmann, J. L.

    2017-09-01

    The absolute neutron-capture cross section of 242Pu was measured at the Los Alamos Neutron Science Center using the Detector for Advanced Neutron-Capture Experiments array along with a compact parallel-plate avalanche counter for fission-fragment detection. During target fabrication, a small amount of 239Pu was added to the active target so that the absolute scale of the 242Pu(n,γ) cross section could be set according to the known 239Pu(n,f) resonance at En,R = 7.83 eV. The relative scale of the 242Pu(n,γ) cross section covers four orders of magnitude for incident neutron energies from thermal to ≈ 40 keV. The cross section reported in ENDF/B-VII.1 for the 242Pu(n,γ) En,R = 2.68 eV resonance was found to be 2.4% lower than the new absolute 242Pu(n,γ) cross section.

  19. [Prognostic value of absolute monocyte count in chronic lymphocytic leukaemia].

    PubMed

    Szerafin, László; Jakó, János; Riskó, Ferenc

    2015-04-01

    The low peripheral absolute lymphocyte and high monocyte count have been reported to correlate with poor clinical outcome in various lymphomas and other cancers. However, few data are known about the prognostic value of the absolute monocyte count in chronic lymphocytic leukaemia. The aim of the authors was to investigate the impact of the absolute monocyte count measured at the time of diagnosis in patients with chronic lymphocytic leukaemia on the time to treatment and overall survival. Between January 1, 2005 and December 31, 2012, 223 patients with newly-diagnosed chronic lymphocytic leukaemia were included. The rate of patients needing treatment, time to treatment, overall survival and causes of mortality based on Rai stages, CD38, ZAP-70 positivity and absolute monocyte count were analyzed. Therapy was necessary in 21.1%, 57.4%, 88.9%, 88.9% and 100% of patients in Rai stages 0, I, II, III and IV, respectively; in 61.9% and 60.8% of patients exhibiting CD38 and ZAP-70 positivity, respectively; and in 76.9%, 21.2% and 66.2% of patients if the absolute monocyte count was <0.25 G/l, between 0.25-0.75 G/l and >0.75 G/l, respectively. The median time to treatment and the median overall survival were 19.5, 65, and 35.5 months; and 41.5, 65, and 49.5 months according to the three groups of monocyte counts. The relative risk of beginning the therapy was 1.62 (p<0.01) in patients with an absolute monocyte count <0.25 G/l or >0.75 G/l, as compared to those with 0.25-0.75 G/l, and the risk for overall survival was 2.41 (p<0.01) in patients with an absolute monocyte count lower than 0.25 G/l as compared to those with higher than 0.25 G/l. The relative risks remained significant in Rai 0 patients, too. The leading causes of mortality were infections (41.7%) and the chronic lymphocytic leukaemia itself (58.3%) in patients with a low monocyte count, while tumours (25.9-35.3%) and other events (48.1 and 11.8%) occurred in patients with medium or high monocyte counts. Patients with low and high monocyte

  20. Confidence-Accuracy Calibration in Absolute and Relative Face Recognition Judgments

    ERIC Educational Resources Information Center

    Weber, Nathan; Brewer, Neil

    2004-01-01

    Confidence-accuracy (CA) calibration was examined for absolute and relative face recognition judgments as well as for recognition judgments from groups of stimuli presented simultaneously or sequentially (i.e., simultaneous or sequential mini-lineups). When the effect of difficulty was controlled, absolute and relative judgments produced…

  1. Infovigilance: reporting errors in official drug information sources.

    PubMed

    Fusier, Isabelle; Tollier, Corinne; Husson, Marie-Caroline

    2005-06-01

    The French drug database Thériaque (http://www.theriaque.org), developed by the Centre National Hospitalier d'Information sur le Médicament (CNHIM), is responsible for the dissemination of independent information about all drugs available in France. Each month the CNHIM pharmacists report problems due to inaccuracies in these sources to the French drug agency. In daily practice we devised the term "infovigilance": "Activity of error or inaccuracy notification in information sources which could be responsible for medication errors". The aim of this study was to evaluate the impact of CNHIM infovigilance on the contents of the Summary of Product Characteristics (SPCs). The study was a prospective study from 09/11/2001 to 31/12/2002. The problems related to the quality of information were classified into four types (inaccuracy/confusion, error/lack of information, discordance between SPC sections and discordance between generic SPCs). The outcome measures were: (1) the number of notifications and number of SPCs integrated into the database during the study period; and (2) the percentage of notifications for each type: with or without potential patient impact, with or without later correction of the SPC, per section. 2.7% (85/3151) of SPCs integrated into the database were the subject of a notification of a problem. Notifications according to type of problem were inaccuracy/confusion (32%), error/lack of information (13%), discordance between SPC sections (27%) and discordance between generic SPCs (28%). 55% of problems were evaluated as 'likely to have an impact on the patient' and 45% as 'unlikely to have an impact on the patient'. Of the problems reported to the French drug agency, 22 were corrected and new, updated SPCs were published with the corrections. Our efforts to improve the quality of drug information sources through a continuous "infovigilance" process need to be continued and extended to other information sources.

  2. Auditory working memory predicts individual differences in absolute pitch learning.

    PubMed

    Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C

    2015-07-01

    Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical period adult population, even if the performance typically achieved by this population is below the performance of a "true" AP possessor. The current studies attempt to understand the individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization - to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition. Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predict that individuals with higher WMC should be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories for post-critical period adults. Implications for understanding the mechanisms that underlie the

  3. Forecasting in foodservice: model development, testing, and evaluation.

    PubMed

    Miller, J L; Thompson, P A; Orabella, M M

    1991-05-01

    This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spread-sheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits.
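
    A rough sketch of the forecasting pipeline described above: deseasonalize the customer counts, apply simple exponential smoothing, multiply by a predicted preference statistic, and score the result with MSE, MAD and MAPE. The smoothing constant, seasonal indices and preference fraction are illustrative assumptions, not values from the study.

        import numpy as np

        def simple_exponential_smoothing(series, alpha=0.3):
            """One-step-ahead simple exponential smoothing forecasts."""
            forecasts = [series[0]]
            for value in series[:-1]:
                forecasts.append(alpha * value + (1 - alpha) * forecasts[-1])
            return np.array(forecasts)

        # Hypothetical daily customer counts and weekly seasonal indices.
        counts = np.array([420, 380, 405, 450, 520, 300, 280,
                           430, 390, 410, 460, 530, 310, 290], float)
        seasonal = np.tile([1.05, 0.95, 1.00, 1.10, 1.25, 0.75, 0.70], 2)

        deseasonalized = counts / seasonal
        count_forecast = simple_exponential_smoothing(deseasonalized) * seasonal

        # Menu-item demand = count forecast x predicted preference statistic.
        preference = 0.18                    # e.g. 18% of customers choose the item
        item_forecast = count_forecast * preference
        item_actual = counts * 0.18          # stand-in for observed item demand

        err = item_forecast - item_actual
        print(f"MSE = {np.mean(err ** 2):.1f}, MAD = {np.mean(np.abs(err)):.1f}, "
              f"MAPE = {100.0 * np.mean(np.abs(err) / item_actual):.1f}%")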

  4. Knowing what to expect, forecasting monthly emergency department visits: A time-series analysis.

    PubMed

    Bergs, Jochen; Heerinckx, Philipe; Verelst, Sandra

    2014-04-01

    To evaluate an automatic forecasting algorithm in order to predict the number of monthly emergency department (ED) visits one year ahead. We collected retrospective data of the number of monthly visiting patients for a 6-year period (2005-2011) from 4 Belgian Hospitals. We used an automated exponential smoothing approach to predict monthly visits during the year 2011 based on the first 5 years of the dataset. Several in- and post-sample forecasting accuracy measures were calculated. The automatic forecasting algorithm was able to predict monthly visits with a mean absolute percentage error ranging from 2.64% to 4.8%, indicating an accurate prediction. The mean absolute scaled error ranged from 0.53 to 0.68 indicating that, on average, the forecast was better compared with in-sample one-step forecast from the naïve method. The applied automated exponential smoothing approach provided useful predictions of the number of monthly visits a year in advance. Copyright © 2013 Elsevier Ltd. All rights reserved.
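
    The two accuracy measures quoted above can be sketched as follows; the monthly series is simulated and the forecast shown is a stand-in rather than the study's exponential smoothing output. MASE is scaled by the in-sample one-step naïve forecast, matching the comparison described.

        import numpy as np

        def mape(actual, forecast):
            actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
            return 100.0 * np.mean(np.abs(actual - forecast) / actual)

        def mase(actual, forecast, insample, m=1):
            """Mean absolute scaled error; m=1 scales by the naive one-step forecast."""
            actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
            insample = np.asarray(insample, float)
            scale = np.mean(np.abs(insample[m:] - insample[:-m]))
            return np.mean(np.abs(actual - forecast)) / scale

        # Hypothetical monthly ED visits: 5 years in-sample, 1 year held out.
        rng = np.random.default_rng(0)
        insample = 3000 + 200 * np.sin(np.arange(60) * 2 * np.pi / 12) + rng.normal(0, 60, 60)
        actual = 3000 + 200 * np.sin(np.arange(60, 72) * 2 * np.pi / 12) + rng.normal(0, 60, 12)
        forecast = 3000 + 200 * np.sin(np.arange(60, 72) * 2 * np.pi / 12)   # stand-in forecast

        print(f"MAPE = {mape(actual, forecast):.2f}%")
        print(f"MASE = {mase(actual, forecast, insample):.2f}")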

  5. Depicting mass flow rate of R134a/LPG refrigerant through straight and helical coiled adiabatic capillary tubes of vapor compression refrigeration system using artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Gill, Jatinder; Singh, Jagdev

    2018-07-01

    In this work, an experimental investigation is carried out with an R134a and LPG refrigerant mixture for depicting mass flow rate through straight and helical coil adiabatic capillary tubes in a vapor compression refrigeration system. Various experiments were conducted under steady-state conditions, by changing capillary tube length, inner diameter, coil diameter and degree of subcooling. The results showed that the mass flow rate through the helical coil capillary tube was about 5-16% lower than through the straight capillary tube. Dimensionless correlation and Artificial Neural Network (ANN) models were developed to predict mass flow rate. It was found that the dimensionless correlation and ANN model predictions agreed well with the experimental results, yielding an absolute fraction of variance of 0.961 and 0.988, a root mean square error of 0.489 and 0.275, and a mean absolute percentage error of 4.75% and 2.31%, respectively. The results suggest that the ANN model gives better statistical predictions than the dimensionless correlation model.
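
    A minimal sketch of an ANN regression of mass flow rate on tube geometry and subcooling, using scikit-learn as an assumed stand-in for the paper's network and invented training data; it ends with the in-sample MAPE, one of the statistics quoted above.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Hypothetical inputs: [tube length (m), inner diameter (mm),
        # coil diameter (mm, 0 = straight), degree of subcooling (K)].
        X = np.array([[3.0, 1.12, 0, 3], [3.0, 1.12, 140, 3],
                      [4.5, 1.27, 0, 5], [4.5, 1.27, 140, 5],
                      [6.0, 1.40, 0, 7], [6.0, 1.40, 140, 7]], float)
        y = np.array([21.5, 19.8, 24.0, 21.9, 26.3, 23.8])   # mass flow rate (kg/h)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                           random_state=0))
        model.fit(X, y)
        pred = model.predict(X)
        print(f"in-sample MAPE = {100.0 * np.mean(np.abs(pred - y) / y):.2f}%")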

  6. Method and apparatus for two-dimensional absolute optical encoding

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    2004-01-01

    This invention presents a two-dimensional absolute optical encoder and a method for determining position of an object in accordance with information from the encoder. The encoder of the present invention comprises a scale having a pattern being predetermined to indicate an absolute location on the scale, means for illuminating the scale, means for forming an image of the pattern; and detector means for outputting signals derived from the portion of the image of the pattern which lies within a field of view of the detector means, the field of view defining an image reference coordinate system, and analyzing means, receiving the signals from the detector means, for determining the absolute location of the object. There are two types of scale patterns presented in this invention: grid type and starfield type.

  7. New design and facilities for the International Database for Absolute Gravity Measurements (AGrav): A support for the Establishment of a new Global Absolute Gravity Reference System

    NASA Astrophysics Data System (ADS)

    Wziontek, Hartmut; Falk, Reinhard; Bonvalot, Sylvain; Rülke, Axel

    2017-04-01

    After about 10 years of successful joint operation by BGI and BKG, the International Database for Absolute Gravity Measurements "AGrav" (see references hereafter) underwent a major revision. The outdated web interface was replaced by a responsive, high-level web application framework based on Python and built on top of Pyramid. Functionality was added, such as interactive time series plots and a report generator, and the interactive map-based station overview was updated completely, now comprising clustering and the classification of stations. Furthermore, the database backend was migrated to PostgreSQL for better support of the application framework and long-term availability. As comparisons of absolute gravimeters (AG) become essential to realize a precise and uniform gravity standard, the database was extended to document the results at international and regional levels, including those performed at monitoring stations equipped with SGs. This makes it possible to link different AGs and to trace their equivalence back to the key comparisons under the auspices of the International Committee for Weights and Measures (CIPM) as the best metrological realization of the absolute gravity standard. In this way the new AGrav database accommodates the demands of the new Global Absolute Gravity Reference System as recommended by the IAG Resolution No. 2 adopted in Prague 2015. The new database will be presented with a focus on the new user interface and new functionality, calling on all institutions involved in absolute gravimetry to participate and contribute their information to build up as complete a picture of high-precision absolute gravimetry as possible and improve its visibility. A Digital Object Identifier (DOI) will be provided by BGI to contributors to give better traceability and facilitate the referencing of their gravity surveys. Links and references: BGI mirror site: http://bgi.obs-mip.fr/data-products/Gravity-Databases/Absolute-Gravity-data/ BKG mirror site: http

  8. A semiempirical error estimation technique for PWV derived from atmospheric radiosonde data

    NASA Astrophysics Data System (ADS)

    Castro-Almazán, Julio A.; Pérez-Jordán, Gabriel; Muñoz-Tuñón, Casiana

    2016-09-01

    A semiempirical method for estimating the error and optimum number of sampled levels in precipitable water vapour (PWV) determinations from atmospheric radiosoundings is proposed. Two terms have been considered: the uncertainties in the measurements and the sampling error. Also, the uncertainty has been separated in the variance and covariance components. The sampling and covariance components have been modelled from an empirical dataset of 205 high-vertical-resolution radiosounding profiles, equipped with Vaisala RS80 and RS92 sondes at four different locations: Güímar (GUI) in Tenerife, at sea level, and the astronomical observatory at Roque de los Muchachos (ORM, 2300 m a.s.l.) on La Palma (both on the Canary Islands, Spain), Lindenberg (LIN) in continental Germany, and Ny-Ålesund (NYA) in the Svalbard Islands, within the Arctic Circle. The balloons at the ORM were launched during intensive and unique site-testing runs carried out in 1990 and 1995, while the data for the other sites were obtained from radiosounding stations operating for a period of 1 year (2013-2014). The PWV values ranged between ˜ 0.9 and ˜ 41 mm. The method sub-samples the profile for error minimization. The result is the minimum error and the optimum number of levels. The results obtained in the four sites studied showed that the ORM is the driest of the four locations and the one with the fastest vertical decay of PWV. The exponential autocorrelation pressure lags ranged from 175 hPa (ORM) to 500 hPa (LIN). The results show a coherent behaviour with no biases as a function of the profile. The final error is roughly proportional to PWV whereas the optimum number of levels (N0) is the reverse. The value of N0 is less than 400 for 77 % of the profiles and the absolute errors are always < 0.6 mm. The median relative error is 2.0 ± 0.7 % and the 90th percentile P90 = 4.6 %. Therefore, whereas a radiosounding samples at least N0 uniform vertical levels, depending on the water
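
    For context, the quantity whose error is being modelled is the column integral of specific humidity over pressure; a minimal sketch of that standard integral (not of the paper's semiempirical error model), with a made-up sounding, is:

        import numpy as np

        # Standard column integral: PWV = (1 / (rho_w * g)) * integral of q dp.
        g, rho_w = 9.81, 1000.0                      # m/s^2, kg/m^3

        # Hypothetical sounding: pressure levels (Pa) and specific humidity (kg/kg).
        p = np.array([101300, 92500, 85000, 70000, 50000, 30000, 20000], float)
        q = np.array([0.012, 0.010, 0.008, 0.005, 0.002, 0.0004, 0.0001])

        # Integrate with pressure increasing towards the surface.
        pwv_m = np.trapz(q[::-1], p[::-1]) / (rho_w * g)   # metres of liquid water
        print(f"PWV = {pwv_m * 1000:.1f} mm")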

  9. The good, the bad and the outliers: automated detection of errors and outliers from groundwater hydrographs

    NASA Astrophysics Data System (ADS)

    Peterson, Tim J.; Western, Andrew W.; Cheng, Xiang

    2018-03-01

    Suspicious groundwater-level observations are common and can arise for many reasons ranging from an unforeseen biophysical process to bore failure and data management errors. Unforeseen observations may provide valuable insights that challenge existing expectations and can be deemed outliers, while monitoring and data handling failures can be deemed errors, and, if ignored, may compromise trend analysis and groundwater model calibration. Ideally, outliers and errors should be identified but to date this has been a subjective process that is not reproducible and is inefficient. This paper presents an approach to objectively and efficiently identify multiple types of errors and outliers. The approach requires only the observed groundwater hydrograph, requires no particular consideration of the hydrogeology, the drivers (e.g. pumping) or the monitoring frequency, and is freely available in the HydroSight toolbox. Herein, the algorithms and time-series model are detailed and applied to four observation bores with varying dynamics. The detection of outliers was most reliable when the observation data were acquired quarterly or more frequently. Outlier detection where the groundwater-level variance is nonstationary or the absolute trend increases rapidly was more challenging, with the former likely to result in an under-estimation of the number of outliers and the latter an overestimation in the number of outliers.

  10. Prediction of discretization error using the error transport equation

    NASA Astrophysics Data System (ADS)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
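
    A minimal sketch of the source-term idea on a 1-D steady advection-diffusion problem: fit the discrete solution with a smooth, differentiable curve and evaluate the residual of the governing equation from that fit. The single smoothing spline used here is a simplification of the locally fitted curves blended with a weighted spline described above, and the problem data are invented.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        # Hypothetical 1-D steady advection-diffusion problem: u*dphi/dx = nu*d2phi/dx2.
        u, nu = 1.0, 0.1
        x = np.linspace(0.0, 1.0, 41)                       # coarse grid
        phi_numerical = (np.exp(x * u / nu) - 1) / (np.exp(u / nu) - 1) \
                        + 0.002 * np.sin(8 * np.pi * x)     # exact solution + mock error

        # Fit the discrete solution with a continuously differentiable curve.
        spline = UnivariateSpline(x, phi_numerical, k=4, s=1e-6)

        # Residual of the governing equation evaluated from the fit; this plays the
        # role of the source term that drives the error transport equation.
        source = u * spline.derivative(1)(x) - nu * spline.derivative(2)(x)
        print(source[:5])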

  11. Forecasting of Water Consumptions Expenditure Using Holt-Winter’s and ARIMA

    NASA Astrophysics Data System (ADS)

    Razali, S. N. A. M.; Rusiman, M. S.; Zawawi, N. I.; Arbin, N.

    2018-04-01

    This study was carried out to forecast the water consumption expenditure of a Malaysian university, specifically University Tun Hussein Onn Malaysia (UTHM). The proposed Holt-Winter's and Auto-Regressive Integrated Moving Average (ARIMA) models were applied to forecast the water consumption expenditure in Ringgit Malaysia from 2006 until 2014. The two models were compared using the performance measures Mean Absolute Percentage Error (MAPE) and Mean Absolute Deviation (MAD). It was found that the ARIMA model showed better forecast accuracy, with lower values of MAPE and MAD. The analysis showed that the ARIMA (2,1,4) model provided a reasonable forecasting tool for university campus water usage.
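
    A sketch of the model comparison using statsmodels, with a simulated monthly expenditure series standing in for the UTHM data; the ARIMA(2,1,4) order is taken from the abstract, while the Holt-Winters settings and hold-out split are assumptions.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.holtwinters import ExponentialSmoothing
        from statsmodels.tsa.arima.model import ARIMA

        # Hypothetical monthly water expenditure (RM), 2006-2014.
        rng = np.random.default_rng(1)
        index = pd.date_range("2006-01", periods=108, freq="MS")
        values = (50000 + 300 * np.arange(108)
                  + 8000 * np.sin(np.arange(108) * 2 * np.pi / 12)
                  + rng.normal(0, 2000, 108))
        series = pd.Series(values, index=index)
        train, test = series[:-12], series[-12:]

        hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                                  seasonal_periods=12).fit()
        arima = ARIMA(train, order=(2, 1, 4)).fit()

        def mape(actual, forecast):
            return 100.0 * np.mean(np.abs(actual - forecast) / actual)

        def mad(actual, forecast):
            return np.mean(np.abs(actual - forecast))

        for name, fitted in [("Holt-Winters", hw), ("ARIMA(2,1,4)", arima)]:
            fc = fitted.forecast(12)
            print(name, round(mape(test.values, fc.values), 2),
                  round(mad(test.values, fc.values), 1))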

  12. 237Np absolute delayed neutron yield measurements

    NASA Astrophysics Data System (ADS)

    Doré, D.; Ledoux, X.; Nolte, R.; Gagnon-Moisan, F.; Thulliez, L.; Litaize, O.; Roettger, S.; Serot, O.

    2017-09-01

    237Np absolute delayed neutron yields have been measured at different incident neutron energies from 1.5 to 16 MeV. The experiment was performed at the Physikalisch-Technische Bundesanstalt (PTB) facility, where the Van de Graaff accelerator and the cyclotron CV28 delivered 9 different neutron energy beams using p+T, d+D and d+T reactions. The detection system is made up of twelve 3He tubes inserted into a polyethylene cylinder. In this paper, the experimental setup and the data analysis method are described. The evolution of the absolute DN yields as a function of the incident neutron beam energy is presented and compared with experimental data found in the literature and data from the libraries.

  13. An absolute measure for a key currency

    NASA Astrophysics Data System (ADS)

    Oya, Shunsuke; Aihara, Kazuyuki; Hirata, Yoshito

    It is generally considered that the US dollar and the euro are the key currencies in the world and in Europe, respectively. However, there is no absolute general measure for a key currency. Here, we investigate the 24-hour periodicity of foreign exchange markets using a recurrence plot, and define an absolute measure for a key currency based on the strength of the periodicity. Moreover, we analyze the time evolution of this measure. The results show that the credibility of the US dollar has not decreased significantly since the Lehman shock, when Lehman Brothers went bankrupt and shook the financial markets, and has even increased relative to that of the euro and that of the Japanese yen.

  14. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  15. Non-Invasive Method of Determining Absolute Intracranial Pressure

    NASA Technical Reports Server (NTRS)

    Yost, William T. (Inventor); Cantrell, John H., Jr. (Inventor); Hargens, Alan E. (Inventor)

    2004-01-01

    A method is presented for determining absolute intracranial pressure (ICP) in a patient. Skull expansion is monitored while changes in ICP are induced. The patient's blood pressure is measured when skull expansion is approximately zero. The measured blood pressure is indicative of a reference ICP value. Subsequently, the method causes a known change in ICP and measures the change in skull expansion associated therewith. The absolute ICP is then a function of the reference ICP value, the known change in ICP with its associated change in skull expansion, and a measured change in skull expansion.

  16. A highly accurate absolute gravimetric network for Albania, Kosovo and Montenegro

    NASA Astrophysics Data System (ADS)

    Ullrich, Christian; Ruess, Diethard; Butta, Hubert; Qirko, Kristaq; Pavicevic, Bozidar; Murat, Meha

    2016-04-01

    The objective of this project is to establish a basic gravity network in Albania, Kosovo and Montenegro to enable further investigations in geodetic and geophysical issues. Therefore, absolute gravity measurements were performed in these countries for the first time in history. The Norwegian mapping authority Kartverket is assisting the national mapping authorities in Kosovo (KCA) (Kosovo Cadastral Agency - Agjencia Kadastrale e Kosovës), Albania (ASIG) (Autoriteti Shtetëror i Informacionit Gjeohapësinor) and in Montenegro (REA) (Real Estate Administration of Montenegro - Uprava za nekretnine Crne Gore) in improving the geodetic frameworks. The gravity measurements are funded by Kartverket. The absolute gravimetric measurements were performed by BEV (Federal Office of Metrology and Surveying) with the absolute gravimeter FG5-242. As a national metrology institute (NMI), the Metrology Service of the BEV maintains the national standards for the realisation of the legal units of measurement and ensures their international equivalence and recognition. The laser and clock of the absolute gravimeter were calibrated before and after the measurements. The absolute gravimetric survey was carried out from September to October 2015. Finally, all 8 scheduled stations were successfully measured: three stations are located in Montenegro, two in Kosovo and three in Albania. The stations are distributed over the countries to establish a gravity network for each country. The vertical gradients were measured at all 8 stations with the relative gravimeter Scintrex CG5. The high quality of some of the absolute gravity stations means they can be used for gravity monitoring activities in the future. The measurement uncertainties of the absolute gravity measurements are around 2.5 μGal at all stations (1 μGal = 10⁻⁸ m/s²). In Montenegro the large gravity difference of 200 mGal between the stations Zabljak and Podgorica can even be used for the calibration of relative gravimeters

  17. Chemical composition of French mimosa absolute oil.

    PubMed

    Perriot, Rodolphe; Breme, Katharina; Meierhenrich, Uwe J; Carenini, Elise; Ferrando, Georges; Baldovini, Nicolas

    2010-02-10

    For decades, mimosa (Acacia dealbata) absolute oil has been used in the flavor and perfume industry. Today, it finds an application in over 80 perfumes, and its worldwide industrial production is estimated at five tons per year. Here we report on the chemical composition of French mimosa absolute oil. Straight-chain analogues from C6 to C26 with different functional groups (hydrocarbons, esters, aldehydes, diethyl acetals, alcohols, and ketones) were identified in the volatile fraction. Most of them are long-chain molecules: (Z)-heptadec-8-ene, heptadecane, nonadecane, and palmitic acid are the most abundant, and constituents such as 2-phenethyl alcohol, methyl anisate, and ethyl palmitate are present in smaller amounts. The heavier constituents were mainly triterpenoids such as lupenone and lupeol, which were identified as two of the main components. (Z)-Heptadec-8-ene, lupenone, and lupeol were quantified by GC-MS in SIM mode using external standards and represent 6%, 20%, and 7.8% (w/w) of the absolute oil. Moreover, odorant compounds were extracted by SPME and analyzed by GC-sniffing, leading to the perception of 57 odorant zones, of which 37 compounds were identified by their odorant description, mass spectrum, retention index, and injection of the reference compound.

  18. Measurement of absolute gravity acceleration in Firenze

    NASA Astrophysics Data System (ADS)

    de Angelis, M.; Greco, F.; Pistorio, A.; Poli, N.; Prevedelli, M.; Saccorotti, G.; Sorrentino, F.; Tino, G. M.

    2011-01-01

    This paper reports the results of accurate measurements of the acceleration of gravity g taken at two separate premises in the Polo Scientifico of the University of Firenze (Italy). In these laboratories, two separate experiments aiming at measuring the Newtonian constant and testing the Newtonian law at short distances are in progress. Both experiments require independent knowledge of the local value of g. The only available datum, pertaining to the Italian zero-order gravity network, was taken more than 20 years ago at a distance of more than 60 km from the study site. Gravity measurements were conducted using an FG5 absolute gravimeter, and accompanied by seismic recordings for evaluating the noise condition at the site. The absolute accelerations of gravity at the two laboratories are (980 492 160.6 ± 4.0) μGal and (980 492 048.3 ± 3.0) μGal for the European Laboratory for Non-Linear Spectroscopy (LENS) and the Dipartimento di Fisica e Astronomia, respectively. Beyond the two referenced experiments, the data presented here will serve as a benchmark for any future study requiring an accurate knowledge of the absolute value of the acceleration of gravity in the study region.

  19. Absolute Hugoniot measurements from a spherically convergent shock using x-ray radiography

    NASA Astrophysics Data System (ADS)

    Swift, Damian C.; Kritcher, Andrea L.; Hawreliak, James A.; Lazicki, Amy; MacPhee, Andrew; Bachmann, Benjamin; Döppner, Tilo; Nilsen, Joseph; Collins, Gilbert W.; Glenzer, Siegfried; Rothman, Stephen D.; Kraus, Dominik; Falcone, Roger W.

    2018-05-01

    The canonical high pressure equation of state measurement is to induce a shock wave in the sample material and measure two mechanical properties of the shocked material or shock wave. For accurate measurements, the experiment is normally designed to generate a planar shock which is as steady as possible in space and time, and a single state is measured. A converging shock strengthens as it propagates, so a range of shock pressures is induced in a single experiment. However, equation of state measurements must then account for spatial and temporal gradients. We have used x-ray radiography of spherically converging shocks to determine states along the shock Hugoniot. The radius-time history of the shock, and thus its speed, was measured by radiographing the position of the shock front as a function of time using an x-ray streak camera. The density profile of the shock was then inferred from the x-ray transmission at each instant of time. Simultaneous measurement of the density at the shock front and the shock speed determines an absolute mechanical Hugoniot state. The density profile was reconstructed using the known, unshocked density which strongly constrains the density jump at the shock front. The radiographic configuration and streak camera behavior were treated in detail to reduce systematic errors. Measurements were performed on the Omega and National Ignition Facility lasers, using a hohlraum to induce a spatially uniform drive over the outside of a solid, spherical sample and a laser-heated thermal plasma as an x-ray source for radiography. Absolute shock Hugoniot measurements were demonstrated for carbon-containing samples of different composition and initial density, up to temperatures at which K-shell ionization reduced the opacity behind the shock. Here we present the experimental method using measurements of polystyrene as an example.
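
    The statement that the shock speed plus the density at the shock front fix an absolute Hugoniot state follows from the Rankine-Hugoniot jump conditions; a minimal statement for an initially unshocked sample at rest (standard relations, not specific to this experiment) is, in LaTeX form:

        \rho_0 U_s = \rho\,(U_s - u_p), \qquad P - P_0 = \rho_0 U_s u_p,
        \quad\Rightarrow\quad u_p = U_s\left(1 - \frac{\rho_0}{\rho}\right),
        \qquad P = P_0 + \rho_0 U_s^2\left(1 - \frac{\rho_0}{\rho}\right).

    Here U_s is the shock speed from the radius-time history, u_p the particle speed, and \rho/\rho_0 the compression inferred from the radiographic density profile, so measuring U_s and the density jump determines the pressure behind the shock.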

  20. Analytical quality goals derived from the total deviation from patients' homeostatic set points, with a margin for analytical errors.

    PubMed

    Bolann, B J; Asberg, A

    2004-01-01

    The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) The stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability, should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.

  1. An absolute calibration system for millimeter-accuracy APOLLO measurements

    NASA Astrophysics Data System (ADS)

    Adelberger, E. G.; Battat, J. B. R.; Birkmeier, K. J.; Colmenares, N. R.; Davis, R.; Hoyle, C. D.; Huang, L. R.; McMillan, R. J.; Murphy, T. W., Jr.; Schlerman, E.; Skrobol, C.; Stubbs, C. W.; Zach, A.

    2017-12-01

    Lunar laser ranging provides a number of leading experimental tests of gravitation—important in our quest to unify general relativity and the standard model of physics. The apache point observatory lunar laser-ranging operation (APOLLO) has for years achieved median range precision at the  ∼2 mm level. Yet residuals in model-measurement comparisons are an order-of-magnitude larger, raising the question of whether the ranging data are not nearly as accurate as they are precise, or if the models are incomplete or ill-conditioned. This paper describes a new absolute calibration system (ACS) intended both as a tool for exposing and eliminating sources of systematic error, and also as a means to directly calibrate ranging data in situ. The system consists of a high-repetition-rate (80 MHz) laser emitting short (< 10 ps) pulses that are locked to a cesium clock. In essence, the ACS delivers photons to the APOLLO detector at exquisitely well-defined time intervals as a ‘truth’ input against which APOLLO’s timing performance may be judged and corrected. Preliminary analysis indicates no inaccuracies in APOLLO data beyond the  ∼3 mm level, suggesting that historical APOLLO data are of high quality and motivating continued work on model capabilities. The ACS provides the means to deliver APOLLO data both accurate and precise below the 2 mm level.

  2. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Treesearch

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  3. Laser interferometry method for absolute measurement of the acceleration of gravity

    NASA Technical Reports Server (NTRS)

    Hudson, O. K.

    1971-01-01

    The gravimeter permits more accurate and precise absolute measurement of g without reference to the Potsdam values as absolute standards. The device is basically a Michelson laser-beam interferometer in which one arm is a mass fitted with a corner-cube reflector.

  4. Quantum error correction for continuously detected errors with any number of error channels per qubit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Charlene; Wiseman, Howard; Jacobs, Kurt

    2004-08-01

    It was shown by Ahn, Wiseman, and Milburn [Phys. Rev. A 67, 052310 (2003)] that feedback control could be used as a quantum error correction process for errors induced by weak continuous measurement, given one perfectly measured error channel per qubit. Here we point out that this method can be easily extended to an arbitrary number of error channels per qubit. We show that the feedback protocols generated by our method encode n-2 logical qubits in n physical qubits, thus requiring just one more physical qubit than in the previous case.

  5. Absolute Coefficients and the Graphical Representation of Airfoil Characteristics

    NASA Technical Reports Server (NTRS)

    Munk, Max

    1921-01-01

    It is argued that there should be an agreement as to what conventions to use in determining absolute coefficients used in aeronautics and in how to plot those coefficients. Of particular importance are the absolute coefficients of lift and drag. The author argues for the use of the German method over the kind in common use in the United States and England, and for the Continental over the usual American and British method of graphically representing the characteristics of an airfoil. The author notes that, on the whole, it appears that the use of natural absolute coefficients in a polar diagram is the logical method for presentation of airfoil characteristics, and that serious consideration should be given to the advisability of adopting this method in all countries, in order to advance uniformity and accuracy in the science of aeronautics.

  6. Peripheral absolute threshold spectral sensitivity in retinitis pigmentosa.

    PubMed Central

    Massof, R W; Johnson, M A; Finkelstein, D

    1981-01-01

    Dark-adapted spectral sensitivities were measured in the peripheral retinas of 38 patients diagnosed as having typical retinitis pigmentosa (RP) and in 3 normal volunteers. The patients included those having autosomal dominant and autosomal recessive inheritance patterns. Results were analysed by comparisons with the CIE standard scotopic spectral visibility function and with Judd's modification of the photopic spectral visibility function, with consideration of contributions from changes in spectral transmission of preretinal media. The data show 3 general patterns. One group of patients had absolute threshold spectral sensitivities that were fit by Judd's photopic visibility curve. Absolute threshold spectral sensitivities for a second group of patients were fit by a normal scotopic spectral visibility curve. The third group of patients had absolute threshold spectral sensitivities that were fit by a combination of scotopic and photopic spectral visibility curves. The autosomal dominant and autosomal recessive modes of inheritance were represented in each group of patients. These data indicate that RP patients have normal rod and/or cone spectral sensitivities, and support the subclassification of patients described previously by Massof and Finkelstein. PMID:7459312

  7. Automated Quantification of the Landing Error Scoring System With a Markerless Motion-Capture System.

    PubMed

    Mauntel, Timothy C; Padua, Darin A; Stanley, Laura E; Frank, Barnett S; DiStefano, Lindsay J; Peck, Karen Y; Cameron, Kenneth L; Marshall, Stephen W

    2017-11-01

      The Landing Error Scoring System (LESS) can be used to identify individuals with an elevated risk of lower extremity injury. The limitation of the LESS is that raters identify movement errors from video replay, which is time-consuming and, therefore, may limit its use by clinicians. A markerless motion-capture system may be capable of automating LESS scoring, thereby removing this obstacle.   To determine the reliability of an automated markerless motion-capture system for scoring the LESS.   Cross-sectional study.   United States Military Academy.   A total of 57 healthy, physically active individuals (47 men, 10 women; age = 18.6 ± 0.6 years, height = 174.5 ± 6.7 cm, mass = 75.9 ± 9.2 kg).   Participants completed 3 jump-landing trials that were recorded by standard video cameras and a depth camera. Their movement quality was evaluated by expert LESS raters (standard video recording) using the LESS rubric and by software that automates LESS scoring (depth-camera data). We recorded an error for a LESS item if it was present on at least 2 of 3 jump-landing trials. We calculated κ statistics, prevalence- and bias-adjusted κ (PABAK) statistics, and percentage agreement for each LESS item. Interrater reliability was evaluated between the 2 expert rater scores and between a consensus expert score and the markerless motion-capture system score.   We observed reliability between the 2 expert LESS raters (average κ = 0.45 ± 0.35, average PABAK = 0.67 ± 0.34; percentage agreement = 0.83 ± 0.17). The markerless motion-capture system had similar reliability with consensus expert scores (average κ = 0.48 ± 0.40, average PABAK = 0.71 ± 0.27; percentage agreement = 0.85 ± 0.14). However, reliability was poor for 5 LESS items in both LESS score comparisons.   A markerless motion-capture system had the same level of reliability as expert LESS raters, suggesting that an automated system can accurately assess movement. Therefore, clinicians can use
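
    A small sketch of the three agreement statistics for one binary LESS item scored by two raters; for a two-category item PABAK reduces to 2 x observed agreement - 1. The rating vectors are invented.

        import numpy as np

        def agreement_stats(rater_a, rater_b):
            """Cohen's kappa, PABAK and percentage agreement for binary item scores."""
            a, b = np.asarray(rater_a), np.asarray(rater_b)
            p_o = np.mean(a == b)                          # observed agreement
            # Chance agreement from the marginal proportions of each rater.
            p_e = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
            kappa = (p_o - p_e) / (1 - p_e)
            pabak = 2 * p_o - 1                            # prevalence/bias adjusted
            return kappa, pabak, p_o

        # 1 = movement error present on >= 2 of 3 trials, 0 = absent.
        expert = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0])
        system = np.array([1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
        print(agreement_stats(expert, system))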

  8. Impact of Educational Activities in Reducing Pre-Analytical Laboratory Errors

    PubMed Central

    Al-Ghaithi, Hamed; Pathare, Anil; Al-Mamari, Sahimah; Villacrucis, Rodrigo; Fawaz, Naglaa; Alkindi, Salam

    2017-01-01

    Objectives Pre-analytic errors during diagnostic laboratory investigations can lead to increased patient morbidity and mortality. This study aimed to ascertain the effect of educational nursing activities on the incidence of pre-analytical errors resulting in non-conforming blood samples. Methods This study was conducted between January 2008 and December 2015. All specimens received at the Haematology Laboratory of the Sultan Qaboos University Hospital, Muscat, Oman, during this period were prospectively collected and analysed. Similar data from 2007 were collected retrospectively and used as a baseline for comparison. Non-conforming samples were defined as either clotted samples, haemolysed samples, use of the wrong anticoagulant, insufficient quantities of blood collected, incorrect/lack of labelling on a sample or lack of delivery of a sample in spite of a sample request. From 2008 onwards, multiple educational training activities directed at the hospital nursing staff and nursing students primarily responsible for blood collection were implemented on a regular basis. Results After initiating corrective measures in 2008, a progressive reduction in the percentage of non-conforming samples was observed from 2009 onwards. Despite a 127.84% increase in the total number of specimens received, there was a significant reduction in non-conforming samples from 0.29% in 2007 to 0.07% in 2015, resulting in an improvement of 75.86% (P <0.050). In particular, specimen identification errors decreased by 0.056%, with a 96.55% improvement. Conclusion Targeted educational activities directed primarily towards hospital nursing staff had a positive impact on the quality of laboratory specimens by significantly reducing pre-analytical errors. PMID:29062553
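
    The 75.86% figure quoted above is the relative reduction in the non-conforming sample rate; a one-line check of that arithmetic, with the rates taken from the abstract:

        baseline_rate, final_rate = 0.29, 0.07                 # % non-conforming samples
        improvement = 100.0 * (baseline_rate - final_rate) / baseline_rate
        print(f"relative reduction = {improvement:.2f}%")      # ~75.86%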

  9. Lunch-time food choices in preschoolers: Relationships between absolute and relative intakes of different food categories, and appetitive characteristics and weight.

    PubMed

    Carnell, S; Pryor, K; Mais, L A; Warkentin, S; Benson, L; Cheng, R

    2016-08-01

    Children's appetitive characteristics measured by parent-report questionnaires are reliably associated with body weight, as well as behavioral tests of appetite, but relatively little is known about relationships with food choice. As part of a larger preloading study, we served 4- to 5-year-olds from primary school classes five school lunches at which they were presented with the same standardized multi-item meal. Parents completed Child Eating Behavior Questionnaire (CEBQ) sub-scales assessing satiety responsiveness (CEBQ-SR), food responsiveness (CEBQ-FR) and enjoyment of food (CEBQ-EF), and children were weighed and measured. Despite differing preload conditions, children showed remarkable consistency of intake patterns across all five meals, with day-to-day intra-class correlations in absolute and percentage intake of each food category ranging from 0.78 to 0.91. Higher CEBQ-SR was associated with lower mean intake of all food categories across all five meals, with the weakest association apparent for snack foods. Higher CEBQ-FR was associated with higher intake of white bread and fruits and vegetables, and higher CEBQ-EF was associated with greater intake of all categories, with the strongest association apparent for white bread. Analyses of intake of each food group as a percentage of total intake, treated here as an index of the child's choice to consume relatively more or relatively less of each different food category when composing their total lunch-time meal, further suggested that children who were higher in CEBQ-SR ate relatively more snack foods and relatively fewer fruits and vegetables, while children with higher CEBQ-EF ate relatively less snack foods and relatively more white bread. Higher absolute intakes of white bread and snack foods were associated with higher BMI z scores. CEBQ sub-scale associations with food intake variables were largely unchanged by controlling for daily metabolic needs. However, descriptive comparisons of lunch intakes with

  10. Study on the refractive errors of school going children of Pokhara city in Nepal.

    PubMed

    Niroula, D R; Saha, C G

    Refractive errors are among the most common visual disorders found worldwide in school-going children and are one of the causes of blindness. They can easily be prevented if timely and proper measures are taken. In the Kathmandu valley and the Mechi Zone of Nepal, the distribution of refractive errors was found to be very high, but no records are available from the western part of Nepal. Considering the importance of refractive errors, the present study was undertaken in Pokhara city. A total of 964 subjects (474 boys, 490 girls) aged 10 to 19 years were selected from 6 schools representing different regions of Pokhara. After a preliminary examination of visual acuity with Snellen's and Jaeger's charts, the subjects were referred to the Manipal Teaching Hospital, Pokhara, for confirmation of the refractive errors. Sixty-two of the 964 school children (6.43%) had refractive errors, and myopia was the most common (4.05%). Refractive errors were more frequent in private-school children (9.29%) than in government-school children (4.23%), a statistically significant difference (P < 0.05). More boys (7.59%) than girls (5.31%) were found to have refractive errors. Furthermore, children on a vegetarian diet (10.52%) had a greater number of refractive errors than children on a non-vegetarian diet (6.17%). In the present study, the percentage distribution of myopia (4.05%) was higher than that of hyperopia (1.24%) and astigmatism (1.14%). Interestingly, refractive errors were significantly more common in private-school children than in government-school children; children attending private schools have higher socioeconomic status and spend more time on homework, watching television and using computers than government-school children. These near-work activities stress the children's eyes and might be one of the causes of developing myopia.

  11. Neural Sensitivity to Absolute and Relative Anticipated Reward in Adolescents

    PubMed Central

    Vaidya, Jatin G.; Knutson, Brian; O'Leary, Daniel S.; Block, Robert I.; Magnotta, Vincent

    2013-01-01

    Adolescence is associated with a dramatic increase in risky and impulsive behaviors that have been attributed to developmental differences in neural processing of rewards. In the present study, we sought to identify age differences in anticipation of absolute and relative rewards. To do so, we modified a commonly used monetary incentive delay (MID) task in order to examine brain activity to relative anticipated reward value (neural sensitivity to the value of a reward as a function of other available rewards). This design also made it possible to examine developmental differences in brain activation to absolute anticipated reward magnitude (the degree to which neural activity increases with increasing reward magnitude). While undergoing fMRI, 18 adolescent and 18 adult participants were presented with cues associated with different reward magnitudes. After the cue, participants responded to a target to win money on that trial. Presentation of cues was blocked such that two reward cues associated with $.20, $1.00, or $5.00 were in play on a given block. Thus, the relative value of the $1.00 reward varied depending on whether it was paired with a smaller or larger reward. Reflecting age differences in neural responses to relative anticipated reward (i.e., reference-dependent processing), adults, but not adolescents, demonstrated greater activity to a $1 reward when it was the larger of the two available rewards. Adults also demonstrated a more linear increase in ventral striatal activity as a function of increasing absolute reward magnitude compared to adolescents. Additionally, reduced ventral striatal sensitivity to absolute anticipated reward (i.e., the difference in activity to medium versus small rewards) correlated with higher levels of trait impulsivity. Thus, ventral striatal activity in anticipation of absolute and relative rewards develops with age. Absolute reward processing is also linked to individual differences in impulsivity. PMID:23544046

  12. Absolute position calculation for a desktop mobile rehabilitation robot based on three optical mouse sensors.

    PubMed

    Zabaleta, Haritz; Valencia, David; Perry, Joel; Veneman, Jan; Keller, Thierry

    2011-01-01

    ArmAssist is a wireless robot for post-stroke upper limb rehabilitation. Knowing the position of the arm is essential for any rehabilitation device. In this paper, we describe a method based on an artificial-landmark navigation system. The navigation system uses three optical mouse sensors, which makes it possible to build a cheap but reliable position sensor. Two of the sensors provide the data for odometry calculations, and the third takes very low-resolution pictures of a custom-designed mat. These pictures are processed by an optical symbol recognition (OSR) algorithm that estimates the orientation of the robot and recognizes the landmarks placed on the mat. A data fusion strategy is described that detects misclassified landmarks so that only reliable information is fused. The orientation given by the OSR algorithm is used to significantly improve the odometry, and the recognized landmarks are used to reference the odometry to an absolute coordinate system. The system was tested against a 3D motion-capture system. With the current mat configuration, in a field of motion of 710 × 450 mm, the maximum position estimation error was 49.61 mm, with an average error of 36.70 ± 22.50 mm. The average test duration was 36.5 seconds and the average path length was 4173 mm.
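
    The odometry itself is not spelled out in the abstract; the following Python sketch shows, under our own assumptions (hypothetical sensor offsets and function names, small-motion rigid-body kinematics), one way a planar pose could be propagated from two rigidly mounted mouse sensors, with the OSR heading substituted in when it is trusted. This is an illustration, not the ArmAssist implementation.

        import numpy as np

        def odometry_step(pose, d1, d2, r1, r2, osr_heading=None):
            """Advance a planar pose (x, y, theta) using incremental displacements d1, d2
            (2-vectors, robot frame) from two mouse sensors mounted at body-frame positions
            r1, r2. For a small rigid motion each sensor sees d_i ~= v + omega * perp(r_i),
            so v and omega follow from a least-squares fit."""
            perp = lambda r: np.array([-r[1], r[0]])
            A = np.vstack([np.hstack([np.eye(2), perp(r1)[:, None]]),
                           np.hstack([np.eye(2), perp(r2)[:, None]])])
            b = np.hstack([d1, d2])
            (vx, vy, dtheta), *_ = np.linalg.lstsq(A, b, rcond=None),
            vx, vy, dtheta = np.linalg.lstsq(A, b, rcond=None)[0]

            x, y, theta = pose
            c, s = np.cos(theta), np.sin(theta)      # rotate body-frame motion to world frame
            x += c * vx - s * vy
            y += s * vx + c * vy
            theta += dtheta
            if osr_heading is not None:              # absolute heading from the mat symbols
                theta = osr_heading
            return np.array([x, y, theta])

        pose = np.zeros(3)
        pose = odometry_step(pose, d1=[1.0, 0.1], d2=[1.0, -0.1],
                             r1=[0.0, 0.05], r2=[0.0, -0.05])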

  13. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere
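
    The abstract names, but does not define, the correlated-error treatment; the Python sketch below is only a generic illustration under our own assumptions (an AR(1) recursion for temporally correlated emission error and a quadrature combination of independent 1σ terms, all numbers invented), not the paper's actual framework.

        import numpy as np

        def correlated_error(sigma, rho, n, rng=np.random.default_rng(0)):
            """AR(1)-style temporally correlated random error with marginal standard
            deviation sigma and year-to-year correlation rho."""
            e = np.empty(n)
            e[0] = rng.normal(0.0, sigma)
            for t in range(1, n):
                e[t] = rho * e[t - 1] + rng.normal(0.0, sigma * np.sqrt(1.0 - rho**2))
            return e

        ff_err = correlated_error(sigma=0.5, rho=0.95, n=50)   # fossil fuel error, Pg C / yr (hypothetical)

        # Net uptake = (fossil fuel + land use emissions) - atmospheric growth; if the three
        # error terms were independent, their 2-sigma uncertainty would combine in quadrature:
        two_sigma_uptake = 2.0 * np.sqrt(0.5**2 + 0.25**2 + 0.15**2)   # hypothetical 1-sigma values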

  14. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    DOE PAGES

    Ballantyne, A. P.; Andres, R.; Houghton, R.; ...

    2015-04-30

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from

  15. Analytical minimization of synchronicity errors in stochastic identification

    NASA Astrophysics Data System (ADS)

    Bernal, D.

    2018-01-01

    An approach to minimize error due to synchronicity faults in stochastic system identification is presented. The scheme is based on shifting the time-domain signals so that the phases of the fundamental eigenvector estimated from the spectral density are zero. A threshold on the amplitude-weighted mean of the absolute values of these phases, above which signal shifting is deemed justified, is derived and found to be proportional to the first-mode damping ratio. It is shown that synchronicity faults do not map precisely to phasor multiplications in subspace identification and that the accuracy of spectral-density-estimated eigenvectors, for inputs with arbitrary spectral density, decreases with increasing mode number. The choice of a corrective strategy based on signal alignment, rather than on eigenvector adjustment using phasors, follows from these observations. Simulations that include noise and non-classical damping suggest that the scheme can provide sufficient accuracy to be of practical value.
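
    The decision metric is described above only in words; the sketch below (Python, under our own assumptions about the estimator: Welch cross-spectra and the leading eigenvector of the cross-spectral density matrix at the first-mode frequency) shows how the amplitude-weighted mean absolute phase might be computed before comparison with the damping-proportional threshold derived in the paper.

        import numpy as np
        from scipy.signal import csd

        def weighted_phase_metric(y, fs, f1, nperseg=2048):
            """y: (n_channels, n_samples) output records; f1: first-mode frequency (Hz).
            Returns the amplitude-weighted mean absolute phase (rad) of the fundamental
            eigenvector estimated from the cross-spectral density matrix at f1."""
            n = y.shape[0]
            S = np.zeros((n, n), dtype=complex)
            for i in range(n):
                for j in range(n):
                    f, Sij = csd(y[i], y[j], fs=fs, nperseg=nperseg)
                    S[i, j] = Sij[np.argmin(np.abs(f - f1))]
            _, vecs = np.linalg.eigh(S)                  # leading eigenvector ~ mode shape
            phi = vecs[:, -1]
            phi = phi * np.exp(-1j * np.angle(phi[0]))   # zero the reference-channel phase
            w = np.abs(phi)
            return float(np.sum(w * np.abs(np.angle(phi))) / np.sum(w))

        # If the metric exceeds the damping-proportional threshold, the records are
        # time-shifted (realigned) before identification.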

  16. Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans

    PubMed Central

    Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa

    2016-01-01

    Background: We previously proposed a hybrid model combining the autoregressive integrated moving average (ARIMA) and nonlinear autoregressive neural network (NARNN) models for forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County using our ARIMA-NARNN model, thereby further verifying the reliability of our hybrid model. Methods: We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling period covered the annual prevalence from 1956 to 2008, while the testing period covered 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure model performance. We reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. Results: The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or the single NARNN model. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. Conclusions: The ARIMA-NARNN model can be well applied to the analysis of surveillance data for early warning systems for the control and elimination of schistosomiasis. PMID:27023573
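
    For reference, the three error measures named in the Methods have standard definitions; a minimal Python sketch (the prevalence numbers are invented, not the study's data):

        import numpy as np

        def forecast_errors(actual, predicted):
            """Return MSE, MAE and MAPE (%) for paired actual/predicted series."""
            a, p = np.asarray(actual, float), np.asarray(predicted, float)
            err = a - p
            mse = np.mean(err**2)
            mae = np.mean(np.abs(err))
            mape = 100.0 * np.mean(np.abs(err / a))   # undefined if any actual value is zero
            return mse, mae, mape

        # Hypothetical annual prevalence (%) for four test years:
        actual    = [1.10, 0.95, 0.80, 0.85]
        predicted = [1.05, 1.00, 0.78, 0.90]
        mse, mae, mape = forecast_errors(actual, predicted)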

  17. Absolute flux density calibrations of radio sources: 2.3 GHz

    NASA Technical Reports Server (NTRS)

    Freiley, A. J.; Batelaan, P. D.; Bathker, D. A.

    1977-01-01

    A detailed description of a NASA/JPL Deep Space Network program to improve S-band gain calibrations of large-aperture antennas is reported. The program is considered unique in at least three ways. First, absolute gain calibrations of high-quality suppressed-sidelobe dual-mode horns provide a high-accuracy foundation for the program. Second, a very careful transfer-calibration technique using an artificial far-field coherent-wave source was used to accurately obtain the gain of one large (26 m) aperture. Third, using the calibrated large aperture directly, the absolute flux density of five selected galactic and extragalactic natural radio sources was determined with an absolute accuracy better than 2 percent, quoted at the familiar 1-sigma confidence level. The follow-on considerations for applying these results to an operational network of ground antennas are discussed. It is concluded that absolute gain accuracies within ±0.30 to 0.40 dB are possible, depending primarily on the repeatability (scatter) of the field data from Deep Space Network user stations.

  18. Assessing epistemic sophistication by considering domain-specific absolute and multiplicistic beliefs separately.

    PubMed

    Peter, Johannes; Rosman, Tom; Mayer, Anne-Kathrin; Leichner, Nikolas; Krampen, Günter

    2016-06-01

    Particularly in higher education, not only a view of science as a means of finding absolute truths (absolutism), but also a view of science as generally tentative (multiplicism) can be unsophisticated and obstructive for learning. Most quantitative epistemic belief inventories neglect this and understand epistemic sophistication as disagreement with absolute statements. This article suggests considering absolutism and multiplicism as separate dimensions. Following our understanding of epistemic sophistication as a cautious and reluctant endorsement of both positions, we assume evaluativism (a contextually adaptive view of knowledge as personally constructed and evidence-based) to be reflected by low agreement with both generalized absolute and generalized multiplicistic statements. Three studies with a total sample size of N = 416 psychology students were conducted. A domain-specific inventory containing both absolute and multiplicistic statements was developed. Expectations were tested by exploratory factor analysis, confirmatory factor analysis, and correlational analyses. Results revealed a two-factor solution with an absolute and a multiplicistic factor. Criterion validity of both factors was confirmed. Cross-sectional analyses revealed that agreement to generalized multiplicistic statements decreases with study progress. Moreover, consistent with our understanding of epistemic sophistication as a reluctant attitude towards generalized epistemic statements, evidence for a negative relationship between epistemic sophistication and need for cognitive closure was found. We recommend including multiplicistic statements into epistemic belief questionnaires and considering them as a separate dimension, especially when investigating individuals in later stages of epistemic development (i.e., in higher education). © 2015 The British Psychological Society.

  19. Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements

    NASA Technical Reports Server (NTRS)

    Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are 0.6 hPa in the free troposphere, with nearly a third 1.0 hPa at 26 km, where the 1.0 hPa error represents 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within 5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (30 km) can exceed 10 percent (25 percent of launches that reach 30 km exceed this threshold). These errors cause the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profiles to disagree by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
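
    To see how a pressure offset maps into an O3 mixing-ratio error, the Python sketch below uses the usual ECC conversion (mixing ratio proportional to ozone partial pressure divided by air pressure); the numerical values are invented, chosen only to mirror the roughly 5 percent effect of a 1.0 hPa offset near 26 km quoted above.

        def o3_mixing_ratio_ppmv(p_o3_mpa, p_hpa):
            """Ozone mixing ratio (ppmv) from ozone partial pressure (mPa) and air pressure (hPa)."""
            return 10.0 * p_o3_mpa / p_hpa

        # Illustrative numbers: near 26 km the ambient pressure is roughly 20 hPa.
        p_o3 = 12.0                   # mPa, hypothetical ozone partial pressure
        p_true, offset = 20.0, 1.0    # hPa
        true_o3mr = o3_mixing_ratio_ppmv(p_o3, p_true)
        biased_o3mr = o3_mixing_ratio_ppmv(p_o3, p_true + offset)
        rel_err = 100.0 * (biased_o3mr - true_o3mr) / true_o3mr   # about -4.8 percent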

  20. Absolute photon-flux measurements in the vacuum ultraviolet

    NASA Technical Reports Server (NTRS)

    Samson, J. A. R.; Haddad, G. N.

    1974-01-01

    Absolute photon-flux measurements in the vacuum ultraviolet have been extended to short wavelengths by use of rare-gas ionization chambers. The technique involves measuring the ion current as a function of the gas pressure in the ion chamber. The true value of the ion current, and hence the absolute photon flux, is obtained by extrapolating the ion current to zero gas pressure. Examples are given at 162 and 266 Å. The short-wavelength limit is determined only by the sensitivity of the current-measuring apparatus and by present knowledge of the photoionization processes that occur in the rare gases.
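
    A minimal Python sketch of the extrapolation step described above; the pressure and current values are invented, and the flux conversion assumes unit photoionization yield (one ion pair per absorbed photon), which is the usual assumption for rare gases in this spectral region.

        import numpy as np

        E_CHARGE = 1.602176634e-19   # C

        # Invented ion-current readings (A) at several ion-chamber gas pressures (Pa)
        pressure = np.array([4.0, 8.0, 12.0, 16.0, 20.0])
        current  = np.array([9.6, 9.2, 8.9, 8.5, 8.1]) * 1e-12

        # Straight-line fit; the intercept at zero pressure is taken as the "true" ion current
        slope, i0 = np.polyfit(pressure, current, 1)

        # With one ion pair per absorbed photon:
        photon_flux = i0 / E_CHARGE    # photons per second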

  1. Stimulus Probability Effects in Absolute Identification

    ERIC Educational Resources Information Center

    Kent, Christopher; Lamberts, Koen

    2016-01-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…

  2. Absolute gravity measurements in California

    NASA Astrophysics Data System (ADS)

    Zumberge, M. A.; Sasagawa, G.; Kappus, M.

    1986-08-01

    An absolute gravity meter that determines the local gravitational acceleration by timing a freely falling mass with a laser interferometer has been constructed. The instrument has made measurements at 11 sites in California, four in Nevada, and one in France. The uncertainty in the results is typically 10 microgal. Repeated measurements have been made at several of the sites; only one shows a substantial change in gravity.

  3. Relational versus absolute representation in categorization.

    PubMed

    Edwards, Darren J; Pothos, Emmanuel M; Perlman, Amotz

    2012-01-01

    This study explores relational-like and absolute-like representations in categorization. Although there is much evidence that categorization processes can involve information about both the particular physical properties of studied instances and abstract (relational) properties, there has been little work on the factors that lead to one kind of representation as opposed to the other. We tested 370 participants in 6 experiments, in which participants had to classify new items into predefined artificial categories. In 4 experiments, we observed a predominantly relational-like mode of classification, and in 2 experiments we observed a shift toward an absolute-like mode of classification. These results suggest 3 factors that promote a relational-like mode of classification: fewer items per group, more training groups, and the presence of a time delay. Overall, we propose that less information about the distributional properties of a category or weaker memory traces for the category exemplars (induced, e.g., by having smaller categories or a time delay) can encourage relational-like categorization.

  4. High-fidelity target sequencing of individual molecules identified using barcode sequences: de novo detection and absolute quantitation of mutations in plasma cell-free DNA from cancer patients.

    PubMed

    Kukita, Yoji; Matoba, Ryo; Uchida, Junji; Hamakawa, Takuya; Doki, Yuichiro; Imamura, Fumio; Kato, Kikuya

    2015-08-01

    Circulating tumour DNA (ctDNA) is an emerging field of cancer research. However, current ctDNA analysis is usually restricted to one or a few mutation sites due to technical limitations. In the case of massively parallel DNA sequencers, the number of false positives caused by a high read error rate is a major problem. In addition, the final sequence reads do not represent the original DNA population due to the global amplification step during template preparation. We established a high-fidelity target sequencing system for individual molecules identified in plasma cell-free DNA using barcode sequences; this system consists of the following two steps. (i) A novel target sequencing method that adds barcode sequences by adaptor ligation. This method uses linear amplification to eliminate the errors introduced during the early cycles of polymerase chain reaction. (ii) The monitoring and removal of erroneous barcode tags. This process involves identifying the individual molecules that have been sequenced and absolutely quantitating the number of mutations they carry. Using plasma cell-free DNA from patients with gastric or lung cancer, we demonstrated that the system achieved near-complete elimination of false positives and enabled de novo detection and absolute quantitation of mutations in plasma cell-free DNA. © The Author 2015. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  5. VizieR Online Data Catalog: R absolute magnitudes of Kuiper Belt objects (Peixinho+, 2012)

    NASA Astrophysics Data System (ADS)

    Peixinho, N.; Delsanti, A.; Guilbert-Lepoutre, A.; Gafeira, R.; Lacerda, P.

    2012-06-01

    Compilation of the absolute magnitudes HRα, B-R colors, and spectral features used in this work. For each object, we computed the average color index from the different papers presenting data obtained simultaneously in the B and R bands (e.g., contiguous observations within the same night). When an individual R apparent magnitude and date were available, we computed HRα = R - 5 log(rΔ), where R is the R-band magnitude and r and Δ are the helio- and geocentric distances (in AU) at the time of observation. When V and V-R colors were available, we derived an R and then an HRα value. We did not correct for the phase-angle (α) effect. The table also includes spectral information on the presence of water ice, methanol, methane, or confirmed featureless spectra, as available in the literature. We highlight only the cases with clear bands in the spectrum that were reported or confirmed by other work. The 1st column indicates the object identification number and name or provisional designation; the 2nd column the dynamical class; the 3rd column the average HRα value and 1-σ error bars; the 4th column the average B-R color and 1-σ error bars; the 5th column the most important spectral features detected; and the 6th column the bibliographic references used for each object. (3 data files).
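
    A small Python sketch of the reduction to absolute magnitude quoted above; the numerical values in the example are invented, and no phase-angle correction is applied, matching the catalogue convention.

        import math

        def absolute_magnitude_hr(r_mag, r_au, delta_au):
            """H(R, alpha) = R - 5 log10(r * Delta), with r and Delta in AU."""
            return r_mag - 5.0 * math.log10(r_au * delta_au)

        # Hypothetical example: R = 22.3 at r = 43.1 AU and Delta = 42.2 AU gives H ~ 6.0
        h = absolute_magnitude_hr(22.3, 43.1, 42.2)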

  6. 28 CFR 19.4 - Cost and percentage estimates.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RECOVERY OF MISSING CHILDREN § 19.4 Cost and percentage estimates. It is estimated that this program will... administrative costs. It is DOJ's objective that 50 percent of DOJ penalty mail contain missing children...

  7. Numerical Experiments in Error Control for Sound Propagation Using a Damping Layer Boundary Treatment

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    2017-01-01

    This paper presents results from numerical experiments for controlling the error caused by a damping layer boundary treatment when simulating the propagation of an acoustic signal from a continuous pressure source. The computations are with the 2D Linearized Euler Equations (LEE) for both a uniform mean flow and a steady parallel jet. The numerical experiments are with algorithms that are third, fifth, seventh and ninth order accurate in space and time. The numerical domain is enclosed in a damping layer boundary treatment. The damping is implemented in a time-accurate manner, with simple polynomial damping profiles of second, fourth, sixth and eighth power. At the outer boundaries of the damping layer the propagating solution is uniformly set to zero. The complete boundary treatment is remarkably simple and intrinsically independent of the dimension of the spatial domain. The reported results show the relative effect on the error from the boundary treatment by varying the damping layer width, damping profile power, damping amplitude, propagation time, grid resolution and algorithm order. The issue that is being addressed is not the accuracy of the numerical solution when compared to a mathematical solution, but the effect of the complete boundary treatment on the numerical solution, and to what degree the error in the numerical solution from the complete boundary treatment can be controlled. We report maximum relative absolute errors from just the boundary treatment that range from O[10^-2] to O[10^-7].
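
    The paper's exact damping formulation is not reproduced in the abstract; the Python sketch below shows, with invented layer parameters, the generic polynomial profile such a treatment typically uses and how a damping term commonly enters a semi-discrete update.

        import numpy as np

        def damping_profile(x, x_inner, width, amplitude, power):
            """Polynomial damping coefficient: zero in the interior and rising as
            ((x - x_inner) / width)**power across the layer."""
            xi = np.clip((x - x_inner) / width, 0.0, 1.0)
            return amplitude * xi**power

        # Hypothetical 1-D setup: interior ends at x = 1.0, layer width 0.25;
        # amplitude and power are the knobs varied in the experiments described above.
        x = np.linspace(0.0, 1.25, 251)
        sigma = damping_profile(x, x_inner=1.0, width=0.25, amplitude=50.0, power=4)

        # A damping term of this kind is commonly added to the semi-discrete equations as
        #   du/dt = L(u) - sigma * u,
        # driving the solution toward zero ahead of the outer boundary, where it is set to zero.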

  8. Quantum error-correction failure distributions: Comparison of coherent and stochastic error models

    NASA Astrophysics Data System (ADS)

    Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.

    2017-06-01

    We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d = 3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.

  9. Adverse Drug Events and Medication Errors in African Hospitals: A Systematic Review.

    PubMed

    Mekonnen, Alemayehu B; Alhawassi, Tariq M; McLachlan, Andrew J; Brien, Jo-Anne E

    2018-03-01

    Medication errors and adverse drug events are universal problems contributing to patient harm but the magnitude of these problems in Africa remains unclear. The objective of this study was to systematically investigate the literature on the extent of medication errors and adverse drug events, and the factors contributing to medication errors in African hospitals. We searched PubMed, MEDLINE, EMBASE, Web of Science and Global Health databases from inception to 31 August, 2017 and hand searched the reference lists of included studies. Original research studies of any design published in English that investigated adverse drug events and/or medication errors in any patient population in the hospital setting in Africa were included. Descriptive statistics including median and interquartile range were presented. Fifty-one studies were included; of these, 33 focused on medication errors, 15 on adverse drug events, and three studies focused on medication errors and adverse drug events. These studies were conducted in nine (of the 54) African countries. In any patient population, the median (interquartile range) percentage of patients reported to have experienced any suspected adverse drug event at hospital admission was 8.4% (4.5-20.1%), while adverse drug events causing admission were reported in 2.8% (0.7-6.4%) of patients but it was reported that a median of 43.5% (20.0-47.0%) of the adverse drug events were deemed preventable. Similarly, the median mortality rate attributed to adverse drug events was reported to be 0.1% (interquartile range 0.0-0.3%). The most commonly reported types of medication errors were prescribing errors, occurring in a median of 57.4% (interquartile range 22.8-72.8%) of all prescriptions and a median of 15.5% (interquartile range 7.5-50.6%) of the prescriptions evaluated had dosing problems. Major contributing factors for medication errors reported in these studies were individual practitioner factors (e.g. fatigue and inadequate knowledge

  10. 10 CFR 490.706 - Procedure for modifying the biodiesel component percentage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Procedure for modifying the biodiesel component percentage... TRANSPORTATION PROGRAM Biodiesel Fuel Use Credit § 490.706 Procedure for modifying the biodiesel component percentage. (a) DOE may, by rule, lower the 20 percent biodiesel volume requirement of this subpart for...

  11. 10 CFR 490.706 - Procedure for modifying the biodiesel component percentage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Procedure for modifying the biodiesel component percentage... TRANSPORTATION PROGRAM Biodiesel Fuel Use Credit § 490.706 Procedure for modifying the biodiesel component percentage. (a) DOE may, by rule, lower the 20 percent biodiesel volume requirement of this subpart for...

  12. 10 CFR 490.706 - Procedure for modifying the biodiesel component percentage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Procedure for modifying the biodiesel component percentage... TRANSPORTATION PROGRAM Biodiesel Fuel Use Credit § 490.706 Procedure for modifying the biodiesel component percentage. (a) DOE may, by rule, lower the 20 percent biodiesel volume requirement of this subpart for...

  13. 10 CFR 490.706 - Procedure for modifying the biodiesel component percentage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Procedure for modifying the biodiesel component percentage... TRANSPORTATION PROGRAM Biodiesel Fuel Use Credit § 490.706 Procedure for modifying the biodiesel component percentage. (a) DOE may, by rule, lower the 20 percent biodiesel volume requirement of this subpart for...

  14. 10 CFR 490.706 - Procedure for modifying the biodiesel component percentage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Procedure for modifying the biodiesel component percentage... TRANSPORTATION PROGRAM Biodiesel Fuel Use Credit § 490.706 Procedure for modifying the biodiesel component percentage. (a) DOE may, by rule, lower the 20 percent biodiesel volume requirement of this subpart for...

  15. Standardization approaches in absolute quantitative proteomics with mass spectrometry.

    PubMed

    Calderón-Celis, Francisco; Encinar, Jorge Ruiz; Sanz-Medel, Alfredo

    2017-07-31

    Mass spectrometry-based approaches have enabled important breakthroughs in quantitative proteomics in recent decades. This development is reflected in better quantitative assessment of protein levels as well as in the understanding of post-translational modifications and of protein complexes and networks. The focus of quantitative proteomics has now shifted from the relative determination of proteins (i.e., differential expression between two or more cellular states) to absolute quantity determination, which is required for a more thorough characterization of biological models and comprehension of proteome dynamics, as well as for the search for and validation of novel protein biomarkers. However, the physico-chemical environment of the analyte species strongly affects the ionization efficiency in most mass spectrometry (MS) types, which therefore require specially designed standardization approaches to provide absolute quantification. The most common of these approaches nowadays include (i) the use of stable isotope-labeled peptide standards, isotopologues of the target proteotypic peptides expected after tryptic digestion of the target protein; (ii) the use of stable isotope-labeled protein standards to compensate for sample preparation, sample loss, and proteolysis steps; (iii) isobaric reagents, which after fragmentation in the MS/MS analysis provide a final detectable mass shift and can be used to tag both analyte and standard samples; (iv) label-free approaches in which the absolute quantitative data are obtained not through any kind of labeling but from computational normalization of the raw data and adequate standards; and (v) elemental mass spectrometry-based workflows able to directly provide absolute quantification of peptides/proteins that contain an ICP-detectable element. A critical insight from the Analytical Chemistry perspective of the different standardization approaches and their combinations used so far for absolute quantitative MS-based (molecular and

  16. Towards absolute laser spectroscopic CO2 isotope ratio measurements

    NASA Astrophysics Data System (ADS)

    Anyangwe Nwaboh, Javis; Werhahn, Olav; Ebert, Volker

    2017-04-01

    Knowledge of the isotope composition of carbon dioxide (CO2) in the atmosphere is necessary to identify sources and sinks of this key greenhouse gas. In recent years, laser spectroscopic techniques such as cavity ring-down spectroscopy (CRDS) and tunable diode laser absorption spectroscopy (TDLAS) have been shown to perform accurate isotope ratio measurements for CO2 and other gases like water vapour (H2O) [1,2]. Typically, isotope ratios are reported in the literature relative to reference materials provided by, e.g., the International Atomic Energy Agency (IAEA). However, there could be some benefit if field-deployable absolute isotope ratio measurement methods were developed to address issues such as exhausted reference material like the Pee Dee Belemnite (PDB) standard. Absolute isotope ratio measurements would be particularly important for situations where reference materials do not even exist. Here, we present CRDS and TDLAS-based absolute isotope ratios (13C/12C) in atmospheric CO2. We demonstrate the capabilities of these methods by measuring CO2 isotope ratios in gas standards. We compare our results to values reported for the isotope-certified gas standards. Guide to the expression of uncertainty in measurement (GUM) compliant uncertainty budgets on the CRDS and TDLAS absolute isotope ratio measurements are presented, and traceability is addressed. We outline the current impediments to realizing high-accuracy absolute isotope ratio measurements using laser spectroscopic methods, and propose solutions and the way forward. Acknowledgement Parts of this work have been carried out within the European Metrology Research Programme (EMRP) ENV52 project-HIGHGAS. The EMRP is jointly funded by the EMRP participating countries within EURAMET and the European Union. References [1] B. Kühnreich, S. Wagner, J. C. Habig, O. Möhler, H. Saathoff, V. Ebert, Appl. Phys. B 119:177-187 (2015). [2] E. Kerstel, L. Gianfrani, Appl. Phys. B 92, 439-449 (2008).
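
    For context on how ratios are conventionally reported against a reference material, here is a small Python sketch converting an absolute 13C/12C ratio to δ13C; the VPDB reference ratio used is the commonly quoted literature value, and the sample ratio is invented.

        # delta-13C (per mil) from an absolute isotope ratio, relative to a reference ratio.
        R_VPDB = 0.0111802          # commonly quoted 13C/12C of the VPDB reference

        def delta13c_permil(r_sample, r_ref=R_VPDB):
            return (r_sample / r_ref - 1.0) * 1000.0

        # Invented absolute ratio for an atmospheric CO2 sample:
        d13c = delta13c_permil(0.011070)    # roughly -9.9 per mil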

  17. Determination of the anaerobic threshold in the pre-operative assessment clinic: inter-observer measurement error.

    PubMed

    Sinclair, R C F; Danjoux, G R; Goodridge, V; Batterham, A M

    2009-11-01

    The variability between observers in the interpretation of cardiopulmonary exercise tests may impact upon clinical decision making and affect the risk stratification and peri-operative management of a patient. The purpose of this study was to quantify the inter-reader variability in the determination of the anaerobic threshold (V-slope method). A series of 21 cardiopulmonary exercise tests from patients attending a surgical pre-operative assessment clinic was read independently by nine experienced clinicians regularly involved in clinical decision making. The grand mean for the anaerobic threshold was 10.5 ml O2·kg body mass-1·min-1. The technical error of measurement was 8.1% (circa 0.9 ml·kg-1·min-1; 90% confidence interval, 7.4-8.9%). The mean absolute difference between readers was 4.5%, with a typical random error of 6.5% (6.0-7.2%). We conclude that the inter-observer variability for experienced clinicians determining the anaerobic threshold from cardiopulmonary exercise tests is acceptable.
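
    The abstract does not spell out how the technical error of measurement was computed; the Python sketch below uses one common multi-observer definition, and the readings are invented, not the study data.

        import numpy as np

        def technical_error_of_measurement(x):
            """Multi-rater TEM. x has shape (n_subjects, k_raters), one anaerobic-threshold
            reading per rater per test (ml O2 kg-1 min-1). Returns TEM in the measurement
            units and as a percentage of the grand mean."""
            n, k = x.shape
            tem = np.sqrt((np.sum(x**2) - np.sum(x.sum(axis=1)**2) / k) / (n * (k - 1)))
            pct_tem = 100.0 * tem / x.mean()
            return tem, pct_tem

        # Hypothetical readings for 3 tests scored by 3 readers:
        x = np.array([[10.2, 10.8, 10.5],
                      [ 9.6,  9.9, 10.1],
                      [11.3, 11.0, 11.6]])
        tem, pct = technical_error_of_measurement(x)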

  18. Effect of Absolute From Hibiscus syriacus L. Flower on Wound Healing in Keratinocytes

    PubMed Central

    Yoon, Seok Won; Lee, Kang Pa; Kim, Do-Yoon; Hwang, Dae Il; Won, Kyung-Jong; Lee, Dae Won; Lee, Hwan Myung

    2017-01-01

    Background: Proliferation and migration of keratinocytes are essential for the repair of cutaneous wounds. Hibiscus syriacus L. has been used in Asian medicine; however, research on keratinocytes is inadequate. Objective: To establish the dermatological properties of absolute from Hibiscus syriacus L. flower (HSF) and to provide fundamental research for alternative medicine. Materials and Methods: We identified the composition of HSF absolute using gas chromatography-mass spectrometry analysis. We also examined the effect of HSF absolute in HaCaT cells using the XTT assay, Boyden chamber assay, sprout-out growth assay, and western blotting. We conducted an in-vivo wound healing assay in rat tail-skin. Results: Ten major active compounds were identified from HSF absolute. As determined by the XTT assay, Boyden chamber assay, and sprout-out growth assay results, HSF absolute exhibited effects similar to those of epidermal growth factor on the proliferation and migration patterns of keratinocytes (HaCaT cells), which were significantly increased after HSF absolute treatment. The expression levels of the phosphorylated signaling proteins relevant to proliferation, including extracellular signal-regulated kinase 1/2 (Erk 1/2) and Akt, were also determined by western blot analysis. Conclusion: These results of our in-vitro and ex-vivo studies indicate that HSF absolute induced cell growth and migration of HaCaT cells by phosphorylating both Erk 1/2 and Akt. Moreover, we confirmed the wound-healing effect of HSF on injury of the rat tail-skin. Therefore, our results suggest that HSF absolute is promising for use in cosmetics and alternative medicine. SUMMARY Hibiscus syriacus L. flower absolute increases HaCaT cell migration and proliferation. Hibiscus syriacus L. flower absolute regulates phosphorylation of ERK 1/2 and Akt in HaCaT cells. Treatment with Hibiscus syriacus L. flower induced sprout outgrowth. The wound in the tail-skin of the rat was reduced by Hibiscus syriacus

  19. Effect of Absolute From Hibiscus syriacus L. Flower on Wound Healing in Keratinocytes.

    PubMed

    Yoon, Seok Won; Lee, Kang Pa; Kim, Do-Yoon; Hwang, Dae Il; Won, Kyung-Jong; Lee, Dae Won; Lee, Hwan Myung

    2017-01-01

    Proliferation and migration of keratinocytes are essential for the repair of cutaneous wounds. Hibiscus syriacus L. has been used in Asian medicine; however, research on keratinocytes is inadequate. To establish the dermatological properties of absolute from Hibiscus syriacus L. flower (HSF) and to provide fundamental research for alternative medicine. We identified the composition of HSF absolute using gas chromatography-mass spectrometry analysis. We also examined the effect of HSF absolute in HaCaT cells using the XTT assay, Boyden chamber assay, sprout-out growth assay, and western blotting. We conducted an in-vivo wound healing assay in rat tail-skin. Ten major active compounds were identified from HSF absolute. As determined by the XTT assay, Boyden chamber assay, and sprout-out growth assay results, HSF absolute exhibited effects similar to those of epidermal growth factor on the proliferation and migration patterns of keratinocytes (HaCaT cells), which were significantly increased after HSF absolute treatment. The expression levels of the phosphorylated signaling proteins relevant to proliferation, including extracellular signal-regulated kinase 1/2 (Erk 1/2) and Akt, were also determined by western blot analysis. These results of our in-vitro and ex-vivo studies indicate that HSF absolute induced cell growth and migration of HaCaT cells by phosphorylating both Erk 1/2 and Akt. Moreover, we confirmed the wound-healing effect of HSF on injury of the rat tail-skin. Therefore, our results suggest that HSF absolute is promising for use in cosmetics and alternative medicine. Hibiscus syriacus L. flower absolute increases HaCaT cell migration and proliferation. Hibiscus syriacus L. flower absolute regulates phosphorylation of ERK 1/2 and Akt in HaCaT cells. Treatment with Hibiscus syriacus L. flower induced sprout outgrowth. The wound in the tail-skin of the rat was reduced by Hibiscus syriacus L. flower absolute. Abbreviations used: HSF: Hibiscus syriacus L. flower

  20. 7 CFR 987.44 - Free and restricted percentages.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... restricted percentages. (a) Whenever the Committee finds that the available supply of marketable dates of applicable grade and size available to supply the trade demand for free dates of any variety is likely to be...