Science.gov

Sample records for accuracy precision reproducibility

  1. Community-based Approaches to Improving Accuracy, Precision, and Reproducibility in U-Pb and U-Th Geochronology

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Condon, D. J.; Bowring, S. A.; Schoene, B.; Dutton, A.; Rubin, K. H.

    2015-12-01

    The last two decades have seen a grassroots effort by the international geochronology community to "calibrate Earth history through teamwork and cooperation," both as part of the EARTHTIME initiative and through several daughter projects with similar goals. Its mission originally challenged laboratories "to produce temporal constraints with uncertainties approaching 0.1% of the radioisotopic ages," but EARTHTIME has since exceeded its charge in many ways. Both the U-Pb and Ar-Ar chronometers first considered for high-precision timescale calibration now regularly produce dates at the sub-per-mil level thanks to instrumentation, laboratory, and software advances. At the same time, new isotope systems, including U-Th dating of carbonates, have developed comparable precision. But the larger, inter-related scientific challenges envisioned at EARTHTIME's inception remain - for instance, precisely calibrating the global geologic timescale, estimating rates of change around major climatic perturbations, and understanding evolutionary rates through time - and increasingly require that data from multiple geochronometers be combined. To solve these problems, the next two decades of uranium-daughter geochronology will require further advances in accuracy, precision, and reproducibility. The U-Th system has much in common with U-Pb, in that both parent and daughter isotopes are solids that can easily be weighed and dissolved in acid, and have well-characterized reference materials certified for isotopic composition and/or purity. For U-Pb, improving lab-to-lab reproducibility has entailed dissolving precisely weighed U and Pb metals of known purity and isotopic composition together to make gravimetric solutions, then using these to calibrate widely distributed tracers composed of artificial U and Pb isotopes. To mimic laboratory measurements, naturally occurring U and Pb isotopes were also mixed in proportions corresponding to samples of three different ages, to be run as internal

  2. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and a secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories vary with the applied method of reconstruction, the (true) angle of incidence, the properties of the target material, and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF, and sheet metal. The results show that in most situations the best performance (accuracy and precision) is obtained with the probing method. Only at the lowest angles of incidence did the ellipse or lead-in method perform better. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a measure of precision by means of a confidence interval for the specific measurement. PMID:27044032
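
    The ellipse method mentioned above rests on a simple geometric relation: for an idealized elliptical defect, the sine of the angle of incidence equals the ratio of the defect's width to its length. A minimal sketch of that relation (the function name and the idealized-ellipse assumption are ours, not the paper's):

```python
import math

def ellipse_method_angle(width_mm: float, length_mm: float) -> float:
    """Estimate the angle of incidence (degrees, measured from the target
    surface) from the dimensions of an elliptical bullet defect.

    Uses the textbook relation sin(angle) = width / length. Real defects
    deviate from a perfect ellipse, which is one source of the systematic
    errors the study quantifies.
    """
    if not 0 < width_mm <= length_mm:
        raise ValueError("width must be positive and no larger than length")
    return math.degrees(math.asin(width_mm / length_mm))

# A defect 10 mm wide and 20 mm long suggests a 30 degree angle of incidence:
print(round(ellipse_method_angle(10.0, 20.0), 1))  # 30.0
```

    A perpendicular shot (width equal to length) gives 90 degrees, and the estimate degrades at shallow angles, consistent with the study's finding that the probing method performs better there.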

  3. A precision translation stage for reproducing measured target volume motions.

    PubMed

    Litzenberg, Dale W; Hadley, Scott W; Lam, Kwok L; Balter, James M

    2007-01-01

    The development of 4D imaging, treatment planning, and treatment delivery methods for radiation therapy requires a high-precision translation stage for testing and validation. These technologies may require spatial resolutions of 1 mm and temporal resolutions of 2-30 Hz for CT imaging, electromagnetic tracking, and fluoroscopic imaging. A 1D programmable translation stage capable of reproducing idealized and measured anatomic motions common to the thorax has been designed and built to meet these spatial and temporal resolution requirements with phantoms weighing up to 27 kg. The stage consists of a polycarbonate base and table, driven by an AC servo motor with encoder feedback by means of a belt-coupled precision screw. Complex motions are possible through a programmable motion controller that can run multiple independent control and monitoring programs concurrently. Programmable input and output ports allow motion to be synchronized with beam delivery and other imaging and treatment delivery devices to within 2.0 ms. Average deviations from the programmed positions are typically 0.2 mm or less, while maximum positional errors are typically 0.5 mm, both for an indefinite number of idealized breathing motion cycles and while reproducing measured target volume motions for several minutes. PMID:17712294

  4. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J. Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients

  5. Accuracy and precision of an IGRT solution.

    PubMed

    Webster, Gareth J; Rowbottom, Carl G; Mackay, Ranald I

    2009-01-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients

  6. Precision standoff guidance antenna accuracy evaluation

    NASA Astrophysics Data System (ADS)

    Irons, F. H.; Landesberg, M. M.

    1981-02-01

    This report presents a summary of work done to determine the inherent angular accuracy achievable with the guidance and control precision standoff guidance antenna. The antenna is a critical element in the anti-jam single station guidance program since its characteristics can limit the intrinsic location guidance accuracy. It was important to determine the extent to which high ratio beamsplitting results could be achieved repeatedly and what issues were involved with calibrating the antenna. The antenna accuracy has been found to be on the order of 0.006 deg. through the use of a straightforward lookup table concept. This corresponds to a cross range error of 21 m at a range of 200 km. This figure includes both pointing errors and off-axis estimation errors. It was found that the antenna off-boresight calibration is adequately represented by a straight line for each position plus a lookup table for pointing errors relative to broadside. In the event recalibration is required, it was found that only 1% of the model would need to be corrected.

  7. T1-mapping in the heart: accuracy and precision

    PubMed Central

    2014-01-01

    The longitudinal relaxation time constant (T1) of the myocardium is altered in various disease states due to increased water content or other changes to the local molecular environment. Changes in both native T1 and T1 following administration of gadolinium (Gd) based contrast agents are considered important biomarkers, and multiple methods have been suggested for quantifying myocardial T1 in vivo. Characterization of the native T1 of myocardial tissue may be used to detect and assess various cardiomyopathies, while measurement of T1 with extracellular Gd based contrast agents provides additional information about the extracellular volume (ECV) fraction. The latter is particularly valuable for more diffuse diseases that are more challenging to detect using conventional late gadolinium enhancement (LGE). Both T1 and ECV measures have been shown to have important prognostic significance. T1-mapping has the potential to detect and quantify diffuse fibrosis at an early stage provided that the measurements have adequate reproducibility. Inversion recovery methods such as MOLLI have excellent precision and are highly reproducible when using tightly controlled protocols. The MOLLI method is widely available and is relatively mature. The accuracy of inversion recovery techniques is affected significantly by magnetization transfer (MT). Despite this, the estimate of apparent T1 using inversion recovery is a sensitive measure, which has been demonstrated to be a useful tool in characterizing tissue and discriminating disease. Saturation recovery methods have the potential to provide a more accurate measurement of T1 that is less sensitive to MT as well as other factors. Saturation recovery techniques are, however, noisier and somewhat more artifact prone, and have not demonstrated the same level of reproducibility to date. This review article focuses on the technical aspects of key T1-mapping methods and imaging protocols and describes their limitations including

  8. Precision and accuracy in the reproduction of simple tone sequences.

    PubMed

    Vos, P G; Ellermann, H H

    1989-02-01

    In four experiments we investigated the precision and accuracy with which amateur musicians are able to reproduce sequences of tones varied only temporally, so as to have tone and rest durations constant over sequences, and the tempo varied over the musically meaningful range of 5-0.5 tones per second. Experiments 1 and 2 supported the hypothesis of attentional bias toward having the attack moments, rather than the departure moments, precisely timed. Experiment 3 corroborated the hypothesis that inaccurate timing of short interattack intervals is manifested in a lengthening of rests, rather than tones, as a result of greater motor activity during the reproduction of rests. Experiment 4 gave some support to the hypothesis that the shortening of long interattack intervals is due to mnemonic constraints affecting the rests rather than the tones. Both theoretical and practical consequences of the various findings, particularly with respect to timing in musical performance, are discussed. PMID:2522528

  9. Accuracy and reproducibility of bending stiffness measurements by mechanical response tissue analysis in artificial human ulnas.

    PubMed

    Arnold, Patricia A; Ellerbrock, Emily R; Bowman, Lyn; Loucks, Anne B

    2014-11-01

    Osteoporosis is characterized by reduced bone strength, but no FDA-approved medical device measures bone strength. Bone strength is strongly associated with bone stiffness, but no FDA-approved medical device measures bone stiffness either. Mechanical Response Tissue Analysis (MRTA) is a non-significant risk, non-invasive, radiation-free, vibration analysis technique for making immediate, direct functional measurements of the bending stiffness of long bones in humans in vivo. MRTA has been used for research purposes for more than 20 years, but little has been published about its accuracy. To begin to investigate its accuracy, we compared MRTA measurements of bending stiffness in 39 artificial human ulna bones to measurements made by Quasistatic Mechanical Testing (QMT). In the process, we also quantified the reproducibility (i.e., precision and repeatability) of both methods. MRTA precision (1.0 ± 1.0%) and repeatability (3.1 ± 3.1%) were not as high as those of QMT (0.2 ± 0.2% and 1.3 ± 1.7%, respectively; both p < 10⁻⁴). The relationship between MRTA and QMT measurements of ulna bending stiffness was indistinguishable from the identity line (p=0.44) and paired measurements by the two methods agreed within a 95% confidence interval of ±5%. If such accuracy can be achieved on real human ulnas in situ, and if the ulna is representative of the appendicular skeleton, MRTA may prove clinically useful. PMID:25261885
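
    Precision figures like those quoted above are conventionally reported as coefficients of variation over repeated measurements of the same specimen. A sketch of that computation, assuming a plain CV definition (the paper's exact formulas may differ):

```python
import statistics

def precision_cv_percent(repeated_measurements):
    """Precision as a coefficient of variation (%): sample standard
    deviation divided by the mean of repeated measurements of one
    specimen. Lower values mean higher precision."""
    m = statistics.mean(repeated_measurements)
    return 100 * statistics.stdev(repeated_measurements) / m

# Five hypothetical repeated stiffness readings (N·m/rad) of one ulna;
# the numbers are illustrative, not data from the study.
print(round(precision_cv_percent([152.1, 151.8, 153.0, 152.4, 151.9]), 2))
```

    Repeatability is computed the same way but over measurements that include re-setup of the specimen, which is why it is the larger of the two figures in both methods above.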

  10. Scatterometry measurement precision and accuracy below 70 nm

    NASA Astrophysics Data System (ADS)

    Sendelbach, Matthew; Archie, Charles N.

    2003-05-01

    Scatterometry is a contender for various measurement applications where structure widths and heights can be significantly smaller than 70 nm within one or two ITRS generations. For example, feedforward process control in post-lithography transistor gate formation is being actively pursued by a number of RIE tool manufacturers. Several commercial forms of scatterometry are available or under development which promise to provide satisfactory performance in this regime. Scatterometry, as commercially practiced today, involves analyzing the zeroth order reflected light from a grating of lines. Normal incidence spectroscopic reflectometry, 2-theta fixed-wavelength ellipsometry, and spectroscopic ellipsometry are among the optical techniques, while library-based spectra matching and real-time regression are among the analysis techniques. All these commercial forms will find accurate and precise measurement a challenge when the material constituting the critical structure approaches a very small volume. Equally challenging is executing an evaluation methodology that first determines the true properties (critical dimensions and materials) of semiconductor wafer artifacts and then compares the measurement performance of several scatterometers. How well do scatterometers track process-induced changes in bottom CD and sidewall profile? This paper introduces a general 3D metrology assessment methodology and reports upon work involving sub-70 nm structures and several scatterometers. The methodology combines results from multiple metrologies (CD-SEM, CD-AFM, TEM, and XSEM) to form a Reference Measurement System (RMS). The methodology determines how well the scatterometry measurement tracks critical structure changes even in the presence of other noncritical changes that take place at the same time; these are key components of accuracy. Because the assessment rewards scatterometers that measure with good precision (reproducibility) and good accuracy, the most precise

  11. Accuracy and reproducibility of tumor positioning during prolonged and multi-modality animal imaging studies

    NASA Astrophysics Data System (ADS)

    Zhang, Mutian; Huang, Minming; Le, Carl; Zanzonico, Pat B.; Claus, Filip; Kolbert, Katherine S.; Martin, Kyle; Ling, C. Clifton; Koutcher, Jason A.; Humm, John L.

    2008-10-01

    Dedicated small-animal imaging devices, e.g. positron emission tomography (PET), computed tomography (CT) and magnetic resonance imaging (MRI) scanners, are being increasingly used for translational molecular imaging studies. The objective of this work was to determine the positional accuracy and precision with which tumors in situ can be reliably and reproducibly imaged on dedicated small-animal imaging equipment. We designed, fabricated and tested a custom rodent cradle with a stereotactic template to facilitate registration among image sets. To quantify tumor motion during our small-animal imaging protocols, 'gold standard' multi-modality point markers were inserted into tumor masses on the hind limbs of rats. Three types of imaging examination were then performed with the animals continuously anesthetized and immobilized: (i) consecutive microPET and MR images of tumor xenografts in which the animals remained in the same scanner for a 2 h duration, (ii) multi-modality imaging studies in which the animals were transported between distant imaging devices and (iii) serial microPET scans in which the animals were repositioned in the same scanner for subsequent images. Our results showed that the animal tumor moved by less than 0.2-0.3 mm over a continuous 2 h microPET or MR imaging session. The process of transporting the animal between instruments introduced additional errors of ~0.2 mm. In serial animal imaging studies, positioning reproducibility within ~0.8 mm was obtained.

  12. Obtaining identical results with double precision global accuracy on different numbers of processors in parallel particle Monte Carlo simulations

    SciTech Connect

    Cleveland, Mathew A. Brunner, Thomas A.; Gentile, Nicholas A.; Keasler, Jeffrey A.

    2013-10-15

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo simulations, whether domain replicated or domain decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended and reduced precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
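
    The core problem and the integer-tally remedy can be sketched in a few lines: double precision sums depend on summation order, but rounding each contribution to a fixed number of digits and accumulating integers makes the total order-independent, at the cost of the discarded low-order digits. An illustrative sketch (the digit count and data are our assumptions, not the paper's):

```python
import random

def integer_tally_sum(values, digits=12):
    """Sum floats reproducibly: round each value to a fixed number of
    decimal digits and accumulate as integers. Integer addition is
    exact and associative, so the result is identical for any
    summation order, mimicking the integer-tally approach above."""
    scale = 10 ** digits
    total = sum(round(v * scale) for v in values)
    return total / scale

random.seed(0)
values = [random.uniform(-1e6, 1e6) for _ in range(10_000)]
shuffled = values[:]
random.shuffle(shuffled)

# Naive double-precision sums generally differ when the order changes,
# because floating-point addition is not associative...
print("naive order-dependent difference:", sum(values) - sum(shuffled))

# ...while the integer tally is bit-identical in any order.
assert integer_tally_sum(values) == integer_tally_sum(shuffled)
```

    The extended-precision alternatives the paper compares attack the same problem from the other side, keeping enough guard bits that the final rounded double is order-independent.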

  13. Precision and Accuracy Studies with Kajaani Fiber Length Analyzers

    NASA Astrophysics Data System (ADS)

    Copur, Yalcin; Makkonen, Hannu

    The aim of this study was to test the measurement precision and accuracy of the Kajaani FS-100, giving attention to possible machine error in the measurements. Fiber length of pine pulps produced using polysulfide, kraft, biokraft and soda methods was determined using both FS-100 and FiberLab automated fiber length analyzers. The measured length values were compared for both analyzers. The measurement precision and accuracy were tested by replicated measurements using rayon staple fibers. Measurements performed on pulp samples showed typical length distributions for both analyzers. Results obtained from Kajaani FS-100 and FiberLab showed a significant correlation. The shorter length measurement with FiberLab was found to be mainly due to the instrument calibration. The measurement repeatability tested for Kajaani FS-100 indicated that the measurements are precise.

  14. Precision and Accuracy Parameters in Structured Light 3-D Scanning

    NASA Astrophysics Data System (ADS)

    Eiríksson, E. R.; Wilm, J.; Pedersen, D. B.; Aanæs, H.

    2016-04-01

    Structured light systems are popular in part because they can be constructed from off-the-shelf low cost components. In this paper we quantitatively show how common design parameters affect precision and accuracy in such systems, supplying a much needed guide for practitioners. Our quantitative measure is the established VDI/VDE 2634 (Part 2) guideline using precision made calibration artifacts. Experiments are performed on our own structured light setup, consisting of two cameras and a projector. We place our focus on the influence of calibration design parameters, the calibration procedure and encoding strategy and present our findings. Finally, we compare our setup to a state of the art metrology grade commercial scanner. Our results show that comparable, and in some cases better, results can be obtained using the parameter settings determined in this study.

  15. The Plus or Minus Game - Teaching Estimation, Precision, and Accuracy

    NASA Astrophysics Data System (ADS)

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-03-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in TPT (Larry Weinstein's "Fermi Questions.") For several years the authors (a college physics professor, a retired algebra teacher, and a fifth-grade teacher) have been playing a game, primarily at home to challenge each other for fun, but also in the classroom as an educational tool. We call the game "The Plus or Minus Game." The game combines estimation with the principle of precision and uncertainty in a competitive and fun way.

  16. Fluorescence Axial Localization with Nanometer Accuracy and Precision

    SciTech Connect

    Li, Hui; Yen, Chi-Fu; Sivasankar, Sanjeevi

    2012-06-15

    We describe a new technique, standing wave axial nanometry (SWAN), to image the axial location of a single nanoscale fluorescent object with sub-nanometer accuracy and 3.7 nm precision. A standing wave, generated by positioning an atomic force microscope tip over a focused laser beam, is used to excite fluorescence; axial position is determined from the phase of the emission intensity. We use SWAN to measure the orientation of single DNA molecules of different lengths, grafted on surfaces with different functionalities.

  17. Accuracy, Precision, and Resolution in Strain Measurements on Diffraction Instruments

    NASA Astrophysics Data System (ADS)

    Polvino, Sean M.

    Diffraction stress analysis is a commonly used technique to evaluate the properties and performance of different classes of materials, from engineering materials such as steels and alloys to electronic materials like silicon chips. To better understand their performance at operating conditions, these materials are also commonly subjected to elevated temperatures and different loading conditions. The validity of any measurement under these conditions is only as good as the control of the conditions and the accuracy and precision of the instrument being used to measure the properties. What is the accuracy and precision of a typical diffraction system, and what is the best way to evaluate these quantities? Is there a way to remove systematic and random errors in the data that are due to problems with the control system used? With the advent of device engineering that employs internal stress to increase performance, the measurement of stress in microelectronic structures has become increasingly important. X-ray diffraction provides an ideal method for measuring these small areas without the need for modifying the sample and possibly changing the strain state. Micro and nano diffraction experiments on Silicon-on-Insulator samples revealed changes to the material under investigation and raised significant concerns about the usefulness of these techniques. This damage process and the application of micro and nano diffraction are discussed.

  18. Assessing the Accuracy of the Precise Point Positioning Technique

    NASA Astrophysics Data System (ADS)

    Bisnath, S. B.; Collins, P.; Seepersad, G.

    2012-12-01

    The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years to become a standard method for growing categories of positioning and navigation applications. The technique relies on single receiver point positioning combined with the use of precise satellite orbit and clock information and high-fidelity error modelling. The research presented here uniquely addresses the current accuracy of the technique, explains the limits of performance, and defines paths to improvements. For geodetic purposes, performance refers to daily static position accuracy. PPP processing of over 80 IGS stations over one week results in RMS positioning errors of a few millimetres in the north and east components and a few centimetres in the vertical (all one-sigma values). Larger error statistics for real-time and kinematic processing are also given. GPS PPP with ambiguity resolution processing is also carried out, producing slight improvements over the float solution results. These results are categorised into quality classes in order to analyse the root error causes of the resultant accuracies: "best", "worst", multipath, site displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is the solution convergence period. Static, conventional solutions are slow to converge, with approximately 35 minutes required for 95% of solutions to reach 20 cm or better horizontal accuracy. Ambiguity resolution can significantly reduce this period without biasing solutions. The definition of a PPP error budget is a complex task even with the resulting numerical assessment because, unlike the epoch-by-epoch processing in the Standard Positioning Service, PPP processing involves filtering. An attempt is made here to 1) define the magnitude of each error source in terms of range, 2) transform ranging error to position error via Dilution Of Precision (DOP), and 3) scale the DOP through the filtering process. The result is a deeper
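
    Steps 1) and 2) of the error-budget procedure outlined above can be sketched numerically: independent one-sigma ranging errors combine in quadrature to a user-equivalent range error (UERE), which DOP then scales to a position error. The individual error magnitudes and the DOP value below are illustrative assumptions, not figures from the abstract:

```python
import math

# Hypothetical one-sigma ranging errors (metres) for a PPP-style budget.
range_errors_m = {
    "satellite orbit": 0.025,
    "satellite clock": 0.02,
    "residual ionosphere": 0.05,
    "residual troposphere": 0.05,
    "multipath": 0.10,
    "receiver noise": 0.05,
}

# Assumed independent, so the sources combine in quadrature to a UERE.
uere = math.sqrt(sum(e**2 for e in range_errors_m.values()))

# Dilution of Precision maps ranging error to position error.
pdop = 1.8  # a typical good-geometry value (assumed)
position_error = uere * pdop
print(f"UERE = {uere:.3f} m, 3D position error ~ {position_error:.3f} m")
```

    Step 3), propagating this through the PPP filter, has no such closed form, which is why the abstract calls the full error budget a complex task.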

  19. Precision and accuracy of 3D lower extremity residua measurement systems

    NASA Astrophysics Data System (ADS)

    Commean, Paul K.; Smith, Kirk E.; Vannier, Michael W.; Hildebolt, Charles F.; Pilgram, Thomas K.

    1996-04-01

    Accurate and reproducible geometric measurement of lower extremity residua is required for custom prosthetic socket design. We compared spiral x-ray computed tomography (SXCT) and 3D optical surface scanning (OSS) with caliper measurements and evaluated the precision and accuracy of each system. Spiral volumetric CT scanned surface and subsurface information was used to make external and internal measurements, and finite element models (FEMs). SXCT and OSS were used to measure lower limb residuum geometry of 13 below knee (BK) adult amputees. Six markers were placed on each subject's BK residuum and corresponding plaster casts and distance measurements were taken to determine precision and accuracy for each system. Solid models were created from spiral CT scan data sets with the prosthesis in situ under different loads using p-version finite element analysis (FEA). Tissue properties of the residuum were estimated iteratively and compared with values taken from the biomechanics literature. The OSS and SXCT measurements were precise within 1% in vivo and 0.5% on plaster casts, and accuracy was within 3.5% in vivo and 1% on plaster casts compared with caliper measures. Three-dimensional optical surface and SXCT imaging systems are feasible for capturing the comprehensive 3D surface geometry of BK residua, and provide distance measurements statistically equivalent to calipers. In addition, SXCT can readily distinguish internal soft tissue and bony structure of the residuum. FEM can be applied to determine tissue material properties interactively using inverse methods.

  20. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS--1988

    EPA Science Inventory

    Precision and accuracy data obtained from state and local agencies (SLAMS) during 1988 are analyzed. Pooled site variances and average biases, which are relevant quantities to both precision and accuracy determinations, are statistically compared within and between states to assess ...

  1. Influence of electrode positioning on accuracy and reproducibility of electrical velocimetry cardiac output measurements.

    PubMed

    Trinkmann, Frederik; Berger, Manuel; Michels, Julia D; Doesch, Christina; Weiss, Christel; Schoenberg, Stefan O; Akin, Ibrahim; Borggrefe, Martin; Papavassiliu, Theano; Saur, Joachim

    2016-09-01

    Electrical velocimetry (EV) is one of the most recent adaptions of impedance cardiography. Previous studies yielded diverging results, identifying several factors that negatively influence accuracy. Although electrode arrangement is suspected to be an influencing factor for impedance cardiography in general, no data for EV is available. We aimed to prospectively assess the influence of electrode position on the accuracy and reproducibility of cardiac output (CO) measurements obtained by EV. Two pairs of standard electrocardiographic electrodes were placed at predefined positions of the thorax in 81 patients. The inter-electrode gap was varied between either 5 or 15 cm by caudal movement of the lowest electrode. Measurements were averaged over 20 s and performed twice at each electrode position. Reference values were determined using cardiac magnetic resonance imaging (CMR). Mean bias was 1.2 ± 1.6 l min⁻¹ (percentage error 22 ± 28%) between COCMR and COEV at the 5 cm gap, significantly improving to 0.5 ± 1.6 l min⁻¹ (8 ± 28%) when the gap was increased (p < 0.0001). The mean difference between repeated measurements was 0.0 ± 0.3 l min⁻¹ for the 5 cm and 0.1 ± 0.3 l min⁻¹ for the 15 cm gap, respectively (p = 0.3). The accuracy of EV improved significantly when the lower inter-electrode gap was increased, although it still exceeded the Critchley and Critchley recommendations. Therefore, absolute values should not be used interchangeably in clinical routine. As the reproducibility was not negatively affected, serial hemodynamic measurements can be reliably acquired in stable patients when the electrode position remains unchanged. PMID:27480359
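
    The percentage error referred to above is conventionally computed with the Critchley and Critchley formula: 1.96 times the standard deviation of the bias, divided by the mean reference cardiac output, with roughly 30% as the usual acceptance threshold. A minimal sketch (the example numbers are ours, not the study's):

```python
def percentage_error(sd_of_bias, mean_reference_co):
    """Critchley and Critchley percentage error for a cardiac output
    method comparison: 1.96 * SD of the bias over the mean reference
    CO, expressed in percent. Values under ~30% are conventionally
    considered clinically acceptable."""
    return 100 * 1.96 * sd_of_bias / mean_reference_co

# Illustrative: an SD of bias of 0.8 l/min against a mean CO of 5.0 l/min
# gives a percentage error just above the 30% threshold.
print(round(percentage_error(0.8, 5.0), 1))  # 31.4
```

    The criterion compares the new method's scatter against the reference method's own expected error, which is why a method can show a small mean bias yet still fail it.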

  2. High-Reproducibility and High-Accuracy Method for Automated Topic Classification

    NASA Astrophysics Data System (ADS)

    Lancichinetti, Andrea; Sirer, M. Irmak; Wang, Jane X.; Acuna, Daniel; Körding, Konrad; Amaral, Luís A. Nunes

    2015-01-01

    Much of human knowledge sits in large databases of unstructured text. Leveraging this knowledge requires algorithms that extract and record metadata on unstructured text documents. Assigning topics to documents will enable intelligent searching, statistical characterization, and meaningful classification. Latent Dirichlet allocation (LDA) is the state of the art in topic modeling. Here, we perform a systematic theoretical and numerical analysis that demonstrates that current optimization techniques for LDA often yield results that are not accurate in inferring the most suitable model parameters. Adapting approaches from community detection in networks, we propose a new algorithm that displays high reproducibility and high accuracy and also has high computational efficiency. We apply it to a large set of documents in the English Wikipedia and reveal its hierarchical structure.

  3. Accuracy and precision of alternative estimators of ectoparasiticide efficacy.

    PubMed

    Schall, Robert; Burger, Divan A; Luus, Herman G

    2016-06-15

    While there is consensus that the efficacy of parasiticides is properly assessed using the Abbott formula, there is as yet no general consensus on the use of arithmetic versus geometric mean numbers of surviving parasites in the formula. The purpose of this paper is to investigate the accuracy and precision of various efficacy estimators based on the Abbott formula which alternatively use arithmetic mean, geometric mean and median numbers of surviving parasites; we also consider a maximum likelihood estimator. Our study shows that the best estimators using geometric means are competitive, with respect to root mean square error, with the conventional Abbott estimator using arithmetic means, as they have lower average and lower median root mean square error over the parameter scenarios which we investigated. However, our study confirms that Abbott estimators using geometric means are potentially biased upwards, and this upward bias is substantial in particular when the test product has substandard efficacy (90% and below). For this reason, we recommend that the Abbott estimator be calculated using arithmetic means. PMID:27198777
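    The Abbott formula compared here computes efficacy as 100 × (1 − mean surviving parasites on treatment / mean on control), with the choice of mean left open. A sketch with invented counts; the +1 offset in the geometric mean is a common convention for handling zero counts, not necessarily the authors' choice:

```python
import math
import statistics

def abbott(control_counts, treated_counts, mean_fn):
    """Abbott efficacy: 100 * (1 - mean(treated) / mean(control))."""
    return 100.0 * (1.0 - mean_fn(treated_counts) / mean_fn(control_counts))

def arithmetic_mean(xs):
    return statistics.mean(xs)

def geometric_mean(xs):
    # Add 1 before the log so zero counts are defined, subtract 1 after
    # (an illustrative convention, not taken from the paper).
    return math.exp(statistics.mean(math.log(x + 1) for x in xs)) - 1

# Hypothetical surviving-parasite counts, for illustration only
control = [50, 60, 45, 55]
treated = [2, 0, 5, 1]
eff_arith = abbott(control, treated, arithmetic_mean)
eff_geom = abbott(control, treated, geometric_mean)
```

    With these toy counts the geometric-mean estimate comes out higher than the arithmetic-mean estimate, illustrating the direction of the upward bias the abstract describes.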

  4. Study of highly precise outdoor characterization technique for photovoltaic modules in terms of reproducibility

    NASA Astrophysics Data System (ADS)

    Fukabori, Akihiro; Takenouchi, Takakazu; Matsuda, Youji; Tsuno, Yuki; Hishikawa, Yoshihiro

    2015-08-01

    In this study, novel outdoor measurements were conducted for highly precise characterization of photovoltaic (PV) modules by measuring current-voltage (I-V) curves at fast sweep speeds together with the module temperature, using a PV sensor as reference. Fast sweep speeds suppressed the effect of irradiance variation. As a result, smooth I-V curves were obtained and the deviation of the PV parameters was suppressed. The module temperature was measured by attaching resistive temperature detector sensors to the module backsheet. The PV sensor was measured synchronously with the PV module. The PV parameters, including Isc, Pmax, Voc, and FF, were estimated after correcting the I-V curves according to the IEC standards. The reproducibility of Isc, Pmax, Voc, and FF relative to the outdoor fits was evaluated as 0.43, 0.58, 0.24, and 0.23%, respectively. The results demonstrate that highly precise measurements are possible using a PV measurement system with the three above-mentioned features.

  5. A comprehensive investigation of the accuracy and reproducibility of a multitarget single isocenter VMAT radiosurgery technique

    SciTech Connect

    Thomas, Andrew; Niebanck, Michael; Juang, Titania; Wang, Zhiheng; Oldham, Mark

    2013-12-15

    matched the treatment plan, demonstrating high accuracy and reproducibility of both the treatment machine and the IGRT procedure. The complexity of the treatment (multiple arcs) and dosimetry (multiple strong gradients) pose a substantial challenge for comprehensive verification. 3D dosimetry can be uniquely effective in this scenario.

  6. A comprehensive investigation of the accuracy and reproducibility of a multitarget single isocenter VMAT radiosurgery technique

    PubMed Central

    Thomas, Andrew; Niebanck, Michael; Juang, Titania; Wang, Zhiheng; Oldham, Mark

    2013-01-01

    treatment plan, demonstrating high accuracy and reproducibility of both the treatment machine and the IGRT procedure. The complexity of the treatment (multiple arcs) and dosimetry (multiple strong gradients) pose a substantial challenge for comprehensive verification. 3D dosimetry can be uniquely effective in this scenario. PMID:24320511

  7. Improved DORIS accuracy for precise orbit determination and geodesy

    NASA Technical Reports Server (NTRS)

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

    In 2001 and 2002, three more DORIS satellites were launched. Since then, all DORIS results have improved significantly. For precise orbit determination, 20 cm accuracy is now available in real time with DIODE, and 1.5 to 2 cm in post-processing. For geodesy, 1 cm precision can now be achieved regularly every week, making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  8. Accuracy, reproducibility, and interpretation of fatty acid methyl ester profiles of model bacterial communities

    USGS Publications Warehouse

    Kidd, Haack S.; Garchow, H.; Odelson, D.A.; Forney, L.J.; Klug, M.J.

    1994-01-01

    We determined the accuracy and reproducibility of whole-community fatty acid methyl ester (FAME) analysis with two model bacterial communities differing in composition by using the Microbial ID, Inc. (MIDI), system. The biomass, taxonomic structure, and expected MIDI-FAME profiles under a variety of environmental conditions were known for these model communities a priori. Not all members of each community could be detected in the composite profile because of lack of fatty acid 'signatures' in some isolates or because of variations (approximately fivefold) in fatty acid yield across taxa. MIDI-FAME profiles of replicate subsamples of a given community were similar in terms of fatty acid yield per unit of community dry weight and relative proportions of specific fatty acids. Principal-components analysis (PCA) of MIDI-FAME profiles resulted in a clear separation of the two different communities and a clustering of replicates of each community from two separate experiments on the first PCA axis. The first PCA axis accounted for 57.1% of the variance in the data and was correlated with fatty acids that varied significantly between communities and reflected the underlying community taxonomic structure. On the basis of our data, community fatty acid profiles can be used to assess the relative similarities and differences of microbial communities that differ in taxonomic composition. However, detailed interpretation of community fatty acid profiles in terms of biomass or community taxonomic composition must be viewed with caution until our knowledge of the quantitative and qualitative distribution of fatty acids over a wide variety of taxa and the effects of growth conditions on fatty acid profiles is more extensive.
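    The kind of principal-components separation described above can be sketched with plain NumPy; the fatty-acid proportions below are invented for illustration, not the study's data:

```python
import numpy as np

# Toy FAME profiles: rows = replicate community samples,
# columns = relative proportions of four fatty acids (hypothetical values).
community_a = np.array([[0.40, 0.30, 0.20, 0.10],
                        [0.42, 0.28, 0.21, 0.09],
                        [0.39, 0.31, 0.19, 0.11]])
community_b = np.array([[0.10, 0.20, 0.30, 0.40],
                        [0.11, 0.19, 0.32, 0.38],
                        [0.09, 0.21, 0.29, 0.41]])
profiles = np.vstack([community_a, community_b])

# PCA via SVD of the mean-centred data matrix
centred = profiles - profiles.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt.T              # sample coordinates on the PCs
explained = s**2 / np.sum(s**2)      # fraction of variance per axis
```

    Because the between-community difference dominates the replicate scatter, the two communities separate cleanly on the first axis, which carries most of the variance.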

  9. Numerical planetary and lunar ephemerides - Present status, precision and accuracies

    NASA Technical Reports Server (NTRS)

    Standish, E. Myles, Jr.

    1986-01-01

    Features of the emphemeris creation process are described with attention given to the equations of motion, the numerical integration, and the least-squares fitting process. Observational data are presented and ephemeride accuracies are estimated. It is believed that radio measurements, VLBI, occultations, and the Space Telescope and Hipparcos will improve ephemerides in the near future. Limitations to accuracy are considered as well as relativity features. The export procedure, by which an outside user may obtain and use the JPL ephemerides, is discussed.

  10. S-193 scatterometer backscattering cross section precision/accuracy for Skylab 2 and 3 missions

    NASA Technical Reports Server (NTRS)

    Krishen, K.; Pounds, D. J.

    1975-01-01

    Procedures for measuring the precision and accuracy with which the S-193 scatterometer measured the background cross section of ground scenes are described. Homogeneous ground sites were selected, and data from Skylab missions were analyzed. The precision was expressed as the standard deviation of the scatterometer-acquired backscattering cross section. In special cases, inference of the precision of measurement was made by considering the total range from the maximum to minimum of the backscatter measurements within a data segment, rather than the standard deviation. For Skylab 2 and 3 missions a precision better than 1.5 dB is indicated. This procedure indicates an accuracy of better than 3 dB for the Skylab 2 and 3 missions. The estimates of precision and accuracy given in this report are for backscattering cross sections from -28 to 18 dB. Outside this range the precision and accuracy decrease significantly.

  11. The precision and accuracy of a portable heart rate monitor.

    PubMed

    Seaward, B L; Sleamaker, R H; McAuliffe, T; Clapp, J F

    1990-01-01

    A device that would comfortably and accurately measure exercise heart rate during field performance could be valuable for athletes, fitness participants, and investigators in the field of exercise physiology. Such a device, a portable telemeterized microprocessor, was compared with direct EKG measurements in a laboratory setting under several conditions to assess its accuracy. Twenty-four subjects were studied at rest and during light-, moderate-, high-, and maximal-intensity endurance activities (walking, running, aerobic dancing, and Nordic Track simulated cross-country skiing). Differences between values obtained by the two measuring devices were not statistically significant, with correlation coefficient (r) values ranging from 0.998 to 0.999. The two methods proved equally reliable for measuring heart rate in a host of varied aerobic activities at varying intensities. PMID:2306564

  12. Milling precision and fitting accuracy of Cerec Scan milled restorations.

    PubMed

    Arnetzl, G; Pongratz, D

    2005-10-01

    The milling accuracy of the Cerec Scan system was examined under standard practice conditions. For this purpose, one and the same 3D design similar to an inlay was milled 30 times from Vita Mark II ceramic blocks. Cylindrical diamond burs with 1.2 or 1.6 mm diameter were used. Each individual milled body was measured to within 0.1 µm at five defined sections with a coordinate measuring instrument from the Zeiss company. In the statistical evaluation, both the different diamond bur diameters and the extent of material removal from the ceramic blank were taken into consideration; sections with large substance removal and sections with low substance removal were defined. The standard deviation for the 1.6-mm burs was clearly greater than that for the 1.2-mm burs for the section with large substance removal. This difference was significant according to the Levene test for variance equality. In sections with low substance removal, no difference between the use of the 1.6-mm or 1.2-mm bur was shown. The measuring results ranged between 0.053 and 0.14 mm. The spacings of the distances with large substance removal were larger than those with low substance removal. The t-test for paired random samples showed that the distance with large substance removal when using the 1.6-mm bur was significantly larger than the distance with low substance removal. The difference was not significant for the small burs. Statistical analysis repeatedly showed that the use of the cylindrical diamond bur with 1.6-mm diameter led to greater inaccuracies than the use of the 1.2-mm cylindrical diamond bur, especially at sites with large material removal. PMID:16689028

  13. Accuracy of femoral templating in reproducing anatomical femoral offset in total hip replacement.

    PubMed

    Davies, H; Foote, J; Spencer, R F

    2007-01-01

    Restoration of hip biomechanics is a crucial component of successful total hip replacement. Preoperative templating is recommended to ensure that the size and orientation of implants is optimised. We studied how closely natural femoral offset could be reproduced using the manufacturers' templates for 10 femoral stems in common use in the UK. A series of 23 consecutive preoperative radiographs from patients who had undergone unilateral total hip replacement for unilateral osteoarthritis of the hip was employed. The change in offset between the templated position of the best-fitting template and the anatomical centre of the hip was measured. The templates were then ranked according to their ability to reproduce the normal anatomical offset. The most accurate was the CPS-Plus (Root Mean Square Error 2.0 mm) followed in rank order by: C stem (2.16), CPT (2.40), Exeter (3.23), Stanmore (3.28), Charnley (3.65), Corail (3.72), ABG II (4.30), Furlong HAC (5.08) and Furlong modular (7.14). A similar pattern of results was achieved when the standard error of variability of offset was analysed. We observed a wide variation in the ability of the femoral prosthesis templates to reproduce normal femoral offset. This variation was independent of the seniority of the observer. The templates of modern polished tapered stems with high modularity were best able to reproduce femoral offset. The current move towards digitisation of X-rays may offer manufacturers an opportunity to improve template designs in certain instances, and to develop appropriate computer software. PMID:19197861
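    The root mean square error used above to rank the templates is simply the square root of the mean squared offset difference. A sketch with hypothetical offset errors, not the study's measurements:

```python
import math

def rmse(errors):
    """Root mean square error of templated-minus-anatomical offsets (mm)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical per-patient offset errors (mm) for two invented templates
stem_x = [1.5, -2.0, 2.5, -1.0, 2.0]   # tighter template
stem_y = [4.0, -5.5, 3.0, -6.0, 5.0]   # looser template
```

    Ranking templates by `rmse` penalises both systematic bias and scatter, which is why it is a natural single-number summary here.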

  14. Analysis of factors affecting the accuracy, reproducibility, and interpretation of microbial community carbon source utilization patterns

    USGS Publications Warehouse

    Haack, S.K.; Garchow, H.; Klug, M.J.; Forney, L.J.

    1995-01-01

    We determined factors that affect responses of bacterial isolates and model bacterial communities to the 95 carbon substrates in Biolog microtiter plates. For isolates and communities of three to six bacterial strains, substrate oxidation rates were typically nonlinear and were delayed by dilution of the inoculum. When inoculum density was controlled, patterns of positive and negative responses exhibited by microbial communities to each of the carbon sources were reproducible. Rates and extents of substrate oxidation by the communities were also reproducible but were not simply the sum of those exhibited by community members when tested separately. Replicates of the same model community clustered when analyzed by principal-components analysis (PCA), and model communities with different compositions were clearly separated on the first PCA axis, which accounted for >60% of the dataset variation. PCA discrimination among different model communities depended on the extent to which specific substrates were oxidized. However, the substrates interpreted by PCA to be most significant in distinguishing the communities changed with reading time, reflecting the nonlinearity of substrate oxidation rates. Although whole-community substrate utilization profiles were reproducible signatures for a given community, the extent of oxidation of specific substrates and the numbers or activities of microorganisms using those substrates in a given community were not correlated. Replicate soil samples varied significantly in the rate and extent of oxidation of seven tested substrates, suggesting microscale heterogeneity in composition of the soil microbial community.

  15. Precision and Accuracy in Measurements: A Tale of Four Graduated Cylinders.

    ERIC Educational Resources Information Center

    Treptow, Richard S.

    1998-01-01

    Expands upon the concepts of precision and accuracy at a level suitable for general chemistry. Serves as a bridge to the more extensive treatments in analytical chemistry textbooks and the advanced literature on error analysis. Contains 22 references. (DDR)

  16. Expansion and dissemination of a standardized accuracy and precision assessment technique

    NASA Astrophysics Data System (ADS)

    Kwartowitz, David M.; Riti, Rachel E.; Holmes, David R., III

    2011-03-01

    The advent and development of new imaging techniques and image-guidance have had a major impact on surgical practice. These techniques attempt to allow the clinician to visualize not only what is currently visible, but also what is beneath the surface, or function. These systems are often based on tracking systems coupled with registration and visualization technologies. The accuracy and precision of the tracking systems are thus critical to the overall accuracy and precision of the image-guidance system. In this work the accuracy and precision of an Aurora tracking system are assessed, using the technique specified in "A novel technique for analysis of accuracy of magnetic tracking systems used in image guided surgery." This analysis demonstrated that accuracy is dependent on distance from the tracker's field generator, with an RMS value of 1.48 mm. The error has similar characteristics and values to those of the previous work, thus validating this method for tracker analysis.

  17. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    PubMed Central

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-01-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505

  18. Factors influencing accuracy and reproducibility of body resistance measurements by foot-to-foot impedancemeters.

    PubMed

    Bousbiat, Sana; Jaffrin, Michel; Assadi, Imen

    2015-01-01

    The electronics of a BodySignal V2 (Tefal, France) foot-to-foot impedancemeter (FFI) was modified to display the foot-to-foot resistance instead of body fat. This device was connected to electrodes of different sizes mounted on a podoscope permitting photographs of the subjects' foot soles and electrodes in order to calculate the contact area between feet and electrodes. The foot-to-foot resistance was found to decrease when the contact area of feet with current and voltage electrodes increased. It was also sensitive to feet displacement, and a backward move of 5 cm increased the mean resistance by 37 Ω. The resistance reproducibility was tested by asking the subject to repeat measurements 10 times by stepping up and down from the podoscope. The mean SD of these tests was 0.88% of mean resistance, but it fell to 0.47% when feet position was guided and to 0.29% with transverse voltage electrodes. For good reproducibility, it is important that voltage electrodes be small and that the scale design facilitates a correct position of heels on these electrodes. PMID:25365933
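    The reproducibility figures quoted above (SD as a percentage of the mean resistance) are coefficients of variation. A minimal sketch with invented resistance readings, for illustration only:

```python
import statistics

def cv_percent(values):
    """Sample SD expressed as a percentage of the sample mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical repeated foot-to-foot resistance readings (ohms)
free_position   = [520, 512, 528, 515, 525, 510, 530, 518, 522, 516]
guided_position = [520, 518, 522, 519, 521, 520, 523, 518, 521, 520]
cv_free = cv_percent(free_position)
cv_guided = cv_percent(guided_position)
```

    Guiding foot placement reduces the scatter between repeats, so its CV comes out smaller, matching the pattern the abstract reports.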

  19. S193 radiometer brightness temperature precision/accuracy for SL2 and SL3

    NASA Technical Reports Server (NTRS)

    Pounds, D. J.; Krishen, K.

    1975-01-01

    The precision and accuracy with which the S193 radiometer measured the brightness temperature of ground scenes is investigated. Estimates were derived from data collected during Skylab missions. Homogeneous ground sites were selected and S193 radiometer brightness temperature data analyzed. The precision was expressed as the standard deviation of the radiometer acquired brightness temperature. Precision was determined to be 2.40 K or better depending on mode and target temperature.

  20. Failure of the Woods-Saxon nuclear potential to simultaneously reproduce precise fusion and elastic scattering measurements

    SciTech Connect

    Mukherjee, A.; Hinde, D. J.; Dasgupta, M.; Newton, J. O.; Butt, R. D.; Hagino, K.

    2007-04-15

    A precise fusion excitation function has been measured for the ¹²C+²⁰⁸Pb reaction at energies around the barrier, allowing the fusion barrier distribution to be extracted. The fusion cross sections at high energies differ significantly from existing fusion data. Coupled reaction channels calculations have been carried out with the code FRESCO. A bare potential previously claimed to uniquely describe a wide range of ¹²C+²⁰⁸Pb near-barrier reaction channels failed to reproduce the new fusion data. The nuclear potential diffuseness of 0.95 fm which fits the fusion excitation function over a broad energy range fails to reproduce the elastic scattering. A diffuseness of 0.55 fm reproduces the fusion barrier distribution and elastic scattering data, but significantly overpredicts the fusion cross sections at high energies. This may be due to physical processes not included in the calculations. To constrain calculations, it is desirable to have precisely measured fusion cross sections, especially at energies around the barrier.
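    The Woods-Saxon form named in the title is V(r) = -V0 / (1 + exp((r - R0)/a)), where a is the diffuseness parameter the abstract discusses. A sketch with a hypothetical well depth and radius; only the diffuseness values 0.55 and 0.95 fm come from the abstract:

```python
import math

def woods_saxon(r, v0, r0, a):
    """Woods-Saxon nuclear potential V(r) = -V0 / (1 + exp((r - R0)/a)).
    v0 in MeV, r and r0 in fm, diffuseness a in fm."""
    return -v0 / (1.0 + math.exp((r - r0) / a))

# Hypothetical well depth and radius, for illustration only
V0, R0 = 50.0, 10.0   # MeV, fm
shallow = woods_saxon(12.0, V0, R0, 0.55)   # shorter-range tail
diffuse = woods_saxon(12.0, V0, R0, 0.95)   # longer-range tail
```

    A larger diffuseness makes the potential fall off more slowly beyond R0, which is why the two fits behave so differently in the surface region probed by fusion and elastic scattering.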

  1. Craniofacial skeletal measurements based on computed tomography: Part I. Accuracy and reproducibility.

    PubMed

    Waitzman, A A; Posnick, J C; Armstrong, D C; Pron, G E

    1992-03-01

    Computed tomography (CT) is a useful modality for the management of craniofacial anomalies. A study was undertaken to assess whether CT measurements of the upper craniofacial skeleton accurately represent the bony region imaged. Measurements taken directly from five dry skulls (approximate ages: adults, over 18 years; child, 4 years; infant, 6 months) were compared to those from axial CT scans of these skulls. Excellent agreement was found between the direct (dry skull) and indirect (CT) measurements. The effect of head tilt on the accuracy of these measurements was investigated. The error was within clinically acceptable limits (less than 5 percent) if the angle was no more than +/- 4 degrees from baseline (0 degrees). Objective standardized information gained from CT should complement the subjective clinical data usually collected for the treatment of craniofacial deformities. PMID:1571344

  2. Accuracy and reproducibility of a novel dynamic volume flow measurement method.

    PubMed

    Ricci, Stefano; Cinthio, Magnus; Ahlgren, Asa Rydén; Tortoli, Piero

    2013-10-01

    In clinical practice, blood volume flow (BVF) is typically calculated assuming a perfect parabolic and axisymmetric velocity distribution. This simple approach cannot account for the complex flow configurations that are produced by vessel curvatures, pulsatility and diameter changes and, therefore, results in a poor estimation. Application of the Womersley model allows compensation for the flow distortion caused by pulsatility and, with some adjustment, the effects of slight curvatures, but several problems remain unanswered. Two- and three-dimensional approaches can acquire the actual velocity field over the whole vessel section, but are typically affected by a limited temporal resolution. The multigate technique allows acquisition of the actual velocity profile over a line intersecting the vessel lumen and, when coupled with a suitable wall-tracking method, can offer the ideal trade-off among attainable accuracy, temporal resolution and required calculation power. In this article, we describe a BVF measurement method based on the multigate spectral Doppler and a B-mode edge detector algorithm for wall-position tracking. The method has been extensively tested on the research platform ULA-OP, with more than 1700 phantom measurements at flow rates between 60 and 750 mL/min, steering angles between 10° and 22° and constant, sinusoidal or pulsed flow trends. In the averaged BVF measurement, we found an underestimation of about -5% and a coefficient of variability (CV) less than 6%. In instantaneous measurements (e.g., systolic peak) the CV was in the range 2%-8.5%. These results were confirmed by a preliminary test on the common carotid artery of 10 volunteers (CV = 2%-11%). PMID:23849385

  3. [Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].

    PubMed

    Krimmel, M; Kluba, S; Dietz, K; Reinert, S

    2005-03-01

    The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations. PMID:15832575

  4. Evaluation of optoelectronic Plethysmography accuracy and precision in recording displacements during quiet breathing simulation.

    PubMed

    Massaroni, C; Schena, E; Saccomandi, P; Morrone, M; Sterzi, S; Silvestri, S

    2015-08-01

    Opto-electronic Plethysmography (OEP) is a motion analysis system used to measure chest wall kinematics and to indirectly evaluate respiratory volumes during breathing. Its working principle is based on the computation of displacements of markers placed on the chest wall. This work aims at evaluating the accuracy and precision of OEP in measuring displacement in the range of human chest wall displacement during quiet breathing. OEP performances were investigated by the use of a fully programmable chest wall simulator (CWS). CWS was programmed to move its eight shafts 10 times each in the range of physiological displacement (i.e., between 1 mm and 8 mm) at three different frequencies (i.e., 0.17 Hz, 0.25 Hz, 0.33 Hz). Experiments were performed with the aim to: (i) evaluate OEP accuracy and precision error in recording displacement in the overall calibrated volume and in three sub-volumes, (ii) evaluate the OEP volume measurement accuracy due to the measurement accuracy of linear displacements. OEP showed an accuracy better than 0.08 mm in all trials, considering the whole 2 m(3) calibrated volume. The mean measurement discrepancy was 0.017 mm. The precision error, expressed as the ratio between measurement uncertainty and the displacement recorded by OEP, was always lower than 0.55%. Volume overestimation due to OEP linear measurement accuracy was always <12 mL (<3.2% of total volume), considering all settings. PMID:26736504

  5. "High-precision, reconstructed 3D model" of skull scanned by conebeam CT: Reproducibility verified using CAD/CAM data.

    PubMed

    Katsumura, Seiko; Sato, Keita; Ikawa, Tomoko; Yamamura, Keiko; Ando, Eriko; Shigeta, Yuko; Ogawa, Takumi

    2016-01-01

    Computed tomography (CT) scanning has recently been introduced into forensic medicine and dentistry. However, the presence of metal restorations in the dentition can adversely affect the quality of three-dimensional reconstruction from CT scans. In this study, we aimed to evaluate the reproducibility of a "high-precision, reconstructed 3D model" obtained from a conebeam CT scan of dentition, a method that might be particularly helpful in forensic medicine. We took conebeam CT and helical CT images of three dry skulls marked with 47 measuring points; reconstructed three-dimensional images; and measured the distances between the points in the 3D images with a computer-aided design/computer-aided manufacturing (CAD/CAM) marker. We found that in comparison with the helical CT, conebeam CT is capable of reproducing measurements closer to those obtained from the actual samples. In conclusion, our study indicated that the image-reproduction from a conebeam CT scan was more accurate than that from a helical CT scan. Furthermore, the "high-precision reconstructed 3D model" facilitates reliable visualization of full-sized oral and maxillofacial regions in both helical and conebeam CT scans. PMID:26832374

  6. The Plus or Minus Game--Teaching Estimation, Precision, and Accuracy

    ERIC Educational Resources Information Center

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-01-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in "TPT" (Larry Weinstein's "Fermi…

  7. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1984

    EPA Science Inventory

    Precision and accuracy data obtained from state and local agencies during 1984 are summarized and compared to data reported earlier for the period 1981-1983. A continual improvement in the completeness of the data is evident. Improvement is also evident in the size of the precisi...

  8. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1983

    EPA Science Inventory

    Precision and accuracy data obtained from State and local agencies during 1983 are summarized and evaluated. Some comparisons are made with the results previously reported for 1981 and 1982 to determine the indication of any trends. Some trends indicated improvement in the comple...

  9. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1985

    EPA Science Inventory

    Precision and accuracy data obtained from State and local agencies during 1985 are summarized and evaluated. Some comparisons are made with the results reported for prior years to determine any trends. Some trends indicated continued improvement in the completeness of reporting o...

  10. ASSESSMENT OF THE PRECISION AND ACCURACY OF SAM AND MFC MICROCOSMS EXPOSED TO TOXICANTS

    EPA Science Inventory

    The results of 30 mixed flask culture (MFC) and four standardized aquatic microcosm (SAM) microcosm experiments were used to describe the precision and accuracy of these two protocols. Coefficients of variation (CV) for chemical measurements (DO, pH) were generally less than 7%, f...

  11. Reproducibility and Accuracy of Quantitative Myocardial Blood Flow Using 82Rb-PET: Comparison with 13N-Ammonia

    PubMed Central

    El Fakhri, Georges

    2011-01-01

    82Rb cardiac PET allows the assessment of myocardial perfusion using a column generator in clinics that lack a cyclotron. We and others have previously shown that quantitation of myocardial blood flow (MBF) and coronary flow reserve (CFR) is feasible using dynamic 82Rb PET and factor and compartment analyses. The aim of the present work was to determine the intra- and inter-observer variability of MBF estimation using 82Rb PET as well as the reproducibility of our generalized factor + compartment analysis methodology to estimate MBF, and to assess its accuracy by comparing, in the same subjects, 82Rb estimates of MBF to those obtained using 13N-ammonia. Methods Twenty-two subjects were included in the reproducibility study and twenty subjects in the validation study. Patients were injected with 60±5 mCi of 82Rb and imaged dynamically for 6 minutes at rest and during dipyridamole stress. Left and right ventricular (LV+RV) time-activity curves were estimated by GFADS and used as input to a 2-compartment kinetic analysis that estimates parametric maps of myocardial tissue extraction (K1) and egress (k2), as well as LV+RV contributions (fv,rv). Results Our results show excellent reproducibility of the quantitative dynamic approach itself, with coefficients of repeatability of 1.7% for estimation of MBF at rest, 1.4% for MBF at peak stress and 2.8% for CFR estimation. The inter-observer reproducibility between the four observers that participated in this study was also very good, with correlation coefficients greater than 0.87 between any two given observers when estimating coronary flow reserve. The reproducibility of MBF in repeated 82Rb studies was good at rest and excellent at peak stress (r2=0.835). Furthermore, the slope of the correlation line was very close to 1 when estimating stress MBF and CFR in repeated 82Rb studies. The correlation between myocardial flow estimates obtained at rest and during peak stress in 82Rb and 13N-ammonia studies was very good at rest (r2

  12. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    SciTech Connect

    Ballesteros-Zebadua, P.; Larrga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Juarez, J.; Prieto, I.; Moreno-Jimenez, S.; Celis, M. A.

    2008-08-11

    Mechanical precision measurements are fundamental procedures in the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures also serve as quality assurance routines that allow verification of the equipment's geometrical accuracy and precision. In this work mechanical tests were performed for gantry and table rotation, yielding mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, isocenter accuracy was shown to be better than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with a mask. All uncertainties were below tolerances. The largest contribution to mechanical variation comes from table rotation, so it is important to correct for it using a localization frame with printed overlays. Knowledge of the mechanical precision allows statistical errors to be accounted for in the treatment planning volume margins.

  13. Evaluation of the Accuracy and Precision of a Next Generation Computer-Assisted Surgical System

    PubMed Central

    Dai, Yifei; Liebelt, Ralph A.; Gao, Bo; Gulbransen, Scott W.; Silver, Xeve S.

    2015-01-01

    Background Computer-assisted orthopaedic surgery (CAOS) improves accuracy and reduces outliers in total knee arthroplasty (TKA). However, during the evaluation of CAOS systems, the error generated by the guidance system (hardware and software) has been generally overlooked. Limited information is available on the accuracy and precision of specific CAOS systems with regard to intraoperative final resection measurements. The purpose of this study was to assess the accuracy and precision of a next generation CAOS system and investigate the impact of extra-articular deformity on the system-level errors generated during intraoperative resection measurement. Methods TKA surgeries were performed on twenty-eight artificial knee inserts with various types of extra-articular deformity (12 neutral, 12 varus, and 4 valgus). Surgical resection parameters (resection depths and alignment angles) were compared between postoperative three-dimensional (3D) scan-based measurements and intraoperative CAOS measurements. Using the 3D scan-based measurements as control, the accuracy (mean error) and precision (associated standard deviation) of the CAOS system were assessed. The impact of extra-articular deformity on the CAOS system measurement errors was also investigated. Results The pooled mean unsigned errors generated by the CAOS system were equal or less than 0.61 mm and 0.64° for resection depths and alignment angles, respectively. No clinically meaningful biases were found in the measurements of resection depths (< 0.5 mm) and alignment angles (< 0.5°). Extra-articular deformity did not show significant effect on the measurement errors generated by the CAOS system investigated. Conclusions This study presented a set of methodology and workflow to assess the system-level accuracy and precision of CAOS systems. The data demonstrated that the CAOS system investigated can offer accurate and precise intraoperative measurements of TKA resection parameters, regardless of the presence

  14. Accuracy, precision and economics: The cost of cutting-edge chemical analyses

    NASA Astrophysics Data System (ADS)

    Hamilton, B.; Hannigan, R.; Jones, C.; Chen, Z.

    2002-12-01

    Otolith (fish ear bone) chemistry has proven to be an exceptional tool for the identification of essential fish habitats in marine and freshwater environments. These measurements, which explore variations in the trace element content of otoliths relative to calcium (e.g., Ba/Ca, Mg/Ca), provide data to resolve differences in habitat water chemistry on the watershed to catchment scale. The vast majority of these analyses are performed by laser ablation ICP-MS using a high-resolution instrument. However, few laboratories are equipped with this configuration and many researchers measure the trace element chemistry of otoliths by whole digestion ICP-MS using lower resolution quadrupole instruments. This study examines the differences in accuracy and precision between three elemental analysis methods using whole otolith digestion on a low resolution ICP-MS (ELAN 9000). The first, and most commonly used, technique is external calibration with internal standardization. This technique is the most cost-effective but is also limited in terms of method detection limits. The second, standard addition, is more costly in terms of time and use of standard materials but offers gains in precision and accuracy. The third, isotope dilution, is the least cost-effective but the most accurate of the elemental analysis techniques. Based on the results of this study, which seeks to identify the technique that is easiest to implement yet has the precision and accuracy necessary to resolve spatial variations in habitats, we conclude that external calibration with internal standardization can be sufficient to resolve spatial and temporal variations in marine and estuarine environments (+/- 6-8% accuracy). Standard addition increases the accuracy of measurements to 2-5% and is ideal for freshwater studies. While there is a gain in accuracy and precision with isotope dilution, the spatial and temporal resolution is no greater with this technique than with the others.

  15. Accuracy and reproducibility of the Oxi/Ferm system in identifying a select group of unusual gram-negative bacilli.

    PubMed Central

    Nadler, H; George, H; Barr, J

    1979-01-01

    The Oxi/Ferm (O/F) identification system was compared in a double-blind study to a conventional test battery for the characterization of 96 reference and clinical strains consisting of 83 nonfermentative and 13 oxidase-producing, fermentative gram-negative bacilli. The O/F tube and supplemental tests correctly identified 84% of the nonfermentative and 77% of the oxidase-producing, fermentative bacilli. However, when the supplemental tests were excluded and the biochemical profiles generated by all nine O/F tube reactions were examined, the profile accuracy reached 95% (79 of 83) for the nonfermentative and 93% (12 of 13) for oxidase-producing, fermentative bacilli. Seven of the nine O/F substrate reactions demonstrated greater than or equal to 89% agreement with conventional reactions, whereas the urea and arginine reactions provided 82 and 85% agreement, respectively. Replicate O/F tests with six selected organisms demonstrated 97% identification reproducibility and 84% overall substrate reproducibility. The mean O/F identification time was 2.6 days as compared to 3.3 days for the conventional system. Although this study suggests that the O/F system is a convenient, rapid, and accurate alternative to conventional identification methods, several modifications are recommended. PMID:372222

  16. Accuracy in Dental Medicine, A New Way to Measure Trueness and Precision

    PubMed Central

    Ender, Andreas; Mehl, Albert

    2014-01-01

    Reference scanners are used in dental medicine to verify many procedures. The main interest is in verifying impression methods, as impressions serve as the base for dental restorations. The current limitation of many reference scanners is a lack of accuracy when scanning large objects such as full dental arches, or a limited ability to assess detailed tooth surfaces. A new reference scanner, based on the focus variation scanning technique, was evaluated with regard to highest local and general accuracy. A specific scanning protocol was tested to scan original tooth surfaces from dental impressions. Different model materials were also verified. The results showed a high scanning accuracy of the reference scanner, with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision in the case of full arch scans. Current dental impression methods showed much higher deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects such as single tooth surfaces can be scanned with even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences in many fields of dental research. The different magnification levels, combined with high local and general accuracy, can be used to assess changes from single teeth or restorations up to full arch changes. PMID:24836007
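Trueness (agreement of scans with a reference) and precision (agreement of repeated scans with each other), as used in this abstract, can be sketched numerically. The measurement values below are invented for illustration:

```python
import statistics

def trueness_and_precision(scans, reference):
    """Trueness: mean absolute deviation of each scan from the reference value.
    Precision: mean absolute pairwise deviation among the repeated scans."""
    trueness = statistics.mean(abs(s - reference) for s in scans)
    pairs = [(a, b) for i, a in enumerate(scans) for b in scans[i + 1:]]
    precision = statistics.mean(abs(a - b) for a, b in pairs)
    return trueness, precision

# Hypothetical repeated full-arch length measurements (mm) vs. a reference
scans = [10.005, 10.006, 10.004]
trueness, precision = trueness_and_precision(scans, 10.000)
```

A scanner can thus be precise (small scatter among repeats) while still lacking trueness (systematic offset from the reference), which is why the two are reported separately.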

  17. Maximizing the quantitative accuracy and reproducibility of Förster resonance energy transfer measurement for screening by high throughput widefield microscopy.

    PubMed

    Schaufele, Fred

    2014-03-15

    Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biologic sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High content screening depends also upon maximizing the numbers of cells imaged, which is best achieved by low magnification high throughput microscopy. But, low magnification introduces flat-field correction issues that degrade the accuracy of background correction to cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a high source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal. PMID:23927839
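The background-correction techniques are detailed in the paper itself; as a minimal sketch of the general idea, a background level can be estimated from a cell-free region of the field and subtracted before any ratio-based FRET calculation. All values below are invented:

```python
import numpy as np

def background_correct(image, bg_mask):
    """Subtract a background level estimated from a cell-free region;
    the median is robust to the odd bright pixel in the estimate."""
    return image - np.median(image[bg_mask])

# Toy 4x4 field: a flat background of 100 counts plus one bright "cell"
image = np.full((4, 4), 100.0)
image[1:3, 1:3] += 50.0             # invented cell signal
bg_mask = np.zeros((4, 4), dtype=bool)
bg_mask[0, :] = True                # cell-free strip used for the estimate

corrected = background_correct(image, bg_mask)
```

A non-flat background, as addressed in the paper, would require a spatially varying estimate (e.g. a fitted plane) rather than a single scalar.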

  19. A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures

    NASA Technical Reports Server (NTRS)

    Moore, Ashley

    2005-01-01

    The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target from camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using Photomodeler software. The accuracy of the Photomodeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software that simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.

  20. Sex differences in accuracy and precision when judging time to arrival: data from two Internet studies.

    PubMed

    Sanders, Geoff; Sinclair, Kamila

    2011-12-01

    We report two Internet studies that investigated sex differences in the accuracy and precision of judging time to arrival. We used accuracy to mean the ability to match the actual time to arrival and precision to mean the consistency with which each participant made their judgments. Our task was presented as a computer game in which a toy UFO moved obliquely towards the participant through a virtual three-dimensional space on route to a docking station. The UFO disappeared before docking and participants pressed their space bar at the precise moment they thought the UFO would have docked. Study 1 showed it was possible to conduct quantitative studies of spatiotemporal judgments in virtual reality via the Internet and confirmed reports that men are more accurate because women underestimate, but found no difference in precision measured as intra-participant variation. Study 2 repeated Study 1 with five additional presentations of one condition to provide a better measure of precision. Again, men were more accurate than women but there were no sex differences in precision. However, within the coincidence-anticipation timing (CAT) literature, of those studies that report sex differences, a majority found that males are both more accurate and more precise than females. Noting that many CAT studies report no sex differences, we discuss appropriate interpretations of such null findings. While acknowledging that CAT performance may be influenced by experience we suggest that the sex difference may have originated among our ancestors with the evolutionary selection of men for hunting and women for gathering. PMID:21125324
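The accuracy/precision distinction this study relies on, bias of the mean judgment versus consistency across repeated judgments, can be made concrete in a few lines of code. The judgment values are invented for illustration:

```python
import statistics

def accuracy_and_precision(judgments, true_value):
    """Accuracy here = signed mean error against the true value (bias);
    precision = intra-participant standard deviation (consistency)."""
    bias = statistics.mean(judgments) - true_value
    spread = statistics.stdev(judgments)
    return bias, spread

# Hypothetical time-to-arrival judgments (seconds) for one participant
true_arrival = 3.0
judgments = [2.6, 2.7, 2.5, 2.6, 2.6]   # consistently early: biased but precise

bias, spread = accuracy_and_precision(judgments, true_arrival)
# Negative bias indicates underestimation (lower accuracy), while the
# small spread indicates high precision, matching the pattern reported here.
```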

  1. Measuring the accuracy and precision of quantitative coronary angiography using a digitally simulated test phantom

    NASA Astrophysics Data System (ADS)

    Morioka, Craig A.; Whiting, James S.; LeFree, Michelle T.

    1998-06-01

    Quantitative coronary angiography (QCA) diameter measurements have been used as an endpoint in clinical studies of therapies to reduce coronary atherosclerosis. The accuracy and precision of the QCA measurement can affect the sample size and conclusions of a clinical study. Measurements using x-ray test phantoms can overestimate the precision and accuracy achievable for actual arteries in clinical digital angiograms because the phantoms do not contain complex patient structures. Determining the performance of QCA algorithms under clinical conditions is difficult because: (1) no gold standard test object exists in clinical images, and (2) phantom images do not have any structured background noise. We propose the use of computer simulated arteries as a replacement for traditional angiographic test phantoms to evaluate QCA algorithm performance.

  2. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. 
If adopted, this new
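The memory/precision trade-off underlying this approach is easy to demonstrate: the same value stored in 16 bits carries far more rounding error than in 64 bits. A toy illustration (not the Lorenz '96 experiment itself):

```python
import numpy as np

x = 1.0 / 3.0                 # the "true" value
half = np.float16(x)          # 16 bits: ~3 significant decimal digits
double = np.float64(x)        # 64 bits: ~16 significant decimal digits

err_half = abs(float(half) - x)      # rounding error on the order of 1e-4
err_double = abs(float(double) - x)  # exactly zero: x is already a float64
```

The scale-selective argument is that small-scale variables, whose values are uncertain at far larger magnitudes than this rounding error, can safely be stored in the cheaper format.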

  3. Comparison between predicted and actual accuracies for an Ultra-Precision CNC measuring machine

    SciTech Connect

    Thompson, D.C.; Fix, B.L.

    1995-05-30

    At the 1989 CIRP annual meeting, we reported on the design of a specialized, ultra-precision CNC measuring machine, and on the error budget that was developed to guide the design process. In our paper we proposed a combinatorial rule for merging estimated and/or calculated values for all known sources of error, to yield a single overall predicted accuracy for the machine. In this paper we compare our original predictions with measured performance of the completed instrument.
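The combinatorial rule itself is not reproduced in this abstract; a common convention for merging independent error sources into a single predicted accuracy is root-sum-square combination. A sketch under that assumption, with invented error terms:

```python
import math

def rss_error_budget(errors_um):
    """Merge independent, uncorrelated error sources (in micrometres)
    by root-sum-square into a single predicted machine accuracy."""
    return math.sqrt(sum(e * e for e in errors_um))

# Invented error sources for an ultra-precision measuring machine
sources = {"scale calibration": 0.05, "thermal drift": 0.08,
           "Abbe offset": 0.04, "probe repeatability": 0.03}
predicted = rss_error_budget(sources.values())   # ~0.107 um overall
```

Note that root-sum-square is only valid when the sources are uncorrelated; correlated or systematic terms are usually summed linearly instead, which is one way predicted and measured performance can diverge.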

  4. Measuring changes in Plasmodium falciparum transmission: Precision, accuracy and costs of metrics

    PubMed Central

    Tusting, Lucy S.; Bousema, Teun; Smith, David L.; Drakeley, Chris

    2016-01-01

    As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review eleven metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs, and presenting an overall critique. We also review the non-linear scaling relationships between five metrics of malaria transmission; the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our review highlights that while the entomological inoculation rate is widely considered the gold standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy and more standardised methods for its collection are required. In areas of low transmission, parasite rate, sero-conversion rates and molecular metrics including MOI and mFOI may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection. PMID:24480314

  5. Evaluation of precision and accuracy of selenium measurements in biological materials using neutron activation analysis

    SciTech Connect

    Greenberg, R.R.

    1988-01-01

    In recent years, the accurate determination of selenium in biological materials has become increasingly important in view of the essential nature of this element for human nutrition and its possible role as a protective agent against cancer. Unfortunately, the accurate determination of selenium in biological materials is often difficult for most analytical techniques for a variety of reasons, including interferences, complicated selenium chemistry due to the presence of this element in multiple oxidation states and in a variety of different organic species, stability and resistance to destruction of some of these organo-selenium species during acid dissolution, volatility of some selenium compounds, and potential for contamination. Neutron activation analysis (NAA) can be one of the best analytical techniques for selenium determinations in biological materials for a number of reasons. Currently, precision at the 1% level (1s) and overall accuracy at the 1 to 2% level (95% confidence interval) can be attained at the U.S. National Bureau of Standards (NBS) for selenium determinations in biological materials when counting statistics are not limiting (using the 75Se isotope). An example of this level of precision and accuracy is summarized. Achieving this level of accuracy, however, requires strict attention to all sources of systematic error. Precise and accurate results can also be obtained after radiochemical separations.

  6. Increasing the precision and accuracy of top-loading balances:  application of experimental design.

    PubMed

    Bzik, T J; Henderson, P B; Hobbs, J P

    1998-01-01

    The traditional method of estimating the weight of multiple objects is to obtain the weight of each object individually. We demonstrate that the precision and accuracy of these estimates can be improved by using a weighing scheme in which multiple objects are simultaneously on the balance. The resulting system of linear equations is solved to yield the weight estimates for the objects. Precision and accuracy improvements can be made by using a weighing scheme without requiring any more weighings than the number of objects when a total of at least six objects are to be weighed. It is also necessary that multiple objects can be weighed with about the same precision as that obtained with a single object, and the scale bias remains relatively constant over the set of weighings. Simulated and empirical examples are given for a system of eight objects in which up to five objects can be weighed simultaneously. A modified Plackett-Burman weighing scheme yields a 25% improvement in precision over the traditional method and implicitly removes the scale bias from seven of the eight objects. Applications of this novel use of experimental design techniques are shown to have potential commercial importance for quality control methods that rely on the mass change rate of an object. PMID:21644600
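The multi-object weighing idea above reduces to a linear system: each weighing measures the sum of a known subset of the objects, and solving the system recovers the individual weights. A minimal sketch with an invented 4-object design (not the modified Plackett-Burman scheme used in the paper):

```python
import numpy as np

# Rows = weighings, columns = objects; a 1 puts that object on the pan.
design = np.array([[1, 1, 0, 0],
                   [0, 1, 1, 0],
                   [1, 0, 1, 1],
                   [1, 1, 1, 1]], dtype=float)

true_weights = np.array([5.0, 3.0, 7.0, 2.0])  # hypothetical grams
readings = design @ true_weights               # noise-free balance readings

# Solve the linear system for the individual weight estimates
estimates, *_ = np.linalg.lstsq(design, readings, rcond=None)
```

With noisy readings, the least-squares solution averages each object's weight over several weighings, which is the source of the precision gain over weighing objects one at a time.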

  7. Large format focal plane array integration with precision alignment, metrology and accuracy capabilities

    NASA Astrophysics Data System (ADS)

    Neumann, Jay; Parlato, Russell; Tracy, Gregory; Randolph, Max

    2015-09-01

    Focal plane alignment for large format arrays and faster optical systems requires enhanced precision methodology and stability over temperature. The increase in focal plane array size continues to drive alignment capability. Depending on the optical system, focal plane flatness of less than 25μm (.001") is required over transition temperatures from ambient to cooled operating temperatures. The focal plane flatness requirement must also be maintained in airborne or launch vibration environments. This paper addresses the challenge of detector integration into the focal plane module and housing assemblies, the methodology to reduce error terms during integration, and the evaluation of thermal effects. The driving factors influencing alignment accuracy include: datum transfers, material effects over temperature, alignment stability over test, adjustment precision, and traceability to NIST standards. The FPA module design and alignment methodology reduces the error terms by minimizing the measurement transfers to the housing. In the design, proper material selection calls for coefficient-of-expansion-matched materials, which minimizes both the physical shift over temperature and the stress induced in the detector. When required, co-registration of focal planes and filters can achieve submicron relative positioning by applying precision equipment, interferometry, and piezoelectric positioning stages. All measurements and characterizations maintain traceability to NIST standards. The metrology characterizes the accuracy, repeatability, and precision of the measurements.

  8. Determination of the precision and accuracy of morphological measurements using the Kinect™ sensor: comparison with standard stereophotogrammetry.

    PubMed

    Bonnechère, B; Jansen, B; Salvia, P; Bouzahouene, H; Sholukha, V; Cornelis, J; Rooze, M; Van Sint Jan, S

    2014-01-01

    The recent availability of the Kinect™ sensor, a low-cost Markerless Motion Capture (MMC) system, could give new and interesting insights into ergonomics (e.g. the creation of a morphological database). Extensive validation of this system is still missing. The aim of the study was to determine if the Kinect™ sensor can be used as an easy, cheap and fast tool to conduct morphology estimation. A total of 48 subjects were analysed using MMC. Results were compared with measurements obtained from a high-resolution stereophotogrammetric system, a marker-based system (MBS). Differences between MMC and MBS were found; however, these differences were systematically correlated and enabled regression equations to be obtained to correct MMC results. After correction, final results were in agreement with MBS data (p = 0.99). Results show that measurements were reproducible and precise after applying regression equations. Kinect™ sensor-based systems therefore seem to be suitable for use as fast and reliable tools to estimate morphology. Practitioner Summary: The Kinect™ sensor could eventually be used for fast morphology estimation as a body scanner. This paper presents an extensive validation of this device for anthropometric measurements in comparison to manual measurements and stereophotogrammetric devices. The accuracy is dependent on the segment studied but the reproducibility is excellent. PMID:24646374
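The correction procedure described, regressing the Kinect (MMC) output against the stereophotogrammetric (MBS) reference and applying the fitted equation to subsequent MMC readings, can be sketched as follows. The paired values are invented for illustration:

```python
import numpy as np

# Hypothetical paired measurements of one body segment (cm)
mmc = np.array([30.5, 35.2, 40.1, 44.8, 50.3])  # Kinect (MMC) estimates
mbs = np.array([32.0, 36.5, 41.0, 45.5, 50.9])  # stereophotogrammetric reference

# Fit the correction equation: corrected = a * MMC + b
a, b = np.polyfit(mmc, mbs, 1)

def correct(mmc_value):
    """Apply the regression correction to a new MMC measurement."""
    return a * mmc_value + b
```

Because the MMC-MBS differences are systematic rather than random, a per-segment linear correction of this kind removes the bias while leaving the (good) reproducibility intact.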

  9. A Bloch-McConnell simulator with pharmacokinetic modeling to explore accuracy and reproducibility in the measurement of hyperpolarized pyruvate

    NASA Astrophysics Data System (ADS)

    Walker, Christopher M.; Bankson, James A.

    2015-03-01

    Magnetic resonance imaging (MRI) of hyperpolarized (HP) agents has the potential to probe in-vivo metabolism with sensitivity and specificity that was not previously possible. Biological conversion of HP agents specifically for cancer has been shown to correlate to presence of disease, stage and response to therapy. For such metabolic biomarkers derived from MRI of hyperpolarized agents to be clinically impactful, they need to be validated and well characterized. However, imaging of HP substrates is distinct from conventional MRI, due to the non-renewable nature of transient HP magnetization. Moreover, due to current practical limitations in generation and evolution of hyperpolarized agents, it is not feasible to fully experimentally characterize measurement and processing strategies. In this work we use a custom Bloch-McConnell simulator with pharmacokinetic modeling to characterize the performance of specific magnetic resonance spectroscopy sequences over a range of biological conditions. We performed numerical simulations to evaluate the effect of sequence parameters over a range of chemical conversion rates. Each simulation was analyzed repeatedly with the addition of noise in order to determine the accuracy and reproducibility of measurements. Results indicate that under both closed and perfused conditions, acquisition parameters can affect measurements in a tissue dependent manner, suggesting that great care needs to be taken when designing studies involving hyperpolarized agents. More modeling studies will be needed to determine what effect sequence parameters have on more advanced acquisitions and processing methods.

  10. Accuracy and precision of quantitative 31P-MRS measurements of human skeletal muscle mitochondrial function.

    PubMed

    Layec, Gwenael; Gifford, Jayson R; Trinity, Joel D; Hart, Corey R; Garten, Ryan S; Park, Song Y; Le Fur, Yann; Jeong, Eun-Kee; Richardson, Russell S

    2016-08-01

    Although theoretically sound, the accuracy and precision of (31)P-magnetic resonance spectroscopy ((31)P-MRS) approaches to quantitatively estimate mitochondrial capacity are not well documented. Therefore, employing four differing models of respiratory control [linear, kinetic, and multipoint adenosine diphosphate (ADP) and phosphorylation potential], this study sought to determine the accuracy and precision of (31)P-MRS assessments of peak mitochondrial adenosine-triphosphate (ATP) synthesis rate utilizing directly measured peak respiration (State 3) in permeabilized skeletal muscle fibers. In 23 subjects of different fitness levels, (31)P-MRS during a 24-s maximal isometric knee extension and high-resolution respirometry in muscle fibers from the vastus lateralis were performed. Although significantly correlated with State 3 respiration (r = 0.72), both the linear (45 ± 13 mM/min) and phosphorylation potential (47 ± 16 mM/min) models grossly overestimated the calculated in vitro peak ATP synthesis rate (P < 0.05). Of the ADP models, the kinetic model was well correlated with State 3 respiration (r = 0.72, P < 0.05), but moderately overestimated ATP synthesis rate (P < 0.05), while the multipoint model, although somewhat less well correlated with State 3 respiration (r = 0.55, P < 0.05), most accurately reflected peak ATP synthesis rate. Of note, the PCr recovery time constant (τ), a qualitative index of mitochondrial capacity, exhibited the strongest correlation with State 3 respiration (r = 0.80, P < 0.05). Therefore, this study reveals that each of the (31)P-MRS data analyses, including PCr τ, exhibits precision in terms of mitochondrial capacity. As only the multipoint ADP model did not overestimate the peak skeletal muscle mitochondrial ATP synthesis, the multipoint ADP model is the only quantitative approach to exhibit both accuracy and precision. PMID:27302751

  11. Accuracy and precision of ice stream bed topography derived from ground-based radar surveys

    NASA Astrophysics Data System (ADS)

    King, Edward

    2016-04-01

    There is some confusion within the glaciological community as to the accuracy of the basal topography derived from radar measurements. A number of texts and papers state that basal topography cannot be determined to better than one quarter of the wavelength of the radar system. On the other hand, King et al. (Nature Geoscience, 2009) claimed that features of the bed topography beneath Rutford Ice Stream, Antarctica can be distinguished to ±3 m using a 3 MHz radar system (which has a quarter wavelength of 14 m in ice). These statements of accuracy are mutually exclusive. I will show in this presentation that the measurement of ice thickness is a radar range determination to a single strongly-reflective target. This measurement has much higher accuracy than the resolution of two targets of similar reflection strength, which is governed by the quarter-wave criterion. The rise time of the source signal and the sensitivity and digitisation interval of the recording system are the controlling criteria on radar range accuracy. A dataset from Pine Island Glacier, West Antarctica will be used to illustrate these points, as well as the repeatability or precision of radar range measurements, and the influence of gridding parameters and positioning accuracy on the final DEM product.
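The quarter-wavelength figure quoted above is easy to verify, assuming the standard radio-wave propagation speed in ice of about 1.68e8 m/s (a textbook value, not stated in the abstract):

```python
# Quarter-wavelength check for a 3 MHz ice-penetrating radar.
# Assumes a radio-wave speed in ice of ~1.68e8 m/s.
v_ice = 1.68e8   # m/s, propagation speed in glacial ice (assumed)
f = 3e6          # Hz, radar centre frequency
wavelength = v_ice / f          # metres
quarter_wave = wavelength / 4   # metres
print(wavelength, quarter_wave)
```

This gives a 56 m wavelength and a 14 m quarter wavelength, consistent with the figure in the abstract; the point of the presentation is that range precision to a single bright reflector can be far finer than this two-target resolution limit.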

  12. Wound Area Measurement with Digital Planimetry: Improved Accuracy and Precision with Calibration Based on 2 Rulers

    PubMed Central

    Foltynski, Piotr

    2015-01-01

    Introduction In the treatment of chronic wounds, the change in wound surface area over time is a useful parameter for assessing the applied therapy plan. The more precise the method of wound area measurement, the earlier an inappropriate treatment plan can be identified and changed. Digital planimetry may be used in wound area measurement and therapy assessment when properly applied, but a common problem is the camera lens orientation when the picture is taken. The camera lens axis should be perpendicular to the wound plane; if it is not, the measured area differs from the true area. Results The current study shows that calibrating with 2 rulers placed in parallel below and above the wound increases the precision of area measurement on average 3.8-fold compared with measurement calibrated with one ruler. The proposed calibration procedure also increases the accuracy of area measurement 4-fold. It was also shown that wound area range and camera type do not influence the precision of area measurement with digital planimetry based on two-ruler calibration; however, measurements based on a smartphone camera were significantly less accurate than those based on D-SLR or compact cameras. Area measurement on a flat surface was more precise with digital planimetry with 2 rulers than with the Visitrak device, the Silhouette Mobile device or the AreaMe software-based method. Conclusion Calibration in digital planimetry using 2 rulers remarkably increases the precision and accuracy of measurement and should therefore be recommended instead of calibration based on a single ruler. PMID:26252747
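One plausible reading of the two-ruler calibration is that, with a tilted camera, the pixel-to-millimetre scale varies across the image, so averaging the scales measured on rulers placed above and below the wound approximates the scale at the wound plane itself. A hedged sketch under that assumption (function names and values are hypothetical, not from the paper):

```python
def two_ruler_scale(px_per_mm_ruler1, px_per_mm_ruler2):
    """Approximate the pixel scale at the wound plane as the mean of the
    scales measured on rulers placed above and below the wound.
    (Illustrative sketch; the paper's exact calibration may differ.)"""
    return (px_per_mm_ruler1 + px_per_mm_ruler2) / 2.0

def wound_area_mm2(area_px, scale_px_per_mm):
    """Convert a traced wound area from pixels to mm^2 (scale is squared
    because area goes as length squared)."""
    return area_px / scale_px_per_mm ** 2

scale = two_ruler_scale(10.4, 9.6)   # px/mm read off the two rulers
print(wound_area_mm2(52000, scale))  # traced wound region of 52,000 px
```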

  13. Accuracy of 3D white light scanning of abutment teeth impressions: evaluation of trueness and precision

    PubMed Central

    Jeon, Jin-Hun; Kim, Hae-Young; Kim, Ji-Hwan

    2014-01-01

    PURPOSE This study aimed to evaluate the accuracy of digitizing dental impressions of abutment teeth using a white light scanner and to compare the findings among teeth types. MATERIALS AND METHODS To assess precision, impressions of the canine, premolar, and molar prepared to receive all-ceramic crowns were repeatedly scanned to obtain five sets of 3-D data (STL files). Point clouds were compared and error sizes were measured (n=10 per type). Next, to evaluate trueness, impressions of teeth were rotated by 10°-20° and scanned. The obtained data were compared with the first set of data for precision assessment, and the error sizes were measured (n=5 per type). The Kruskal-Wallis test was performed to evaluate precision and trueness among three teeth types, and post-hoc comparisons were performed using the Mann-Whitney U test with Bonferroni correction (α=.05). RESULTS Precision discrepancies for the canine, premolar, and molar were 3.7 µm, 3.2 µm, and 7.3 µm, respectively, indicating the poorest precision for the molar (P<.001). Trueness discrepancies for teeth types were 6.2 µm, 11.2 µm, and 21.8 µm, respectively, indicating the poorest trueness for the molar (P=.007). CONCLUSION With respect to accuracy, the molar showed the largest discrepancies compared with the canine and premolar. Digitizing of dental impressions of abutment teeth using a white light scanner was assessed to be a highly accurate method and provided discrepancy values in a clinically acceptable range. Further study is needed to improve the digitizing performance of white light scanning on the axial wall. PMID:25551007

  14. Accuracy and precision of MR blood oximetry based on the long paramagnetic cylinder approximation of large vessels.

    PubMed

    Langham, Michael C; Magland, Jeremy F; Epstein, Charles L; Floyd, Thomas F; Wehrli, Felix W

    2009-08-01

    An accurate noninvasive method to measure the hemoglobin oxygen saturation (%HbO(2)) of deep-lying vessels without catheterization would have many clinical applications. Quantitative MRI may be the only imaging modality that can address this difficult and important problem. MR susceptometry-based oximetry for measuring blood oxygen saturation in large vessels models the vessel as a long paramagnetic cylinder immersed in an external field. The intravascular magnetic susceptibility relative to surrounding muscle tissue is a function of oxygenated hemoglobin (HbO(2)) and can be quantified with a field-mapping pulse sequence. In this work, the method's accuracy and precision were investigated theoretically on the basis of an analytical expression for the arbitrarily oriented cylinder, as well as experimentally in phantoms and in vivo in the femoral artery and vein at 3T field strength. Errors resulting from vessel tilt, noncircularity of vessel cross-section, and induced magnetic field gradients were evaluated and methods for correction were designed and implemented. Hemoglobin saturation was measured at successive vessel segments, differing in geometry, such as eccentricity and vessel tilt, but constant blood oxygen saturation levels, as a means to evaluate measurement consistency. The average standard error and coefficient of variation of measurements in phantoms were <2% with tilt correction alone, in agreement with theory, suggesting that high accuracy and reproducibility can be achieved while ignoring noncircularity for tilt angles up to about 30 degrees. In vivo, repeated measurements of %HbO(2) in the femoral vessels yielded a coefficient of variation of less than 5%. In conclusion, the data suggest that %HbO(2) can be measured reproducibly in vivo in large vessels of the peripheral circulation on the basis of the paramagnetic cylinder approximation of the incremental field. PMID:19526517
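The long-cylinder model described above can be sketched numerically. The constants and sign convention below are assumptions drawn from the general susceptometry literature, not from this paper: the incremental field inside a tilted cylinder scales as (cos²θ − 1/3)/2 times the blood-tissue susceptibility difference, which in turn scales with hematocrit and desaturation.

```python
import math

# Sketch of susceptometry-based oximetry for a long tilted cylinder.
GAMMA = 2.675e8          # 1H gyromagnetic ratio, rad/s/T
DCHI_DO = 3.39e-6        # SI susceptibility of fully deoxygenated blood
                         # per unit Hct (assumed literature value)

def hbo2_fraction(dphi, dTE, B0, theta_deg, hct):
    """Estimate the HbO2 fraction from the inter-echo phase difference
    between blood and surrounding tissue (illustrative; sign conventions
    and corrections for tilt/noncircularity vary in practice)."""
    geom = (math.cos(math.radians(theta_deg)) ** 2 - 1.0 / 3.0) / 2.0
    dchi = dphi / (GAMMA * dTE * B0 * geom)   # apparent susceptibility shift
    return 1.0 - dchi / (DCHI_DO * hct)

# Round-trip check: synthesize a phase for Y = 0.70, then recover it
Y, hct, B0, theta, dTE = 0.70, 0.42, 3.0, 10.0, 2.5e-3
geom = (math.cos(math.radians(theta)) ** 2 - 1.0 / 3.0) / 2.0
dphi = GAMMA * dTE * B0 * geom * DCHI_DO * hct * (1 - Y)
print(round(hbo2_fraction(dphi, dTE, B0, theta, hct), 3))
```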

  15. The tradeoff between accuracy and precision in latent variable models of mediation processes

    PubMed Central

    Ledgerwood, Alison; Shrout, Patrick E.

    2016-01-01

    Social psychologists place high importance on understanding mechanisms, and frequently employ mediation analyses to shed light on the process underlying an effect. Such analyses can be conducted using observed variables (e.g., a typical regression approach) or latent variables (e.g., a SEM approach), and choosing between these methods can be a more complex and consequential decision than researchers often realize. The present paper adds to the literature on mediation by examining the relative tradeoff between accuracy and precision in latent versus observed variable modeling. Whereas past work has shown that latent variable models tend to produce more accurate estimates, we demonstrate that observed variable models tend to produce more precise estimates, and examine this relative tradeoff both theoretically and empirically in a typical three-variable mediation model across varying levels of effect size and reliability. We discuss implications for social psychologists seeking to uncover mediating variables, and recommend practical approaches for maximizing both accuracy and precision in mediation analyses. PMID:21806305

  16. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point counts) is an increasingly popular method of estimating the population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of the number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than scenarios with few sample units of large area. However, scenarios with few sample units of large area provided more precise abundance estimates than those derived from scenarios with many sample units of small area. Accuracy and precision of abundance estimates should be considered during the sample design process, with study goals and objectives fully recognized; too often, and with consequence, they are instead an afterthought that arises during the data analysis process.

  17. Accuracy and precision of stream reach water surface slopes estimated in the field and from maps

    USGS Publications Warehouse

    Isaak, D.J.; Hubert, W.A.; Krueger, K.L.

    1999-01-01

    The accuracy and precision of five tools used to measure stream water surface slope (WSS) were evaluated. Water surface slopes estimated in the field with a clinometer or from topographic maps used in conjunction with a map wheel or geographic information system (GIS) were significantly higher than WSS estimated in the field with a surveying level (biases of 34, 41, and 53%, respectively). Accuracy of WSS estimates obtained with an Abney level did not differ from surveying level estimates, but conclusions regarding the accuracy of Abney levels and clinometers were weakened by intratool variability. The surveying level estimated WSS most precisely (coefficient of variation [CV] = 0.26%), followed by the GIS (CV = 1.87%), map wheel (CV = 6.18%), Abney level (CV = 13.68%), and clinometer (CV = 21.57%). Estimates of WSS measured in the field with an Abney level and estimated for the same reaches with a GIS used in conjunction with 1:24,000-scale topographic maps were significantly correlated (r = 0.86), but there was a tendency for the GIS to overestimate WSS. Detailed accounts of the methods used to measure WSS and recommendations regarding the measurement of WSS are provided.
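The coefficient of variation quoted for each tool is simply the sample standard deviation expressed as a percentage of the mean. A minimal sketch, using illustrative repeat measurements rather than the study's data:

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Hypothetical repeated slope readings (%) for one reach with one tool
surveying_level = [1.02, 1.01, 1.02, 1.02, 1.01]
print(round(coefficient_of_variation(surveying_level), 2))
```

A lower CV across repeat visits is what ranks the surveying level above the GIS, map wheel, Abney level, and clinometer in the study.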

  18. Accuracy and Reproducibility in Quantification of Plasma Protein Concentrations by Mass Spectrometry without the Use of Isotopic Standards

    PubMed Central

    Kramer, Gertjan; Woolerton, Yvonne; van Straalen, Jan P.; Vissers, Johannes P. C.; Dekker, Nick; Langridge, James I.; Beynon, Robert J.; Speijer, Dave; Sturk, Auguste; Aerts, Johannes M. F. G.

    2015-01-01

    Background Quantitative proteomic analysis with mass spectrometry holds great promise for simultaneously quantifying proteins in various biosamples, such as human plasma. Thus far, studies addressing the reproducible measurement of endogenous protein concentrations in human plasma have focussed on targeted analyses employing isotopically labelled standards. Non-targeted proteomics, on the other hand, has been less employed to this end, even though it has been instrumental in discovery proteomics, generating large datasets in multiple fields of research. Results Using a non-targeted mass spectrometric assay (LCMSE), we quantified abundant plasma proteins (range 43 mg/mL to 40 µg/mL) in human blood plasma specimens from 30 healthy volunteers and one blood serum sample (ProteomeXchange: PXD000347). Quantitative results were obtained by label-free mass spectrometry using a single internal standard to estimate protein concentrations. This approach resulted in quantitative results for 59 proteins (cut off ≥11 samples quantified) of which 41 proteins were quantified in all 31 samples and 23 of these with an inter-assay variability of ≤ 20%. Results for 7 apolipoproteins were compared with those obtained using isotope-labelled standards, while 12 proteins were compared to routine immunoassays. Comparison of quantitative data obtained by LCMSE and immunoassays showed good to excellent correlations in relative protein abundance (r = 0.72–0.96) and comparable median concentrations for 8 out of 12 proteins tested. Plasma concentrations of 56 proteins determined by LCMSE were of similar accuracy as those reported by targeted studies and 7 apolipoproteins quantified by isotope-labelled standards, when compared to reference concentrations from literature. Conclusions This study shows that LCMSE offers good quantification of relative abundance as well as reasonable estimations of concentrations of abundant plasma proteins. PMID:26474480

  19. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    PubMed Central

    Asadian, Simin; Khatony, Alireza; Moradi, Gholamreza; Abdi, Alireza; Rezaei, Mansour

    2016-01-01

    Introduction An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis and therapeutic action; therefore, the aim of the study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared to the central nasopharyngeal measurement. Methods In this observational prospective study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients’ body temperatures were measured by four peripheral methods (oral, axillary, tympanic, and forehead) along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, and receiver operating characteristic curve, using Statistical Package for the Social Sciences, version 19, software. Results There was a significant correlation between all the peripheral methods and the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of the right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-test demonstrated an acceptable precision with forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membranes, oral (P=1.00), and axillary (P=1.00) methods. Sensitivity and specificity of both the left and right tympanic membranes were higher than those of the other methods. Conclusion The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. It is recommended to use the tympanic method (right and left) for assessing a patient’s body temperature in the intensive care units because of its high accuracy and acceptable precision. PMID:27621673

  20. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, emergent behavior manifested by complex, adapting, and nonlinear organizations responsible for monitoring the success of ecorestoration projects tend to unconsciously minimize disorder, QA/QC being an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although the concepts for assessing precision and accuracy of ecorestoration field data are conceptually the same as laboratory data, the manner in which these data quality attributes are assessed is different. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  1. Mapping stream habitats with a global positioning system: Accuracy, precision, and comparison with traditional methods

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.

    2006-01-01

    We tested the precision and accuracy of the Trimble GeoXT™ global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media

  2. A simple device for high-precision head image registration: Preliminary performance and accuracy tests

    SciTech Connect

    Pallotta, Stefania

    2007-05-15

    The purpose of this paper is to present a new device for multimodal head study registration and to examine its performance in preliminary tests. The device consists of a system of eight markers fixed to mobile carbon pipes and bars which can be easily mounted on the patient's head using the ear canals and the nasal bridge. Four graduated scales fixed to the rigid support allow examiners to find the same device position on the patient's head during different acquisitions. The markers can be filled with appropriate substances for visualisation in computed tomography (CT), magnetic resonance, single photon emission computed tomography (SPECT) and positron emission tomography images. The device's rigidity and its position reproducibility were measured in 15 repeated CT acquisitions of the Alderson Rando anthropomorphic phantom and in two SPECT studies of a patient. The proposed system displays good rigidity and reproducibility characteristics. A relocation accuracy of better than 1.5 mm was found in more than 90% of the results. The registration parameters obtained using such a device were compared to those obtained using fiducial markers fixed on phantom and patient heads, resulting in differences of less than 1° and 1 mm for rotation and translation parameters, respectively. Residual differences between fiducial marker coordinates in reference and in registered studies were less than 1 mm in more than 90% of the results, proving that the device performed as accurately as noninvasive stereotactic devices. Finally, an example of multimodal employment of the proposed device is reported.

  3. Precision and accuracy of spectrophotometric pH measurements at environmental conditions in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Hammer, Karoline; Schneider, Bernd; Kuliński, Karol; Schulz-Bull, Detlef E.

    2014-06-01

    The increasing uptake of anthropogenic CO2 by the oceans has raised an interest in precise and accurate pH measurement in order to assess the impact on the marine CO2 system. Spectrophotometric pH measurements were refined during the last decade, yielding a precision and accuracy that cannot be achieved with the conventional potentiometric method. However, until now the method had only been tested in oceanic systems with a relatively stable and high salinity and a small pH range. This paper describes the first application of such a pH measurement system at conditions in the Baltic Sea, which is characterized by a wide salinity and pH range. The performance of the spectrophotometric system at pH values as low as 7.0 (“total” scale) and salinities between 0 and 35 was examined using TRIS-buffer solutions, certified reference materials, and tests of consistency with measurements of other parameters of the marine CO2 system. Using m-cresol purple as indicator dye and a spectrophotometric measurement system designed at Scripps Institution of Oceanography (B. Carter, A. Dickson), a precision better than ±0.001 and an accuracy between ±0.01 and ±0.02 were achieved within the observed pH and salinity ranges in the Baltic Sea. The influence of the indicator dye on the pH of the sample was determined theoretically and is presented as a pH correction term for the different alkalinity regimes in the Baltic Sea. Given these encouraging tests, the ease of operation, and the fact that the measurements refer to the internationally accepted “total” pH scale, it is recommended to use the spectrophotometric method also for pH monitoring and trend detection in the Baltic Sea.
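The core of indicator-based spectrophotometric pH is a single absorbance ratio fed through the indicator's dissociation constant. The sketch below follows the widely used Clayton and Byrne (1993) formulation for m-cresol purple; the constants are assumptions taken from that general literature, not from this Baltic study, which refines the method for low salinity:

```python
import math

# Spectrophotometric pH from m-cresol purple absorbance ratios
# (constants assumed from the Clayton & Byrne formulation).
E1, E2, E3 = 0.00691, 2.2220, 0.1331   # molar absorptivity ratios

def pK_mcp(temp_k, salinity):
    """Indicator dissociation constant on the 'total' hydrogen-ion scale."""
    return 1245.69 / temp_k + 3.8275 + 0.00211 * (35.0 - salinity)

def ph_total(R, temp_k, salinity):
    """pH from R = A578/A434, the ratio of the dye's two absorbance peaks."""
    return pK_mcp(temp_k, salinity) + math.log10((R - E1) / (E2 - R * E3))

# Hypothetical brackish-water sample: R = 0.8 at 25 °C, salinity 7
print(round(ph_total(0.8, 298.15, 7.0), 3))
```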

  4. Improvement in precision, accuracy, and efficiency in standardizing the characterization of granular materials

    SciTech Connect

    Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.

    2013-01-01

    Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented for the purpose of providing a revised methodology having the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. Better, faster, cheaper characterization of multiphase systems results. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart Classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. Accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several of the standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed superior to those of present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future design efforts.
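The abstract mentions a novel form of the Ergun equation for void fraction and particle density; that form is not reproduced here, but the standard textbook Ergun relation it builds on can be sketched (default fluid properties are assumed values for ambient air):

```python
def ergun_dp_per_length(U, eps, d_p, mu=1.8e-5, rho=1.2):
    """Pressure gradient (Pa/m) across a packed bed from the standard
    Ergun equation: a viscous (laminar) term plus an inertial term.
    U: superficial gas velocity (m/s), eps: void fraction,
    d_p: particle diameter (m); mu, rho: assumed ambient-air properties."""
    viscous = 150.0 * mu * (1 - eps) ** 2 * U / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * rho * (1 - eps) * U ** 2 / (eps ** 3 * d_p)
    return viscous + inertial

# Hypothetical bed: 200 µm particles, void fraction 0.45, U = 5 cm/s
print(round(ergun_dp_per_length(0.05, 0.45, 200e-6), 1))
```

Fitting measured pressure drops to this relation is one standard route to back out the void fraction and, from it, particle density.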

  5. Hepatic perfusion in a tumor model using DCE-CT: an accuracy and precision study

    NASA Astrophysics Data System (ADS)

    Stewart, Errol E.; Chen, Xiaogang; Hadway, Jennifer; Lee, Ting-Yim

    2008-08-01

    In the current study we investigate the accuracy and precision of hepatic perfusion measurements based on the Johnson and Wilson model with the adiabatic approximation. VX2 carcinoma cells were implanted into the livers of New Zealand white rabbits. Simultaneous dynamic contrast-enhanced computed tomography (DCE-CT) and radiolabeled microsphere studies were performed under steady-state normo-, hyper- and hypo-capnia. The hepatic arterial blood flows (HABF) obtained using both techniques were compared with ANOVA. The precision was assessed by the coefficient of variation (CV). Under normo-capnia the microsphere HABF were 51.9 ± 4.2, 40.7 ± 4.9 and 99.7 ± 6.0 ml min-1 (100 g)-1 while DCE-CT HABF were 50.0 ± 5.7, 37.1 ± 4.5 and 99.8 ± 6.8 ml min-1 (100 g)-1 in normal tissue, tumor core and rim, respectively. There were no significant differences between HABF measurements obtained with both techniques (P > 0.05). Furthermore, a strong correlation was observed between HABF values from both techniques: slope of 0.92 ± 0.05, intercept of 4.62 ± 2.69 ml min-1 (100 g)-1 and R2 = 0.81 ± 0.05 (P < 0.05). The Bland-Altman plot comparing DCE-CT and microsphere HABF measurements gives a mean difference of -0.13 ml min-1 (100 g)-1, which is not significantly different from zero. DCE-CT HABF is precise, with CV of 5.7, 24.9 and 1.4% in the normal tissue, tumor core and rim, respectively. Non-invasive measurement of HABF with DCE-CT is accurate and precise. DCE-CT can be an important extension of CT to assess hepatic function besides morphology in liver diseases.
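The Bland-Altman comparison and coefficient of variation used above are straightforward to compute; the paired values below are illustrative, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) between two methods and the 95% limits of
    agreement (bias ± 1.96 × SD of the paired differences)."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Illustrative paired HABF values, ml/min/(100 g)
dce_ct       = [50.0, 37.1, 99.8, 48.2, 41.0]
microspheres = [51.9, 40.7, 99.7, 47.5, 39.8]
bias, limits = bland_altman(dce_ct, microspheres)
print(round(bias, 2), [round(x, 2) for x in limits])
```

A bias near zero with narrow limits of agreement is what underlies the study's conclusion that DCE-CT and microsphere HABF measurements are interchangeable.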

  6. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Psychas, Dimitrios Vasileios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states, including additional parameters such as real-time tropospheric biases, and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou), GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB, were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant, resulting in increased position accuracy (mostly in the less favorable East direction) and a large reduction of convergence time.

  7. Effects of shortened acquisition time on accuracy and precision of quantitative estimates of organ activity

    PubMed Central

    He, Bin; Frey, Eric C.

    2010-01-01

    Purpose: Quantitative estimation of in vivo organ uptake is an essential part of treatment planning for targeted radionuclide therapy. This usually involves the use of planar or SPECT scans with acquisition times chosen based more on image quality considerations than on the minimum needed for precise quantification. In previous simulation studies at clinical count levels (185 MBq 111In), the authors observed larger variations in accuracy of organ activity estimates resulting from anatomical and uptake differences than from statistical noise. This suggests that it is possible to reduce the acquisition time without substantially increasing the variation in accuracy. Methods: To test this hypothesis, the authors compared the accuracy and variation in accuracy of organ activity estimates obtained from planar and SPECT scans at various count levels. A simulated phantom population with realistic variations in anatomy and biodistribution was used to model variability in a patient population. Planar and SPECT projections were simulated using previously validated Monte Carlo simulation tools. The authors simulated the projections at count levels approximately corresponding to 1.5–30 min of total acquisition time. The projections were processed using previously described quantitative SPECT (QSPECT) and planar (QPlanar) methods. The QSPECT method was based on the OS-EM algorithm with compensations for attenuation, scatter, and collimator-detector response. The QPlanar method is based on the ML-EM algorithm using the same model-based compensation for all the image degrading effects as the QSPECT method. The volumes of interest (VOIs) were defined based on the true organ configuration in the phantoms. The errors in organ activity estimates from different count levels and processing methods were compared in terms of mean and standard deviation over the simulated phantom population. Results: There was little degradation in quantitative reliability when the acquisition time was
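The ML-EM update at the heart of such reconstruction methods can be illustrated on a toy system. The 3×2 system matrix and activity values below are invented for the example; the actual QSPECT/QPlanar methods additionally model attenuation, scatter, and collimator-detector response:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Toy ML-EM reconstruction: A maps activities x to expected
    projection counts y = A @ x; the multiplicative update preserves
    non-negativity of the estimate."""
    x = np.ones(A.shape[1])            # flat initial estimate
    sens = A.T @ np.ones(A.shape[0])   # sensitivity (column sums of A)
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)
        x = x * (A.T @ (y / proj)) / sens
    return x

# Tiny 2-organ / 3-view system with a known activity vector
A = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])
x_true = np.array([4.0, 2.0])
y = A @ x_true                          # noise-free projections
x_hat = mlem(A, y, n_iter=500)          # converges toward x_true
```

With noise-free, consistent data the iteration recovers the true activities; with Poisson noise it converges to a maximum-likelihood estimate instead.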

  8. Slight pressure imbalances can affect accuracy and precision of dual inlet-based clumped isotope analysis.

    PubMed

    Fiebig, Jens; Hofmann, Sven; Löffler, Niklas; Lüdecke, Tina; Methner, Katharina; Wacker, Ulrike

    2016-01-01

    It is well known that a subtle nonlinearity can occur during clumped isotope analysis of CO2 that, if left unaddressed, limits accuracy. The nonlinearity is induced by a negative background on the m/z 47 ion Faraday cup, whose magnitude is correlated with the intensity of the m/z 44 ion beam. The origin of the negative background remains unclear, but is possibly due to secondary electrons. Usually, CO2 gases of distinct bulk isotopic compositions are equilibrated at 1000 °C and measured along with the samples in order to be able to correct for this effect. Alternatively, measured m/z 47 beam intensities can be corrected for the contribution of secondary electrons after monitoring how the negative background on m/z 47 evolves with the intensity of the m/z 44 ion beam. The latter correction procedure seems to work well if the m/z 44 cup exhibits a wider slit width than the m/z 47 cup. Here we show that the negative m/z 47 background affects precision of dual inlet-based clumped isotope measurements of CO2 unless raw m/z 47 intensities are directly corrected for the contribution of secondary electrons. Moreover, inaccurate results can be obtained even if the heated gas approach is used to correct for the observed nonlinearity. The impact of the negative background on accuracy and precision arises from small imbalances in m/z 44 ion beam intensities between reference and sample CO2 measurements. It becomes more significant as the relative contribution of secondary electrons to the m/z 47 signal increases and as the flux rate of CO2 into the ion source is raised. These problems can be overcome by correcting the measured m/z 47 ion beam intensities of sample and reference gas for the contributions deriving from secondary electrons after scaling these contributions to the intensities of the corresponding m/z 49 ion beams. Accuracy and precision of this correction are demonstrated by clumped isotope analysis of three internal carbonate standards. The
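The described correction amounts to simple arithmetic: scale the background observed on a beam that carries essentially no sample signal and subtract it from the measured m/z 47 intensity. The function, scaling factor, and intensities below are all invented for illustration; the paper derives the actual scaling from background monitoring:

```python
def correct_i47(i47_meas, i49_meas, k47_per_49):
    """Remove the secondary-electron background from a measured m/z 47
    intensity by scaling the background observed on the m/z 49 cup.
    `k47_per_49` (ratio of the m/z 47 background to the m/z 49
    background) is a hypothetical, empirically calibrated factor."""
    return i47_meas - k47_per_49 * i49_meas

# Illustrative numbers: a negative background of -0.4 mV on m/z 49,
# scaled by 2.5, restores a "true" m/z 47 intensity of 11.0 mV
i47_true = correct_i47(10.0, -0.4, 2.5)
```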

  9. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

    The characterization of the ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24-h data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the hardware delay bias of the receiver, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are larger, while the STDs (standard deviations) are better than 0.11 m. When the satellite difference is used, the hardware delay bias can be canceled. The interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the centimeter level.
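Why satellite differencing cancels the receiver hardware delay can be shown in a few lines; the bias and delay values below are invented for the example:

```python
# A receiver hardware delay adds the same offset b to every satellite's
# estimated ionosphere delay, so differencing the estimates of two
# satellites cancels it exactly.
def single_difference(est_i, est_j):
    return est_i - est_j

b = 0.85                        # receiver hardware delay bias (m)
iono_i, iono_j = 3.20, 2.65     # true slant ionosphere delays (m)
est_i, est_j = iono_i + b, iono_j + b
diff = single_difference(est_i, est_j)   # bias-free delay difference
```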

  10. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    SciTech Connect

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In addition, the minor isotope 238Pu was quantified despite the presence of a significant quantity of 238U in the samples.
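How relative bias against a bracketing standard and a 95% confidence interval of the mean are typically computed can be sketched as follows; the replicate ratios and certified value are invented for the example:

```python
import math
import statistics

def ratio_bias_and_ci(measured, certified, t_crit):
    """Relative bias (%) of the mean measured isotope ratio against a
    certified value, plus the 95% confidence half-width (%) of the mean
    using a supplied Student-t critical value."""
    m = statistics.mean(measured)
    s = statistics.stdev(measured)
    bias_pct = 100.0 * (m - certified) / certified
    ci_pct = 100.0 * t_crit * s / (math.sqrt(len(measured)) * certified)
    return bias_pct, ci_pct

# Invented 240Pu/239Pu replicate ratios against a certified 0.2410
meas = [0.2402, 0.2415, 0.2398, 0.2411, 0.2407]
bias, ci = ratio_bias_and_ci(meas, 0.2410, t_crit=2.776)  # t for 4 df
```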

  12. Accuracy and precision of estimating age of gray wolves by tooth wear

    USGS Publications Warehouse

    Gipson, P.S.; Ballard, W.B.; Nowak, R.M.; Mech, L.D.

    2000-01-01

    We evaluated the accuracy and precision of tooth wear for aging gray wolves (Canis lupus) from Alaska, Minnesota, and Ontario based on 47 known-age or known-minimum-age skulls. Estimates of age using tooth wear and a commercial cementum annuli-aging service were useful for wolves up to 14 years old. The precision of estimates from cementum annuli was greater than estimates from tooth wear, but tooth wear estimates are more applicable in the field. We tended to overestimate age by 1-2 years and occasionally by 3 or 4 years. The commercial service aged young wolves with cementum annuli to within ±1 year of actual age, but underestimated ages of wolves ≥9 years old by 1-3 years. No differences were detected in tooth wear patterns for wild wolves from Alaska, Minnesota, and Ontario, nor between captive and wild wolves. Tooth wear was not appropriate for aging wolves with an underbite that prevented normal wear or severely broken and missing teeth.

  13. Accuracy and precision of gait events derived from motion capture in horses during walk and trot.

    PubMed

    Boye, Jenny Katrine; Thomsen, Maj Halling; Pfau, Thilo; Olsen, Emil

    2014-03-21

    This study aimed to create an evidence base for detection of stance-phase timings from motion capture in horses. The objective was to compare the accuracy (bias) and precision (SD) for five published algorithms for the detection of hoof-on and hoof-off using force plates as the reference standard. Six horses were walked and trotted over eight force plates surrounded by a synchronised 12-camera infrared motion capture system. The five algorithms (A-E) were based on: (A) horizontal velocity of the hoof; (B) fetlock angle and horizontal hoof velocity; (C) horizontal displacement of the hoof relative to the centre of mass; (D) horizontal velocity of the hoof relative to the centre of mass; and (E) vertical acceleration of the hoof. A total of 240 stance phases in walk and 240 stance phases in trot were included in the assessment. Method D provided the most accurate and precise results in walk for stance phase duration with a bias of 4.1% for front limbs and 4.8% for hind limbs. For trot, we derived a combination of method A for hoof-on and method E for hoof-off resulting in a bias of -6.2% of stance in the front limbs and method B for the hind limbs with a bias of 3.8% of stance phase duration. We conclude that motion capture yields accurate and precise detection of gait events for horses walking and trotting over ground and the results emphasise a need for different algorithms for front limbs versus hind limbs in trot. PMID:24529754

  14. Evaluation of precision and accuracy of the Borgwaldt RM20S(®) smoking machine designed for in vitro exposure.

    PubMed

    Kaur, Navneet; Lacasse, Martine; Roy, Jean-Philippe; Cabral, Jean-Louis; Adamson, Jason; Errington, Graham; Waldron, Karen C; Gaça, Marianna; Morin, André

    2010-12-01

    The Borgwaldt RM20S(®) smoking machine enables the generation, dilution, and transfer of fresh cigarette smoke to cell exposure chambers, for in vitro analyses. We present a study confirming the precision (repeatability r, reproducibility R) and accuracy of smoke dose generated by the Borgwaldt RM20S(®) system and delivery to exposure chambers. Due to the aerosol nature of cigarette smoke, the repeatability of the dilution of the vapor phase in air was assessed by quantifying two reference standard gases: methane (CH(4), r between 29.0 and 37.0 and RSD between 2.2% and 4.5%) and carbon monoxide (CO, r between 166.8 and 235.8 and RSD between 0.7% and 3.7%). The accuracy of dilution (percent error) for CH(4) and CO was between 6.4% and 19.5% and between 5.8% and 6.4%, respectively, over a 10-1000-fold dilution range. To corroborate our findings, a small inter-laboratory study was carried out for CH(4) measurements. The combined dilution repeatability had an r between 21.3 and 46.4, R between 52.9 and 88.4, RSD between 6.3% and 17.3%, and error between 4.3% and 13.1%. Based on the particulate component of cigarette smoke (3R4F), the repeatability (RSD = 12%) of the undiluted smoke generated by the Borgwaldt RM20S(®) was assessed by quantifying solanesol using high-performance liquid chromatography with ultraviolet detection (HPLC/UV). Finally, the repeatability (r between 0.98 and 4.53 and RSD between 8.8% and 12%) of the dilution of generated smoke particulate phase was assessed by quantifying solanesol following various dilutions of cigarette smoke. The findings in this study suggest the Borgwaldt RM20S(®) smoking machine is a reliable tool to generate and deliver repeatable and reproducible doses of whole smoke to in vitro cultures. PMID:21126153
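A sketch of the repeatability statistics used here, following the usual ISO 5725 convention that the repeatability limit is r = 2.8·s_r; the gas readings below are invented:

```python
import statistics

def repeatability_limit(replicates):
    """Repeatability limit r = 2.8 * s_r (ISO 5725 convention) and the
    relative standard deviation (%) of replicate measurements."""
    s_r = statistics.stdev(replicates)
    return 2.8 * s_r, 100.0 * s_r / statistics.mean(replicates)

vals = [101.0, 98.5, 100.2, 99.1, 101.7]  # invented CH4 readings
r, rsd = repeatability_limit(vals)
```

The factor 2.8 (≈ 1.96·√2) makes r the largest absolute difference expected between two replicate results with 95% probability.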

  15. Numerical reproducibility for implicit Monte Carlo simulations

    SciTech Connect

    Cleveland, M.; Brunner, T.; Gentile, N.

    2013-07-01

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. In [1], a way of eliminating this roundoff error using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary-precision approaches required a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step. (authors)
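The core numerical issue, and the integer-tally remedy, can be shown in a few lines. The fixed-point scale factor below is illustrative; the cited work instead rounds to a chosen number of significant digits:

```python
# Double-precision addition is not associative, so a parallel reduction
# whose summation order varies between runs is not bit-reproducible.
left = (0.1 + 0.2) + 0.3    # not equal to the sum below
right = 0.1 + (0.2 + 0.3)

# Integer (fixed-point) tallies restore order-independence at the cost
# of rounding each contribution before accumulation.
SCALE = 10**6

def tally(values):
    return sum(round(v * SCALE) for v in values)  # exact integer sum

t1 = tally([0.1, 0.2, 0.3])
t2 = tally([0.3, 0.1, 0.2])  # any order gives the same integer total
```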

  16. Gaining Precision and Accuracy on Microprobe Trace Element Analysis with the Multipoint Background Method

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.; Williams, M. L.; Jercinovic, M. J.; Donovan, J. J.

    2014-12-01

    Electron microprobe trace element analysis is a significant challenge, but can provide critical data when high spatial resolution is required. Due to the low peak intensity, the accuracy and precision of such analyses relies critically on background measurements, and on the accuracy of any pertinent peak interference corrections. A linear regression between two points selected at appropriate off-peak positions is a classical approach for background characterization in microprobe analysis. However, this approach disallows an accurate assessment of background curvature (usually exponential). Moreover, if present, background interferences can dramatically affect the results if underestimated or ignored. The acquisition of a quantitative WDS scan over the spectral region of interest is still a valuable option to determine the background intensity and curvature from a fitted regression of background portions of the scan, but this technique retains an element of subjectivity as the analyst has to select areas in the scan that appear to represent background. We present here a new method, "Multi-Point Background" (MPB), that allows acquiring up to 24 off-peak background measurements from wavelength positions around the peaks. This method aims to improve the accuracy, precision, and objectivity of trace element analysis. Overall efficiency is improved because no systematic WDS scan needs to be acquired to check for the presence of possible background interferences. Moreover, the method is less subjective because "true" backgrounds are selected by the statistical exclusion of erroneous background measurements, reducing the need for analyst intervention. This idea originated from efforts to refine EPMA monazite U-Th-Pb dating, where it was recognised that background errors (peak interference or background curvature) could result in errors of several tens of millions of years in the calculated age. Results obtained on a CAMECA SX-100 "UltraChron" using monazite
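The statistical exclusion of erroneous background points combined with an exponential background model might be sketched as below. This is a simplified illustration (sigma-clipping a log-space linear fit); the published MPB method differs in detail, and all data values are invented:

```python
import numpy as np

def fit_background(pos, counts, reject_sigma=2.0, passes=3):
    """Fit an exponential background b(x) = exp(p1*x + p0) through
    multiple off-peak measurements, sigma-clipping log-space residuals
    to exclude points sitting on unrecognized interferences."""
    pos, counts = np.asarray(pos, float), np.asarray(counts, float)
    keep = np.ones(len(pos), bool)
    for _ in range(passes):
        p = np.polyfit(pos[keep], np.log(counts[keep]), 1)
        resid = np.log(counts) - np.polyval(p, pos)
        keep = np.abs(resid) <= reject_sigma * resid[keep].std()
    return p  # (slope, intercept) of the log-space background line

# Exponential background with one interference-contaminated point
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.exp(-0.3 * x + 2.0)
y[3] *= 3.0                           # simulated background interference
slope, intercept = fit_background(x, y)
```

The contaminated point is rejected in the first clipping pass, so the returned coefficients recover the underlying background curve.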

  17. Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets

    NASA Astrophysics Data System (ADS)

    Gold, P. O.; Cowgill, E.; Kreylos, O.

    2009-12-01

    Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaiced LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is essential for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments. 
To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point

  18. Multi-site assessment of the precision and reproducibility of multiple reaction monitoring-based measurements of proteins in plasma.

    PubMed

    Addona, Terri A; Abbatiello, Susan E; Schilling, Birgit; Skates, Steven J; Mani, D R; Bunk, David M; Spiegelman, Clifford H; Zimmerman, Lisa J; Ham, Amy-Joan L; Keshishian, Hasmik; Hall, Steven C; Allen, Simon; Blackman, Ronald K; Borchers, Christoph H; Buck, Charles; Cardasis, Helene L; Cusack, Michael P; Dodder, Nathan G; Gibson, Bradford W; Held, Jason M; Hiltke, Tara; Jackson, Angela; Johansen, Eric B; Kinsinger, Christopher R; Li, Jing; Mesri, Mehdi; Neubert, Thomas A; Niles, Richard K; Pulsipher, Trenton C; Ransohoff, David; Rodriguez, Henry; Rudnick, Paul A; Smith, Derek; Tabb, David L; Tegeler, Tony J; Variyath, Asokan M; Vega-Montoto, Lorenzo J; Wahlander, Asa; Waldemarson, Sofia; Wang, Mu; Whiteaker, Jeffrey R; Zhao, Lei; Anderson, N Leigh; Fisher, Susan J; Liebler, Daniel C; Paulovich, Amanda G; Regnier, Fred E; Tempst, Paul; Carr, Steven A

    2009-07-01

    Verification of candidate biomarkers relies upon specific, quantitative assays optimized for selective detection of target proteins, and is increasingly viewed as a critical step in the discovery pipeline that bridges unbiased biomarker discovery to preclinical validation. Although individual laboratories have demonstrated that multiple reaction monitoring (MRM) coupled with isotope dilution mass spectrometry can quantify candidate protein biomarkers in plasma, reproducibility and transferability of these assays between laboratories have not been demonstrated. We describe a multilaboratory study to assess reproducibility, recovery, linear dynamic range and limits of detection and quantification of multiplexed, MRM-based assays, conducted by NCI-CPTAC. Using common materials and standardized protocols, we demonstrate that these assays can be highly reproducible within and across laboratories and instrument platforms, and are sensitive to low μg/ml protein concentrations in unfractionated plasma. We provide data and benchmarks against which individual laboratories can compare their performance and evaluate new technologies for biomarker verification in plasma. PMID:19561596

  19. Technical note: precision and accuracy of in vitro digestion of neutral detergent fiber and predicted net energy of lactation content of fibrous feeds.

    PubMed

    Spanghero, M; Berzaghi, P; Fortina, R; Masoero, F; Rapetti, L; Zanfi, C; Tassone, S; Gallo, A; Colombini, S; Ferlito, J C

    2010-10-01

    The objective of this study was to test the precision and agreement with in situ data (accuracy) of neutral detergent fiber degradability (NDFD) obtained with the rotating jar in vitro system (Daisy(II) incubator, Ankom Technology, Fairport, NY). Moreover, the precision of the chemical assays requested by the National Research Council (2001) for feed energy calculations and the estimated net energy of lactation contents were evaluated. Precision was measured as standard deviation (SD) of reproducibility (S(R)) and repeatability (S(r)) (between- and within-laboratory variability, respectively), which were expressed as coefficients of variation (SD/mean × 100, S(R) and S(r), respectively). Ten fibrous feed samples (alfalfa dehydrated, alfalfa hay, corn cob, corn silage, distillers grains, meadow hay, ryegrass hay, soy hulls, wheat bran, and wheat straw) were analyzed by 5 laboratories. Analyses of dry matter (DM), ash, crude protein (CP), neutral detergent fiber (NDF), and acid detergent fiber (ADF) had satisfactory S(r), from 0.4 to 2.9%, and S(R), from 0.7 to 6.2%, with the exception of ether extract (EE) and CP bound to NDF or ADF. Extending the fermentation time from 30 to 48 h increased the NDFD values (from 42 to 54% on average across all tested feeds) and improved the NDFD precision, in terms of both S(r) (12 and 7% for 30 and 48 h, respectively) and S(R) (17 and 10% for 30 and 48 h, respectively). The net energy for lactation (NE(L)) predicted from 48-h incubation NDFD data approximated well the tabulated National Research Council (2001) values for several feeds, and the improvement in NDFD precision given by longer incubations (48 vs. 30 h) also improved precision of the NE(L) estimates from 11 to 8%. Data obtained from the rotating jar in vitro technique compared well with in situ data. 
In conclusion, the adoption of a 48-h period of incubation improves repeatability and reproducibility of NDFD and accuracy and reproducibility of the associated calculated
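The precision measures used here can be estimated from a balanced laboratory × replicate table with a one-way random-effects ANOVA; the NDFD values below are invented for illustration:

```python
import statistics

def precision_components(lab_results):
    """Repeatability (s_r) and reproducibility (s_R) standard deviations
    from a balanced labs x replicates table, via the one-way
    random-effects ANOVA used in collaborative trials (ISO 5725)."""
    n = len(lab_results[0])                  # replicates per laboratory
    ms_within = statistics.mean(statistics.variance(lab) for lab in lab_results)
    lab_means = [statistics.mean(lab) for lab in lab_results]
    ms_between = n * statistics.variance(lab_means)
    s_r2 = ms_within
    s_L2 = max((ms_between - ms_within) / n, 0.0)  # between-lab component
    grand = statistics.mean(v for lab in lab_results for v in lab)
    return s_r2 ** 0.5, (s_r2 + s_L2) ** 0.5, grand

# Invented NDFD (%) results: 5 laboratories x 3 replicates each
labs = [[42.1, 43.0, 42.5], [44.8, 45.3, 44.9], [41.5, 41.9, 42.2],
        [43.6, 43.1, 43.8], [45.0, 44.2, 44.7]]
s_r, s_R, grand_mean = precision_components(labs)
```

Dividing s_r and s_R by the grand mean (×100) gives the coefficients of variation quoted in the abstract.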

  20. Intra- and inter-laboratory reproducibility and accuracy of the LuSens assay: A reporter gene-cell line to detect keratinocyte activation by skin sensitizers.

    PubMed

    Ramirez, Tzutzuy; Stein, Nadine; Aumann, Alexandra; Remus, Tina; Edwards, Amber; Norman, Kimberly G; Ryan, Cindy; Bader, Jackie E; Fehr, Markus; Burleson, Florence; Foertsch, Leslie; Wang, Xiaohong; Gerberick, Frank; Beilstein, Paul; Hoffmann, Sebastian; Mehling, Annette; van Ravenzwaay, Bennard; Landsiedel, Robert

    2016-04-01

    Several non-animal methods are now available to address the key events leading to skin sensitization as defined by the adverse outcome pathway. The KeratinoSens assay addresses the cellular event of keratinocyte activation and is a method accepted under OECD TG 442D. In this study, the results of an inter-laboratory evaluation of the "me-too" LuSens assay, a bioassay that uses a human keratinocyte cell line harboring a reporter gene construct composed of the rat antioxidant response element (ARE) of the NADPH:quinone oxidoreductase 1 gene and the luciferase gene, are described. Earlier in-house validation with 74 substances showed an accuracy of 82% in comparison to human data. When used in a battery of non-animal methods, even higher predictivity is achieved. To meet European validation criteria, a multicenter study was conducted in 5 laboratories. The study was divided into two phases, to assess 1) transferability of the method, and 2) reproducibility and accuracy. Phase I was performed by testing 8 non-coded test substances; the results showed a good transferability to naïve laboratories even without on-site training. Phase II was performed with 20 coded test substances (performance standards recommended by OECD, 2015). In this phase, the intra- and inter-laboratory reproducibility as well as accuracy of the method was evaluated. The data demonstrate a remarkable reproducibility of 100% and an accuracy of over 80% in identifying skin sensitizers, indicating a good concordance with in vivo data. These results demonstrate good transferability, reliability and accuracy of the method thereby achieving the standards necessary for use in a regulatory setting to detect skin sensitizers. PMID:26796489

  1. Precision, accuracy, and application of diver-towed underwater GPS receivers.

    PubMed

    Schories, Dirk; Niedzwiedz, Gerd

    2012-04-01

    Diver-towed global positioning system (GPS) handhelds have been used for a few years in underwater monitoring studies. We modeled the accuracy of this method using the software KABKURR, originally developed by the University of Rostock for fishing and marine engineering. Additionally, three field experiments were conducted to estimate the precision of the method and apply it in the field: (1) an experiment on underwater transects from 5 to 35 m in the Southern Chile fjord region, (2) a transect from 5 to 30 m under extreme climatic conditions in the Antarctic, and (3) an underwater tracking experiment at Lake Ranco, Southern Chile. The coiled cable length in relation to water depth is the main error source besides the signal quality of the GPS under calm weather conditions. The forces used in the model resulted in a displacement of 2.3 m at a depth of 5 m, 3.2 m at 10 m, 4.6 m at 20 m, 5.5 m at 30 m, and 6.8 m at 40 m when the cable was only 0.5 m longer than the water depth. The GPS buoy requires good buoyancy in order to keep its position at the water surface when the diver is trying to minimize any additional cable extension error. The diver has to apply a tensile force for shortening the cable length at the lower cable end. Repeated diving along transect lines from 5 to 35 m resulted in only small deviations independent of water depth, indicating the precision of the method for monitoring studies. Routing of given reference points with a Garmin 76CSx handheld placed in an underwater housing resulted in mean deviations of less than 6 m at a water depth of 10 m. Thus, we can confirm that diver-towed GPS handhelds give promising results when used for underwater research in shallow water and open a wide field of applicability, but no submeter accuracy is possible due to the different error sources. PMID:21614620
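The depth dependence of the displacement is close to simple taut-cable geometry, sqrt(L² − z²) for cable length L at depth z. This reproduces the 2.3, 3.2, and 5.5 m figures above; the 20 m and 40 m values differ slightly, presumably because the KABKURR model also accounts for drag forces. A sketch under that geometric assumption:

```python
import math

def horizontal_offset(cable_len, depth):
    """Horizontal displacement of the diver from the surface buoy for a
    taut cable, by right-triangle geometry: sqrt(L**2 - z**2)."""
    return math.sqrt(cable_len ** 2 - depth ** 2)

# With only 0.5 m of extra cable beyond the water depth:
offsets = {z: round(horizontal_offset(z + 0.5, z), 1) for z in (5, 10, 20, 30)}
```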

  2. Precision (Repeatability and Reproducibility) and Agreement of Corneal Power Measurements Obtained by Topcon KR-1W and iTrace

    PubMed Central

    Hua, Yanjun; Xu, Zequan; Qiu, Wei; Wu, Qiang

    2016-01-01

    Purpose To evaluate the repeatability and reproducibility of corneal power measurements obtained by Topcon KR-1W and iTrace, and assess the agreement with measurements obtained by Allegro Topolyzer and IOLMaster. Methods The right eyes of 100 normal subjects were prospectively scanned 3 times using all the 4 devices. Another observer performed additional 3 consecutive scans using the Topcon KR-1W and iTrace in the same session. About one week later, the first observer repeated the measurements using the Topcon KR-1W and iTrace. The steep keratometry (Ks), flat keratometry (Kf), mean keratometry (Km), J0 and J45 were analyzed. Repeatability and reproducibility of measurements were evaluated by the within-subject standard deviation (Sw), coefficient of variation (CoV), test-retest repeatability (2.77Sw), and intraclass correlation coefficient (ICC). Agreements between devices were assessed using Bland-Altman analysis and 95% limits of agreement (LoA). Results Intraobserver repeatability and interobserver and intersession reproducibility of the Ks, Kf and Km showed a CoV of no more than 0.5%, a 2.77Sw of 0.70 D or less, and an ICC of no less than 0.99. However, J0 and J45 showed poor intraobserver repeatability and interobserver and intersession reproducibility (all ICCs not greater than 0.446). Statistically significant differences existed between Topcon KR-1W and IOLMaster, Topcon KR-1W and iTrace, Topcon KR-1W and Topolyzer, iTrace and Topolyzer, iTrace and IOLMaster for Ks, Kf and Km measurements (all P < 0.05). The mean differences between Topcon KR-1W, iTrace, and the other 2 devices were small. The 95% LoA were approximately 1.0 D to 1.5 D for all measurements. Conclusions The Ks, Kf and Km obtained by Topcon KR-1W and iTrace showed excellent intraobserver repeatability and interobserver and intersession reproducibility in normal eyes. The agreement between Topcon KR-1W and Topolyzer, Topcon KR-1W and IOLMaster, iTrace and Topolyzer, iTrace and IOLMaster
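The repeatability indices reported here follow standard definitions: Sw is the root mean within-subject variance, and test-retest repeatability is 2.77·Sw. A sketch with invented keratometry readings:

```python
import math
import statistics

def repeatability_metrics(repeats_per_subject):
    """Within-subject SD (Sw, root mean within-subject variance),
    coefficient of variation (CoV, %) and test-retest repeatability
    (2.77 * Sw) for repeated measurements."""
    sw = math.sqrt(statistics.mean(
        statistics.variance(r) for r in repeats_per_subject))
    grand = statistics.mean(v for r in repeats_per_subject for v in r)
    return sw, 100.0 * sw / grand, 2.77 * sw

# Invented Km readings (dioptres), three scans per eye
data = [[43.10, 43.15, 43.05], [44.20, 44.28, 44.22], [42.95, 43.00, 42.90]]
sw, cov, rc = repeatability_metrics(data)
```

These invented readings give a CoV well under 0.5% and a 2.77·Sw well under 0.70 D, consistent with the thresholds quoted in the abstract.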

  3. Quality improvement process to assess tattoo alignment, set-up accuracy and isocentre reproducibility in pelvic radiotherapy patients

    PubMed Central

    Elsner, Kelly; Francis, Kate; Hruby, George; Roderick, Stephanie

    2014-01-01

    Introduction This quality improvement study tested three methods of tattoo alignment and isocentre definition to investigate if aligning lateral tattoos to minimise pitch, roll and yaw decreased set-up error, and if defining the isocentre using the lateral tattoos for cranio-caudal (CC) position improved isocentre reproducibility. The study population was patients receiving curative external beam radiotherapy (EBRT) for prostate cancer. The results are applicable to all supine pelvic EBRT patients. Methods The three sequential cohorts recruited 11, 11 and 10 patients respectively. A data set of 20 orthogonal pairs of electronic portal images (EPI) was acquired for each patient. EPIs were matched offline to digitally reconstructed radiographs. In cohort 1, lateral tattoos were adjusted to minimise roll. The anterior tattoo was used to define the isocentre. In cohort 2, lateral tattoos were aligned to minimise roll and yaw. Isocentre was defined as per cohort 1. In cohort 3, lateral tattoos were aligned as per cohort 2 and the anterior tattoo was adjusted to minimise pitch. Isocentre was defined by the lateral tattoos for CC position and the anterior tattoo for the left–right position. Results Cohort 3 results were superior as CC systematic and random set-up errors reduced from −1.3 mm to −0.5 mm, and 3.1 mm to 1.4 mm respectively, from cohort 1 to cohort 3. Isocentre reproducibility also improved from 86.7% to 92.1% of treatment isocentres within 5 mm of the planned isocentre. Conclusion The methods of tattoo alignment and isocentre definition in cohort 3 reduced set-up errors and improved isocentre reproducibility. PMID:25598978

  4. Reproducibility and accuracy of body composition assessments in mice by dual energy x-ray absorptiometry and time domain nuclear magnetic resonance

    PubMed Central

    Halldorsdottir, Solveig; Carmody, Jill; Boozer, Carol N.; Leduc, Charles A.; Leibel, Rudolph L.

    2011-01-01

    Objective To assess the accuracy and reproducibility of dual-energy absorptiometry (DXA; PIXImus™) and time domain nuclear magnetic resonance (TD-NMR; Bruker Optics) for the measurement of body composition of lean and obese mice. Subjects and measurements Thirty lean and obese mice (body weight range 19–67 g) were studied. Coefficients of variation for repeated (x 4) DXA and NMR scans of mice were calculated to assess reproducibility. Accuracy was assessed by comparing DXA and NMR results of ten mice to chemical carcass analyses. Accuracy of the respective techniques was also assessed by comparing DXA and NMR results obtained with ground meat samples to chemical analyses. Repeated scans of 10–25 gram samples were performed to test the sensitivity of the DXA and NMR methods to variation in sample mass. Results In mice, DXA and NMR reproducibility measures were similar for fat tissue mass (FTM) (DXA coefficient of variation [CV]=2.3%; and NMR CV=2.8%) (P=0.47), while reproducibility of lean tissue mass (LTM) estimates were better for DXA (1.0%) than NMR (2.2%) (

    Regarding accuracy, in mice, DXA overestimated (vs chemical composition) LTM (+1.7 ± 1.3 g [SD], ~8%, P <0.001) as well as FTM (+2.0 ± 1.2 g, ~46%, P <0.001). NMR estimates of LTM and FTM were virtually identical to chemical composition analysis (LTM: −0.05 ± 0.5 g, ~0.2%, P =0.79; FTM: +0.02 ± 0.7 g, ~15%, P =0.93). DXA- and NMR-determined LTM and FTM measurements were highly correlated with the corresponding chemical analyses (r²=0.92 and r²=0.99 for DXA LTM and FTM, respectively; r²=0.99 and r²=0.99 for NMR LTM and FTM, respectively). Sample mass did not affect accuracy in assessing chemical composition of small ground meat samples by either DXA or NMR. Conclusion DXA and NMR provide comparable levels of reproducibility in measurements of body composition of lean and obese mice. While DXA and NMR measures are highly correlated with chemical analysis measures, DXA consistently overestimates LTM.
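
    The reproducibility figures above are percent coefficients of variation over the four repeated scans per mouse; a minimal sketch of that calculation (the scan values below are hypothetical):

```python
import statistics

def coefficient_of_variation(values):
    """Percent CV: sample standard deviation of repeated scans over their mean."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample SD (n-1 denominator)
    return 100.0 * sd / mean

# Four repeated fat-tissue-mass scans (g) of one mouse
scans = [10.1, 10.3, 9.9, 10.2]
cv = coefficient_of_variation(scans)
```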

  5. Welcome detailed data, but with a grain of salt: accuracy, precision, uncertainty in flood inundation modeling

    NASA Astrophysics Data System (ADS)

    Dottori, Francesco; Di Baldassarre, Giuliano; Todini, Ezio

    2013-04-01

    New survey techniques are providing a huge amount of highly detailed and accurate data which can be extremely valuable for flood inundation modeling. Such data availability raises the issue of how to exploit their information content to provide reliable flood risk mapping and predictions. We think that these data should form the basis of hydraulic modelling whenever they are available. However, high expectations regarding these datasets should be tempered, as some important issues should be considered. These include: the large number of uncertainty sources in model structure and available data; the difficult evaluation of model results, due to the scarcity of observed data; computational efficiency; and the false confidence that can be given by high-resolution results, as the accuracy of results is not necessarily increased by higher precision. We briefly discuss these issues and existing approaches which can be used to manage highly detailed data. In our opinion, methods based on sub-grid and roughness upscaling treatments would in many instances be an appropriate solution to maintain consistency with the uncertainty related to model structure and the data available for model building and evaluation.

  6. Precision and accuracy of regional radioactivity quantitation using the maximum likelihood EM reconstruction algorithm

    SciTech Connect

    Carson, R.E.; Yan, Y.; Chodkowski, B.; Yap, T.K.; Daube-Witherspoon, M.E.

    1994-09-01

    The imaging characteristics of maximum likelihood (ML) reconstruction using the EM algorithm for emission tomography have been extensively evaluated. There has been less study of the precision and accuracy of ML estimates of regional radioactivity concentration. The authors developed a realistic brain slice simulation by segmenting a normal subject's MRI scan into gray matter, white matter, and CSF and produced PET sinogram data with a model that included detector resolution and efficiencies, attenuation, scatter, and randoms. Noisy realizations at different count levels were created, and ML and filtered backprojection (FBP) reconstructions were performed. The bias and variability of ROI values were determined. In addition, the effects of ML pixel size, image smoothing and region size reduction were assessed. ML estimates at 1,000 iterations (0.6 sec per iteration on a parallel computer) for 1-cm² gray matter ROIs showed negative biases of 6% ± 2%, which can be reduced to 0% ± 3% by removing the outer 1-mm rim of each ROI. FBP applied to the full-size ROIs had 15% ± 4% negative bias with 50% less noise than ML. Shrinking the FBP regions provided partial bias compensation with noise increases to levels similar to ML. Smoothing of ML images produced biases comparable to FBP with slightly less noise. Because of its heavy computational requirements, the ML algorithm will be most useful for applications in which achieving minimum bias is important.
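
    The ROI bias figures above are mean percent deviations of reconstructed activity from the simulated truth; a sketch of that bookkeeping (the reconstruction values below are hypothetical):

```python
def percent_bias(roi_estimates, true_activity):
    """Mean percent bias of ROI activity estimates relative to the known truth."""
    mean_est = sum(roi_estimates) / len(roi_estimates)
    return 100.0 * (mean_est - true_activity) / true_activity

# Noisy ML reconstructions of a gray-matter ROI with true activity 1.0
bias = percent_bias([0.93, 0.95, 0.94], 1.0)  # negative bias of about 6%
```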

  7. Modeling precision and accuracy of a LWIR microgrid array imaging polarimeter

    NASA Astrophysics Data System (ADS)

    Boger, James K.; Tyo, J. Scott; Ratliff, Bradley M.; Fetrow, Matthew P.; Black, Wiley T.; Kumar, Rakesh

    2005-08-01

    Long-wave infrared (LWIR) imaging is a prominent and useful technique for remote sensing applications. Moreover, polarization imaging has been shown to provide additional information about the imaged scene. However, polarization estimation requires that multiple measurements be made of each observed scene point under optically different conditions. This challenging measurement strategy makes the polarization estimates prone to error. The sources of this error differ depending upon the type of measurement scheme used. In this paper, we examine one particular measurement scheme, namely, a simultaneous multiple-measurement imaging polarimeter (SIP) using a microgrid polarizer array. The imager is composed of a microgrid polarizer masking a LWIR HgCdTe focal plane array (operating at 8.3-9.3 μm), and is able to make simultaneous modulated scene measurements. In this paper we present an analytical model that is used to predict the performance of the system in order to help interpret real results. This model is radiometrically accurate and accounts for the temperature of the camera system optics, spatial nonuniformity and drift, optical resolution and other sources of noise. This model is then used in simulation to validate it against laboratory measurements. The precision and accuracy of the SIP instrument is then studied.
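
    A microgrid polarimeter estimates the linear Stokes parameters from neighbouring pixels behind polarizers at 0°, 45°, 90°, and 135°. A minimal sketch of that idealized demodulation, ignoring the nonuniformity, drift, and resolution effects the paper's model accounts for (function name is illustrative):

```python
def stokes_from_microgrid(i0, i45, i90, i135):
    """Linear Stokes parameters (S0, S1, S2) from one 2x2 super-pixel of
    intensities behind 0/45/90/135-degree micropolarizers."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                        # horizontal vs vertical preference
    s2 = i45 - i135                      # +45 vs -45 preference
    return s0, s1, s2
```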

  8. Precision and accuracy of clinical quantification of myocardial blood flow by dynamic PET: A technical perspective.

    PubMed

    Moody, Jonathan B; Lee, Benjamin C; Corbett, James R; Ficaro, Edward P; Murthy, Venkatesh L

    2015-10-01

    A number of exciting advances in PET/CT technology and improvements in methodology have recently converged to enhance the feasibility of routine clinical quantification of myocardial blood flow and flow reserve. Recent promising clinical results are pointing toward an important role for myocardial blood flow in the care of patients. Absolute blood flow quantification can be a powerful clinical tool, but its utility will depend on maintaining precision and accuracy in the face of numerous potential sources of methodological errors. Here we review recent data and highlight the impact of PET instrumentation, image reconstruction, and quantification methods, and we emphasize (82)Rb cardiac PET which currently has the widest clinical application. It will be apparent that more data are needed, particularly in relation to newer PET technologies, as well as clinical standardization of PET protocols and methods. We provide recommendations for the methodological factors considered here. At present, myocardial flow reserve appears to be remarkably robust to various methodological errors; however, with greater attention to and more detailed understanding of these sources of error, the clinical benefits of stress-only blood flow measurement may eventually be more fully realized. PMID:25868451

  9. Evaluation of Precise Point Positioning accuracy under large total electron content variations in equatorial latitudes

    NASA Astrophysics Data System (ADS)

    Rodríguez-Bilbao, I.; Moreno Monge, B.; Rodríguez-Caderot, G.; Herraiz, M.; Radicella, S. M.

    2015-01-01

    The ionosphere is one of the largest contributors to errors in GNSS positioning. Although in Precise Point Positioning (PPP) the ionospheric delay is corrected to a first order through the 'iono-free combination', significant errors may still be observed when large electron density gradients are present. To confirm this phenomenon, the temporal behavior of intense fluctuations of total electron content (TEC) and PPP altitude accuracy at equatorial latitudes are analyzed during four years of different solar activity. For this purpose, equatorial plasma irregularities are identified with periods of high rate of change of TEC (ROT). The largest ROT values are observed from 19:00 to 01:00 LT, especially around magnetic equinoxes, although some differences exist between the stations depending on their location. Highest ROT values are observed in the American and African regions. In general, large ROT events are accompanied by frequent satellite signal losses and an increase in the PPP altitude error during years 2001, 2004 and 2011. A significant increase in the PPP altitude error RMS is observed in epochs of high ROT with respect to epochs of low ROT in years 2001, 2004 and 2011, reaching up to 0.26 m in the 19:00-01:00 LT period.
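
    ROT is conventionally the time derivative of slant TEC, expressed in TECU per minute; a minimal sketch of that differencing, assuming evenly spaced 30-second samples (values hypothetical):

```python
def rate_of_tec_change(tec_tecu, minutes_per_sample=0.5):
    """ROT (TECU/min) from successive TEC samples taken every
    `minutes_per_sample` minutes (30 s here)."""
    return [(b - a) / minutes_per_sample
            for a, b in zip(tec_tecu, tec_tecu[1:])]

rot = rate_of_tec_change([10.0, 10.5, 10.2])  # TECU samples 30 s apart
```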

  10. David Weston--Ocean science of invariant principles, total accuracy, and appropriate precision

    NASA Astrophysics Data System (ADS)

    Roebuck, Ian

    2002-11-01

    David Weston's entire professional career was as a member of the Royal Navy Scientific Service, working in the field of ocean acoustics and its applications to maritime operations. The breadth of his interests has often been remarked upon, but because of the sensitive nature of his work at the time, it was indeed much more diverse than his published papers showed. This presentation, from the successors to the laboratories he illuminated for many years, is an attempt to fill in at least some of the gaps. The presentation also focuses on the underlying scientific philosophy of David's work, rooted in the British tradition of applicable mathematics and physics. A deep appreciation of the role of invariants and dimensional methods, and awareness of the sensitivity of any models to changes to the input assumptions, was at the heart of his approach. The needs of the Navy kept him rigorous in requiring accuracy, and clear about the distinction between it and precision. Examples of these principles are included, still as relevant today as they were when he insisted on applying them 30 years ago.

  11. Sub-nm accuracy metrology for ultra-precise reflective X-ray optics

    NASA Astrophysics Data System (ADS)

    Siewert, F.; Buchheim, J.; Zeschke, T.; Brenner, G.; Kapitzki, S.; Tiedtke, K.

    2011-04-01

    The transport and monochromatization of synchrotron light from a high brilliant laser-like source to the experimental station without significant loss of brilliance and coherence is a challenging task in X-ray optics and requires optical elements of utmost accuracy. These are wave-front preserving plane mirrors with lengths of up to 1 m characterized by residual slope errors in the range of 0.05 μrad (rms) and values of 0.1 nm (rms) for micro-roughness. In the case of focusing optical elements like elliptical cylinders the required residual slope error is in the range of 0.25 μrad rms and better. In addition the alignment of optical elements is a critical and beamline performance limiting topic. Thus the characterization of ultra-precise reflective optical elements for FEL-beamline application in the free and mounted states is of significant importance. We will discuss recent results in the field of metrology achieved at the BESSY-II Optics Laboratory (BOL) of the Helmholtz Zentrum Berlin (HZB) by use of the Nanometer Optical Component Measuring Machine (NOM). Different types of mirror have been inspected by line-scan and slope mapping in the free and mounted states. Based on these results the mirror clamping of a combined mirror/grating set-up for the BL-beamlines at FLASH was improved.

  12. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French Space Agency, to make precise radar altimeter measurements of the ocean surface. After a remarkably successful 13 years of mapping the ocean surface, T/P lost its ability to maneuver and was de-commissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and it still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits will be discussed. The latest orbits, with 1.5 cm RMS radial accuracy, represent a significant improvement over the 2.0-cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  13. Measurement Precision and Accuracy of the Centre Location of AN Ellipse by Weighted Centroid Method

    NASA Astrophysics Data System (ADS)

    Matsuoka, R.

    2015-03-01

    Circular targets are often utilized in photogrammetry, and a circle on a plane is projected as an ellipse onto an oblique image. This paper reports an analysis conducted in order to investigate the measurement precision and accuracy of the centre location of an ellipse on a digital image by an intensity-weighted centroid method. An ellipse with a semi-major axis a, a semi-minor axis b, and a rotation angle θ of the major axis is investigated. In the study an equivalent radius r = (a²cos²θ + b²sin²θ)^(1/2) is adopted as a measure of the dimension of an ellipse. First an analytical expression representing a measurement error (εx, εy) is obtained. Then variances Vx of εx are obtained at 1/256 pixel intervals from 0.5 to 100 pixels in r by numerical integration, because a formula representing Vx cannot be obtained analytically when r > 0.5. The results of the numerical integration indicate that Vx would oscillate in a 0.5 pixel cycle in r and that Vx excluding the oscillation component would be inversely proportional to the cube of r. Finally an effective approximate formula of Vx from 0.5 to 100 pixels in r is obtained by least squares adjustment. The obtained formula is a fractional expression whose numerator is a fifth-degree polynomial in {r − 0.5·int(2r)}, expressing the oscillation component, and whose denominator is the cube of r. Here int(x) is the function returning the integer part of x. The coefficients of the fifth-degree polynomial of the numerator can be expressed by a quadratic polynomial in {0.5·int(2r) + 0.25}.
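
    The two quantities the analysis is built on are simple to evaluate: the equivalent radius r = (a²cos²θ + b²sin²θ)^(1/2) and the oscillation variable {r − 0.5·int(2r)} driving the half-pixel cycle. A minimal sketch:

```python
import math

def equivalent_radius(a, b, theta):
    """Equivalent radius r = sqrt(a^2 cos^2(theta) + b^2 sin^2(theta))
    for an ellipse with semi-axes a, b and major-axis rotation theta (rad)."""
    return math.sqrt((a * math.cos(theta)) ** 2 + (b * math.sin(theta)) ** 2)

def oscillation_argument(r):
    """The paper's oscillation variable {r - 0.5*int(2r)}, which repeats
    with a 0.5-pixel period in r."""
    return r - 0.5 * int(2 * r)
```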

  14. A comprehensive assessment of RNA-seq accuracy, reproducibility and information content by the Sequencing Quality Control consortium

    PubMed Central

    2014-01-01

    We present primary results from the Sequencing Quality Control (SEQC) project, coordinated by the United States Food and Drug Administration. Examining Illumina HiSeq, Life Technologies SOLiD and Roche 454 platforms at multiple laboratory sites using reference RNA samples with built-in controls, we assess RNA sequencing (RNA-seq) performance for junction discovery and differential expression profiling and compare it to microarray and quantitative PCR (qPCR) data using complementary metrics. At all sequencing depths, we discover unannotated exon-exon junctions, with >80% validated by qPCR. We find that measurements of relative expression are accurate and reproducible across sites and platforms if specific filters are used. In contrast, RNA-seq and microarrays do not provide accurate absolute measurements, and gene-specific biases are observed, for these and qPCR. Measurement performance depends on the platform and data analysis pipeline, and variation is large for transcript-level profiling. The complete SEQC data sets, comprising >100 billion reads (10Tb), provide unique resources for evaluating RNA-seq analyses for clinical and regulatory settings. PMID:25150838

  15. Effects of SNR on the Accuracy and Reproducibility of DTI-derived Fractional Anisotropy, Mean Diffusivity, and Principal Eigenvector Measurements at 1.5T

    PubMed Central

    Farrell, Jonathan A.D.; Landman, Bennett A.; Jones, Craig K.; Smith, Seth A.; Prince, Jerry L.; van Zijl, Peter C.M.; Mori, Susumu

    2010-01-01

    Purpose To develop an experimental protocol to calculate the precision and accuracy of fractional anisotropy (FA), mean diffusivity (MD), and the orientation of the principal eigenvector (PEV) as a function of the signal to noise ratio (SNR) in vivo. Materials and Methods A healthy male volunteer was scanned in three separate scanning sessions, yielding a total of 45 DTI scans. To provide FA, MD, and PEV as a function of SNR, sequential scans from a scan session were grouped into non-intersecting sets. Analysis of the accuracy and precision of the DTI-derived contrasts was done in both a voxel-wise and ROI-based manner. Results An upward bias of FA and no significant bias in MD were present as SNR decreased, confirming results from simulation-based studies. Notably, while the precision of the PEV became worse at low SNR, no bias in the PEV orientation was observed. Overall, an accurate and precise quantification of FA values in GM requires substantially more SNR than the quantification of WM FA values. Conclusion This study provides guidance for FA, MD, and PEV quantification and a means to investigate the minimal detectable differences within and across scan sessions as a function of SNR. PMID:17729339
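
    FA and MD are standard scalar contrasts of the diffusion-tensor eigenvalues; a minimal sketch of their textbook definitions (not the authors' processing pipeline):

```python
import math

def fa_md(l1, l2, l3):
    """Fractional anisotropy and mean diffusivity from the three
    eigenvalues of the diffusion tensor."""
    md = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    fa = math.sqrt(0.5) * num / den
    return fa, md

# Prolate tensor typical of white matter (eigenvalues in 1e-3 mm^2/s)
fa_wm, md_wm = fa_md(1.7, 0.3, 0.3)
```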

  16. Accuracy, precision and response time of consumer bimetal and digital thermometers for cooked ground beef patties and chicken breasts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Three models each of consumer instant-read bimetal and digital thermometers were tested for accuracy, precision and response time compared to a calibrated thermocouple in cooked 80 percent and 90 percent lean ground beef patties and boneless and bone-in split chicken breasts. At the recommended inse...

  17. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823
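
    The Bland-Altman agreement check above reduces to the mean difference (bias) between paired measurements and its 95% limits of agreement (bias ± 1.96 SD of the differences). A sketch with hypothetical paired measurements:

```python
import statistics

def bland_altman_limits(method_a, method_b):
    """Bias and 95% limits of agreement for paired measurements
    from two methods (e.g., plaster vs. PUT model measurements)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical linear measurements (mm) of the same arch dimension
bias, lower, upper = bland_altman_limits(
    [10.1, 10.2, 10.3, 10.4], [10.0, 10.0, 10.0, 10.0])
```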

  18. Using statistics and software to maximize precision and accuracy in U-Pb geochronological measurements

    NASA Astrophysics Data System (ADS)

    McLean, N.; Bowring, J. F.; Bowring, S. A.

    2009-12-01

    Uncertainty in U-Pb geochronology results from a wide variety of factors, including isotope ratio determinations, common Pb corrections, initial daughter product disequilibria, instrumental mass fractionation, isotopic tracer calibration, and U decay constants and isotopic composition. The relative contribution of each depends on the proportion of radiogenic to common Pb, the measurement technique, and the quality of systematic error determinations. Random and systematic uncertainty contributions may be propagated into individual analyses or for an entire population, and must be propagated correctly to accurately interpret data. Tripoli and U-Pb_Redux comprise a new data reduction and error propagation software package that combines robust cycle measurement statistics with rigorous multivariate data analysis and presents the results graphically and interactively. Maximizing the precision and accuracy of a measurement begins with correct appraisal and codification of the systematic and random errors for each analysis. For instance, a large dataset of total procedural Pb blank analyses defines a multivariate normal distribution, describing the mean of and variation in isotopic composition (IC) that must be subtracted from each analysis. Uncertainty in the size and IC of each Pb blank is related to the (random) uncertainty in ratio measurements and the (systematic) uncertainty involved in tracer subtraction. Other sample and measurement parameters can be quantified in the same way, represented as statistical distributions that describe their uncertainty or variation, and are input into U-Pb_Redux as such before the raw sample isotope ratios are measured. During sample measurement, U-Pb_Redux and Tripoli can relay cycle data in real time, calculating a date and uncertainty for each new cycle or block. The results are presented in U-Pb_Redux as an interactive user interface with multiple visualization tools. One- and two-dimensional plots of each calculated date and

  19. Sensitivity Analysis for Characterizing the Accuracy and Precision of JEM/SMILES Mesospheric O3

    NASA Astrophysics Data System (ADS)

    Esmaeili Mahani, M.; Baron, P.; Kasai, Y.; Murata, I.; Kasaba, Y.

    2011-12-01

    The main purpose of this study is to evaluate the Superconducting Submillimeter-Wave Limb-Emission Sounder (SMILES) measurements of mesospheric ozone, O3. As a first step, the error due to the impact of Mesospheric Temperature Inversions (MTIs) on ozone retrieval has been determined. The impacts of other parameters, such as pressure variability and solar events, on mesospheric O3 will also be investigated. Ozone is important because the stratospheric O3 layer protects life on Earth by absorbing harmful UV radiation. In the mesosphere, however, O3 chemistry can be studied without the complications of heterogeneous chemistry and dynamical variations, owing to the short lifetime of O3 in this region. Mesospheric ozone is produced by the photo-dissociation of O2 and the subsequent reaction of O with O2. Diurnal and semi-diurnal variations of mesospheric ozone are associated with variations in solar activity. The amplitude of the diurnal variation increases from a few percent at an altitude of 50 km to about 80 percent at 70 km. Despite the apparent simplicity of this situation, significant disagreements exist between predictions from existing models and observations, which need to be resolved. SMILES is a highly sensitive radiometer with a precision of a few to several tens of percent from the upper troposphere to the mesosphere. SMILES was developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT) and is located on the Japanese Experiment Module (JEM) of the International Space Station (ISS). SMILES successfully measured the vertical distributions and the diurnal variations of various atmospheric species in the latitude range 38S to 65N from October 2009 to April 2010.
A sensitivity analysis is being conducted to investigate the expected precision and accuracy of the mesospheric O3 profiles (from 50 to 90 km height) due to the impact of Mesospheric Temperature

  20. Accuracy and precision of end-expiratory lung-volume measurements by automated nitrogen washout/washin technique in patients with acute respiratory distress syndrome

    PubMed Central

    2011-01-01

    Introduction End-expiratory lung volume (EELV) is decreased in acute respiratory distress syndrome (ARDS), and bedside EELV measurement may help to set positive end-expiratory pressure (PEEP). Nitrogen washout/washin for EELV measurement is available at the bedside, but assessments of accuracy and precision in real-life conditions are scant. Our purpose was to (a) assess EELV measurement precision in ARDS patients at two PEEP levels (three pairs of measurements), and (b) compare the changes (Δ) induced by PEEP for total EELV with the PEEP-induced changes in lung volume above functional residual capacity measured with passive spirometry (ΔPEEP-volume). The minimal predicted increase in lung volume was calculated from compliance at low PEEP and ΔPEEP to ensure the validity of lung-volume changes. Methods Thirty-four patients with ARDS were prospectively included in five university-hospital intensive care units. ΔEELV and ΔPEEP-volumes were compared between 6 and 15 cm H2O of PEEP. Results After exclusion of three patients, variability of the nitrogen technique was less than 4%, and the largest difference between measurements was 81 ± 64 ml. ΔEELV and ΔPEEP-volume were only weakly correlated (r² = 0.47; 95% confidence interval limits, −414 to 608 ml). In four patients with the highest PEEP (≥ 16 cm H2O), ΔEELV was lower than the minimal predicted increase in lung volume, suggesting flawed measurements, possibly due to leaks. Excluding those from the analysis markedly strengthened the correlation between ΔEELV and ΔPEEP-volume (r² = 0.80). Conclusions In most patients, the EELV technique has good reproducibility and accuracy, even at high PEEP. At high pressures, its accuracy may be limited in case of leaks. The minimal predicted increase in lung volume may help to check for accuracy. PMID:22166727
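
    The validity check described above is simple arithmetic: the minimal predicted lung-volume increase is respiratory-system compliance at low PEEP multiplied by the PEEP step. A sketch with hypothetical numbers:

```python
def minimal_predicted_volume_gain(compliance_ml_per_cmh2o, delta_peep_cmh2o):
    """Minimal expected increase in lung volume (mL) when PEEP is raised:
    compliance at low PEEP times the PEEP increment."""
    return compliance_ml_per_cmh2o * delta_peep_cmh2o

# PEEP raised from 6 to 15 cmH2O in a patient with compliance 40 mL/cmH2O;
# a measured ΔEELV well below this would suggest a flawed measurement (leak)
gain = minimal_predicted_volume_gain(40.0, 15 - 6)
```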

  1. Basic investigations on the performance of a normoxic polymer gel with tetrakis-hydroxy-methyl-phosphonium chloride as an oxygen scavenger: Reproducibility, accuracy, stability, and dose rate dependence

    SciTech Connect

    Bayreder, Christian; Georg, Dietmar; Moser, Ewald; Berg, Andreas

    2006-07-15

    Magnetic resonance (MR)-based polymer gel dosimetry using normoxic polymer gels, represents a new dosimetric method specially suited for high-resolution three-dimensional dosimetric problems. The aim of this study was to investigate the dose response with regard to stability, accuracy, reproducibility, and the dose rate dependence. Tetrakis-hydroxy-methyl-phosphonium chloride (THPC) is used as an oxygen scavenger, and methacrylic acid as a monomer. Accuracy, reproducibility, and dose resolution were determined for MR protocols at low spatial resolution (typical for clinical scanners), medium, and microimaging-resolution protocols at three different dose levels. The dose-response stability and preirradiation-induced variations in R2, related to the time interval between preparation and irradiation of the polymer gel, were investigated. Also postirradiation stability of the polymer gel was considered. These experiments were performed using a {sup 60}Co beam (E=1.2 MV) in a water phantom. Moreover, we investigated the dose rate dependence in the low, medium, and saturation dose region of the normoxic polymer gel using a linear accelerator at photon energy of 25 MV. MR scanning was performed on a 3 T whole body scanner (MEDSPEC 30/80, BRUKER BIOSPIN, Ettlingen, Germany) using several coils and different gradient systems adapted to the acquired spatial resolution investigated. For T2-parameter selective imaging and determination of the relaxation rate R2=1/T2, a multiple spin echo sequence with 20 equidistant echoes was used. With regard to preirradiation induced variations R2 increases significantly with the increasing time interval between the polymer gel preparation and irradiation. Only a slight increase in R2 can be observed for varying the postirradiation-time solely. The dose reproducibility at voxel volumes of about 1.4x1.4x2 mm{sup 3} is better than 2%. The accuracy strongly depends on the calibration curve. 
THPC represents a very effective oxygen scavenger in

  2. Use of single-representative reverse-engineered surface-models for RSA does not affect measurement accuracy and precision.

    PubMed

    Seehaus, Frank; Schwarze, Michael; Flörkemeier, Thilo; von Lewinski, Gabriela; Kaptein, Bart L; Jakubowitz, Eike; Hurschler, Christof

    2016-05-01

    Implant migration can be accurately quantified by model-based Roentgen stereophotogrammetric analysis (RSA), using an implant surface model to locate the implant relative to the bone. In a clinical situation, a single reverse engineering (RE) model for each implant type and size is used. It is unclear to what extent the accuracy and precision of migration measurement is affected by implant manufacturing variability unaccounted for by a single representative model. Individual RE models were generated for five short-stem hip implants of the same type and size. Two phantom analyses and one clinical analysis were performed: "Accuracy-matched models": one stem was assessed, and the results from the original RE model were compared with randomly selected models. "Accuracy-random model": each of the five stems was assessed and analyzed using one randomly selected RE model. "Precision-clinical setting": implant migration was calculated for eight patients, and all five available RE models were applied to each case. For the two phantom experiments, the 95%CI of the bias ranged from -0.28 mm to 0.30 mm for translation and -2.3° to 2.5° for rotation. In the clinical setting, precision is less than 0.5 mm and 1.2° for translation and rotation, respectively, except for rotations about the proximodistal axis (<4.1°). High accuracy and precision of model-based RSA can be achieved and are not biased by using a single representative RE model. At least for implants similar in shape to the investigated short-stem, individual models are not necessary. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 34:903-910, 2016. PMID:26553748

  3. Dichotomy in perceptual learning of interval timing: calibration of mean accuracy and precision differ in specificity and time course.

    PubMed

    Sohn, Hansem; Lee, Sang-Hun

    2013-01-01

    Our brain is inexorably confronted with a dynamic environment in which it has to fine-tune spatiotemporal representations of incoming sensory stimuli and commit to a decision accordingly. Among those representations needing constant calibration is interval timing, which plays a pivotal role in various cognitive and motor tasks. To investigate how perceived time interval is adjusted by experience, we conducted a human psychophysical experiment using an implicit interval-timing task in which observers responded to an invisible bar drifting at a constant speed. We tracked daily changes in distributions of response times for a range of physical time intervals over multiple days of training with two major types of timing performance, mean accuracy and precision. We found a decoupled dynamics of mean accuracy and precision in terms of their time course and specificity of perceptual learning. Mean accuracy showed feedback-driven instantaneous calibration evidenced by a partial transfer around the time interval trained with feedback, while timing precision exhibited a long-term slow improvement with no evident specificity. We found that a Bayesian observer model, in which a subjective time interval is determined jointly by a prior and likelihood function for timing, captures the dissociative temporal dynamics of the two types of timing measures simultaneously. Finally, the model suggested that the width of the prior, not the likelihoods, gradually shrinks over sessions, substantiating the important role of prior knowledge in perceptual learning of interval timing. PMID:23076112
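
    The Bayesian observer model described above combines a prior over intervals with a noisy measurement; in the Gaussian case the posterior is the familiar precision-weighted average, and narrowing the prior pulls estimates toward its mean. A minimal sketch (illustrative only, not the authors' fitted model):

```python
def posterior_interval(prior_mean, prior_sd, measured, measure_sd):
    """Posterior mean and SD for a Gaussian prior over time intervals
    combined with a Gaussian likelihood around the measured interval."""
    wp = 1.0 / prior_sd ** 2      # prior precision
    wl = 1.0 / measure_sd ** 2    # likelihood precision
    mean = (wp * prior_mean + wl * measured) / (wp + wl)
    sd = (wp + wl) ** -0.5
    return mean, sd

# Prior centred at 1.0 s; a noisy measurement of 2.0 s is pulled toward it
est, est_sd = posterior_interval(1.0, 0.1, 2.0, 0.1)
```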

  4. Quantifying Vegetation Change in Semiarid Environments: Precision and Accuracy of Spectral Mixture Analysis and the Normalized Difference Vegetation Index

    NASA Technical Reports Server (NTRS)

    Elmore, Andrew J.; Mustard, John F.; Manning, Sara J.; Elome, Andrew J.

    2000-01-01

    Because in situ techniques for determining vegetation abundance in semiarid regions are labor intensive, they usually are not feasible for regional analyses. Remotely sensed data provide the large spatial scale necessary, but their precision and accuracy in determining vegetation abundance and its change through time have not been quantitatively determined. In this paper, the precision and accuracy of two techniques, Spectral Mixture Analysis (SMA) and Normalized Difference Vegetation Index (NDVI) applied to Landsat TM data, are assessed quantitatively using high-precision in situ data. In Owens Valley, California, we have 6 years of continuous field data (1991-1996) for 33 sites acquired concurrently with six cloudless Landsat TM images. The multitemporal remotely sensed data were coregistered to within 1 pixel, radiometrically intercalibrated using temporally invariant surface features, and geolocated to within 30 m. These procedures facilitated the accurate location of field-monitoring sites within the remotely sensed data. Formal uncertainties in the registration, radiometric alignment, and modeling were determined. Results show that SMA absolute percent live cover (%LC) estimates are accurate to within ±4.0%LC and estimates of change in live cover have a precision of ±3.8%LC. Furthermore, even when applied to areas of low vegetation cover, the SMA approach correctly determined the sense of change (i.e., positive or negative) in 87% of the samples. SMA results are superior to NDVI, which, although correlated with live cover, is not a quantitative measure and showed the correct sense of change in only 67% of the samples.
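
    For reference, both measures compared above are straightforward to compute once surface reflectances are in hand. The sketch below shows NDVI and an unconstrained two-endmember linear unmixing; real SMA typically adds sum-to-one and non-negativity constraints, and the endmember spectra here are invented for illustration.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and NIR reflectance."""
    return (nir - red) / (nir + red)

def unmix(pixel, endmembers):
    """Linear spectral mixture analysis: least-squares fractions of the
    endmember spectra (columns) that best reproduce the pixel spectrum."""
    fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return fractions

# Invented two-band endmembers: columns are bare soil and live vegetation.
E = np.array([[0.10, 0.05],   # red band
              [0.20, 0.50]])  # NIR band
pixel = 0.7 * E[:, 0] + 0.3 * E[:, 1]   # a pixel with 30% live cover
fractions = unmix(pixel, E)
```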

  5. Accuracy and precision of water quality parameters retrieved from particle swarm optimisation in a sub-tropical lake

    NASA Astrophysics Data System (ADS)

    Campbell, Glenn; Phinn, Stuart R.

    2009-09-01

    Optical remote sensing has been used to map and monitor water quality parameters such as the concentrations of hydrosols (chlorophyll and other pigments, total suspended material, and coloured dissolved organic matter). In the inversion/optimisation approach, a forward model is used to simulate the water reflectance spectra from a set of parameters, and the set that gives the closest match is selected as the solution. The accuracy of the hydrosol retrieval is dependent on an efficient search of the solution space and the reliability of the similarity measure. In this paper Particle Swarm Optimisation (PSO) was used to search the solution space and seven similarity measures were trialled. The accuracy and precision of this method depend on the inherent noise in the spectral bands of the sensor being employed, as well as the radiometric corrections applied to images to calculate the subsurface reflectance. Using the Hydrolight® radiative transfer model and typical hydrosol concentrations from Lake Wivenhoe, Australia, MERIS reflectance spectra were simulated. The accuracy and precision of hydrosol concentrations derived from each similarity measure were evaluated after errors associated with the air-water interface correction, atmospheric correction and the IOP measurement were modelled and applied to the simulated reflectance spectra. The use of band-specific empirically estimated values for the anisotropy value in the forward model improved the accuracy of hydrosol retrieval. The results of this study will be used to improve an algorithm for the remote sensing of water quality for freshwater impoundments.
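
    A minimal particle swarm optimiser of the kind used to search the solution space can be sketched as follows. This is a textbook-style illustration with invented hyperparameters and a toy quadratic cost standing in for the spectral similarity measure; it is not the configuration or forward model used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimiser over box bounds (illustrative only)."""
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))  # positions
    v = np.zeros_like(x)                                  # velocities
    pbest = x.copy()
    pbest_f = np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                    # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g

# Toy stand-in for "minimise spectral mismatch": true parameters (1.0, 2.0).
best = pso(lambda p: float(((p - np.array([1.0, 2.0])) ** 2).sum()),
           bounds=[(0.0, 5.0), (0.0, 5.0)])
```

    In the retrieval setting, the cost function would compare a forward-modelled reflectance spectrum against the observed one under the chosen similarity measure.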

  6. Nano-accuracy measurements and the surface profiler by use of Monolithic Hollow Penta-Prism for precision mirror testing

    NASA Astrophysics Data System (ADS)

    Qian, Shinan; Wayne, Lewis; Idir, Mourad

    2014-09-01

    We developed a Monolithic Hollow Penta-Prism Long Trace Profiler-NOM (MHPP-LTP-NOM) to attain nano-accuracy in testing plane- and near-plane mirrors. A newly developed Monolithic Hollow Penta-Prism (MHPP), combining the advantages of the PPLTP and the ELCOMAT autocollimator of the Nano-Optic-Measuring Machine (NOM), is used to enhance the accuracy and stability of our measurements. Our precise system-alignment method, using a newly developed CCD position-monitor system (PMS), assured significant thermal stability and, along with our optimized noise-reduction analytic method, ensured nano-accuracy measurements. Herein we report our test results; all errors are about 60 nrad rms or less in tests of plane- and near-plane mirrors.

  7. Optimizing the accuracy and precision of the single-pulse Laue technique for synchrotron photo-crystallography

    SciTech Connect

    Kaminski, Radoslaw; Graber, Timothy; Benedict, Jason B.; Henning, Robert; Chen, Yu-Sheng; Scheins, Stephan; Messerschmidt, Marc; Coppens, Philip

    2010-06-24

    The accuracy that can be achieved in single-pulse pump-probe Laue experiments is discussed. It is shown that with careful tuning of the experimental conditions a reproducibility of the intensity ratios of equivalent intensities obtained in different measurements of 3-4% can be achieved. The single-pulse experiments maximize the time resolution that can be achieved and, unlike stroboscopic techniques in which the pump-probe cycle is rapidly repeated, minimize the temperature increase due to the laser exposure of the sample.

  8. Optimizing the accuracy and precision of the single-pulse Laue technique for synchrotron photo-crystallography

    PubMed Central

    Kamiński, Radosław; Graber, Timothy; Benedict, Jason B.; Henning, Robert; Chen, Yu-Sheng; Scheins, Stephan; Messerschmidt, Marc; Coppens, Philip

    2010-01-01

    The accuracy that can be achieved in single-pulse pump-probe Laue experiments is discussed. It is shown that with careful tuning of the experimental conditions a reproducibility of the intensity ratios of equivalent intensities obtained in different measurements of 3–4% can be achieved. The single-pulse experiments maximize the time resolution that can be achieved and, unlike stroboscopic techniques in which the pump-probe cycle is rapidly repeated, minimize the temperature increase due to the laser exposure of the sample. PMID:20567080

  9. A high-precision Jacob's staff with improved spatial accuracy and laser sighting capability

    NASA Astrophysics Data System (ADS)

    Patacci, Marco

    2016-04-01

    A new Jacob's staff design incorporating a 3D positioning stage and a laser sighting stage is described. The first combines a compass and a circular spirit level on a movable bracket and the second introduces a laser able to slide vertically and rotate on a plane parallel to bedding. The new design allows greater precision in stratigraphic thickness measurement while restricting the cost and maintaining speed of measurement to levels similar to those of a traditional Jacob's staff. Greater precision is achieved as a result of: a) improved 3D positioning of the rod through the use of the integrated compass and spirit level holder; b) more accurate sighting of geological surfaces by tracing with height adjustable rotatable laser; c) reduced error when shifting the trace of the log laterally (i.e. away from the dip direction) within the trace of the laser plane, and d) improved measurement of bedding dip and direction necessary to orientate the Jacob's staff, using the rotatable laser. The new laser holder design can also be used to verify parallelism of a geological surface with structural dip by creating a visual planar datum in the field and thus allowing determination of surfaces which cut the bedding at an angle (e.g., clinoforms, levees, erosion surfaces, amalgamation surfaces, etc.). Stratigraphic thickness measurements and estimates of measurement uncertainty are valuable to many applications of sedimentology and stratigraphy at different scales (e.g., bed statistics, reconstruction of palaeotopographies, depositional processes at bed scale, architectural element analysis), especially when a quantitative approach is applied to the analysis of the data; the ability to collect larger data sets with improved precision will increase the quality of such studies.

  10. Performance characterization of precision micro robot using a machine vision system over the Internet for guaranteed positioning accuracy

    NASA Astrophysics Data System (ADS)

    Kwon, Yongjin; Chiou, Richard; Rauniar, Shreepud; Sosa, Horacio

    2005-11-01

    There is a missing link between a virtual development environment (e.g., CAD/CAM-driven offline robotic programming) and the production requirements of the actual robotic workcell. Simulated robot path planning and generation of pick-and-place coordinate points will not exactly coincide with the robot performance, owing to unconsidered variations in individual robot repeatability and thermal expansion of robot linkages. This is especially important when robots are controlled and programmed remotely (e.g., through the Internet or Ethernet), since remote users have no physical contact with the robotic systems. The current technology in Internet-based manufacturing, limited to a web camera for live image transfer, has posed a significant challenge for robot task performance. Consequently, the calibration and accuracy quantification of robots critical to precision assembly have to be performed on-site, and the verification of robot positioning accuracy cannot be ascertained remotely. In the worst case, remote users have to assume the robot performance envelope provided by the manufacturers, which may cause a potentially serious hazard of system crashes and damage to the parts and robot arms. Currently, there is no reliable methodology for remotely calibrating robot performance. The objective of this research is, therefore, to advance the current state of the art in Internet-based control and monitoring technology, with a specific aim of calibrating the accuracy of a micro precision robotic system by developing a novel methodology utilizing Ethernet-based smart image sensors and other advanced precision sensory control networks.

  11. ACCURACY AND PRECISION OF A METHOD TO STUDY KINEMATICS OF THE TEMPOROMANDIBULAR JOINT: COMBINATION OF MOTION DATA AND CT IMAGING

    PubMed Central

    Baltali, Evre; Zhao, Kristin D.; Koff, Matthew F.; Keller, Eugene E.; An, Kai-Nan

    2008-01-01

    The purpose of the study was to test the precision and accuracy of a method used to track selected landmarks during motion of the temporomandibular joint (TMJ). A precision phantom device was constructed and relative motions between two rigid bodies on the phantom device were measured using optoelectronic (OE) and electromagnetic (EM) motion tracking devices. The motion recordings were also combined with a 3D CT image for each type of motion tracking system (EM+CT and OE+CT) to mimic methods used in previous studies. In the OE and EM data collections, specific landmarks on the rigid bodies were determined using digitization. In the EM+CT and OE+CT data sets, the landmark locations were obtained from the CT images. 3D linear distances and 3D curvilinear path distances were calculated for the points. The accuracy and precision for all 4 methods were evaluated (EM, OE, EM+CT and OE+CT). In addition, results were compared with and without the CT imaging (EM vs. EM+CT, OE vs. OE+CT). All systems overestimated the actual 3D curvilinear path lengths. All systems also underestimated the actual rotation values. The accuracy of all methods was within 0.5 mm for 3D curvilinear path calculations, 0.05 mm for 3D linear distance calculations, and 0.2° for rotation calculations. In addition, Bland-Altman plots for each configuration of the systems suggest that measurements obtained from either system are repeatable and comparable. PMID:18617178
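
    The 3D linear and curvilinear path distances evaluated above reduce to Euclidean norms over the tracked landmark positions, as in the generic sketch below (coordinates invented). Because tracking noise can only add length to each segment, path-length estimates are biased upward, which is consistent with every system overestimating the curvilinear path.

```python
import numpy as np

def curvilinear_path_3d(points):
    """3D curvilinear path length: sum of Euclidean distances between
    successive tracked landmark positions."""
    p = np.asarray(points, dtype=float)
    return float(np.linalg.norm(np.diff(p, axis=0), axis=1).sum())

def linear_distance_3d(points):
    """Straight-line 3D distance from the first to the last position."""
    p = np.asarray(points, dtype=float)
    return float(np.linalg.norm(p[-1] - p[0]))

# Invented landmark track (mm) over one opening motion
track = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]]
```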

  12. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study.

    PubMed

    Brodén, Cyrus; Olivecrona, Henrik; Maguire, Gerald Q; Noz, Marilyn E; Zeleznik, Michael P; Sköldenberg, Olof

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting. PMID:27478832

  13. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study

    PubMed Central

    Olivecrona, Henrik; Maguire, Gerald Q.; Noz, Marilyn E.; Zeleznik, Michael P.

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting. PMID:27478832

  14. The accuracy and precision of DXA for assessing body composition in team sport athletes.

    PubMed

    Bilsborough, Johann Christopher; Greenway, Kate; Opar, David; Livingstone, Steuart; Cordy, Justin; Coutts, Aaron James

    2014-01-01

    This study determined the precision of pencil and fan beam dual-energy X-ray absorptiometry (DXA) devices for assessing body composition in professional Australian Football players. Thirty-six professional Australian Football players, in two groups (fan DXA, N = 22; pencil DXA, N = 25), underwent two consecutive DXA scans. A whole body phantom with known values for fat mass, bone mineral content and fat-free soft tissue mass was also used to validate each DXA device. Additionally, the criterion phantom was scanned 20 times by each DXA to assess reliability. Test-retest reliability of DXA anthropometric measures was derived from repeated fan and pencil DXA scans. Fat-free soft tissue mass and bone mineral content from both DXA units showed strong correlations with, and trivial differences to, the criterion phantom values. Fat mass from both DXA devices showed moderate correlations with criterion measures (pencil: r = 0.64; fan: r = 0.67) and moderate differences with the criterion value. The limits of agreement were similar for both fan beam DXA and pencil beam DXA (fan: fat-free soft tissue mass = -1650 ± 179 g, fat mass = -357 ± 316 g, bone mineral content = 289 ± 122 g; pencil: fat-free soft tissue mass = -1701 ± 257 g, fat mass = -359 ± 326 g, bone mineral content = 177 ± 117 g). DXA also showed excellent precision for bone mineral content (coefficient of variation (%CV) fan = 0.6%; pencil = 1.5%) and fat-free soft tissue mass (%CV fan = 0.3%; pencil = 0.5%) and acceptable reliability for fat measures (%CV fan: fat mass = 2.5%, percent body fat = 2.5%; pencil: fat mass = 5.9%, percent body fat = 5.7%). Both DXA devices provide precise measures of fat-free soft tissue mass and bone mineral content in lean Australian Football players. DXA-derived fat-free soft tissue mass and bone mineral content are suitable for assessing body composition in lean team sport athletes. PMID:24914773
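
    The repeatability statistics quoted above (%CV from repeated phantom scans, Bland-Altman limits of agreement against criterion values) can be computed as follows; this is a generic sketch with invented numbers, not the study's data.

```python
import numpy as np

def cv_percent(repeats):
    """Coefficient of variation (%CV) of repeated scans of one object."""
    r = np.asarray(repeats, dtype=float)
    return float(100.0 * r.std(ddof=1) / r.mean())

def limits_of_agreement(method_a, method_b, z=1.96):
    """Bland-Altman mean difference and 95% limits of agreement."""
    d = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    m, s = float(d.mean()), float(d.std(ddof=1))
    return m, (m - z * s, m + z * s)

# Invented repeated fat-mass scans (g) of the same phantom
cv = cv_percent([9800.0, 10000.0, 10200.0])
```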

  15. A Time Projection Chamber for High Accuracy and Precision Fission Cross-Section Measurements

    SciTech Connect

    T. Hill; K. Jewell; M. Heffner; D. Carter; M. Cunningham; V. Riot; J. Ruz; S. Sangiorgio; B. Seilhan; L. Snyder; D. M. Asner; S. Stave; G. Tatishvili; L. Wood; R. G. Baker; J. L. Klay; R. Kudo; S. Barrett; J. King; M. Leonard; W. Loveland; L. Yao; C. Brune; S. Grimes; N. Kornilov; T. N. Massey; J. Bundgaard; D. L. Duke; U. Greife; U. Hager; E. Burgett; J. Deaven; V. Kleinrath; C. McGrath; B. Wendt; N. Hertel; D. Isenhower; N. Pickle; H. Qu; S. Sharma; R. T. Thornton; D. Tovwell; R. S. Towell; S.

    2014-09-01

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC inducing fission in the target. The 4π acceptance for fission fragments and complete charged particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This paper provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  16. Accuracy and Reproducibility of HER2 Status in Breast Cancer Using Immunohistochemistry: A Quality Control Study in Tuscany Evaluating the Impact of Updated 2013 ASCO/CAP Recommendations.

    PubMed

    Bianchi, S; Caini, S; Paglierani, M; Saieva, C; Vezzosi, V; Baroni, G; Simoni, A; Palli, D

    2015-04-01

    The correct identification of HER2-positive cases is a key point to provide the most appropriate therapy to breast cancer (BC) patients. We aimed at investigating the reproducibility and accuracy of HER2 expression by immunohistochemistry (IHC) in a selected series of 35 invasive BC cases across the pathological anatomy laboratories in Tuscany, Italy. Unstained sections of each BC case were sent to 12 participating laboratories. Pathologists were required to score according to the Food and Drug Administration (FDA) four-tier scoring system (0, 1+, 2+, 3+). Sixteen and nineteen cases were HER2 non-amplified and amplified respectively on fluorescence in situ hybridization. Among 192 readings of the 16 HER2 non-amplified samples, 153 (79.7%) were coded as 0 or 1+, 39 (20.3%) were 2+, and none was 3+ (false positive rate 0%). Among 228 readings of the 19 HER2 amplified samples, 56 (24.6%) were scored 0 or 1+, 79 (34.6%) were 2+, and 93 (40.8%) were 3+. The average sensitivity was 75.4%, ranging between 47% and 100%, and the overall false negative rate was 24.6%. Participation of pathological anatomy laboratories performing HER2 testing by IHC in external quality assurance programs should be made mandatory, as the system is able to identify laboratories with suboptimal performance that may need technical advice. Updated 2013 ASCO/CAP recommendations should be adopted as the widening of IHC 2+ "equivocal" category would improve overall accuracy of HER2 testing, as more cases would be classified in this category and, consequently, tested with an in situ hybridisation method. PMID:25367072
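
    The headline sensitivity and false-negative rate follow directly from the reading counts quoted in the abstract, counting 2+ "equivocal" and 3+ readings as detections of amplified cases:

```python
# Reading counts for the 19 FISH-amplified cases (from the abstract)
neg_or_1plus, equivocal_2plus, positive_3plus = 56, 79, 93
total = neg_or_1plus + equivocal_2plus + positive_3plus

sensitivity = (equivocal_2plus + positive_3plus) / total
false_negative_rate = neg_or_1plus / total

# Non-amplified cases: 192 readings, none scored 3+
false_positive_rate = 0 / 192
```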

  17. Optic nerve head analyser and Heidelberg retina tomograph: accuracy and reproducibility of topographic measurements in a model eye and in volunteers.

    PubMed Central

    Janknecht, P; Funk, J

    1994-01-01

    The accuracy and reproducibility of the optic nerve head analyser (ONHA) and the Heidelberg retina tomograph (HRT) were compared and the performance of the HRT in measuring fundus elevations was evaluated. The coefficient of variation of three repeated measurements in a model eye and in volunteers and the relative error in a model eye was calculated. With ONHA measurements the pooled coefficient of variation in volunteers was 9.3% in measuring cup areas and 8.4% in measuring the cup volume. In a model eye the pooled coefficient of variation was 7.6% for the parameter 'cup area' and 9.9% for the parameter 'cup volume'. The pooled relative error in the model eye was 6.6% for the parameter 'cup area' and 5.1% for the parameter 'cup volume'. With HRT measurements in volunteers the pooled coefficient of variation of both the parameters 'volume below contour' and 'volume below surface' was 6.9%. In the model eye the pooled coefficient of variation was 2.4% for the 'volume below contour' and 4.1% for the parameter 'volume below surface'. The pooled relative error in the model eye was 11.3% for the 'volume below contour' and 11% for the 'volume below surface'. The pooled relative error in measuring retinal elevations in the model eye was 3.8%. The coefficient of variation was 3.5%. The accuracies of the HRT and ONHA were similar. However, as the ONHA 'cup volume' is unreliable in patients because of the design of the ONHA whereas the HRT volume parameters are reliable it seems reasonable to assume that the HRT is superior to the ONHA. Only the HRT is capable of quantifying retinal elevations. PMID:7803352

  18. The Precision and Accuracy of AIRS Level 1B Radiances for Climate Studies

    NASA Technical Reports Server (NTRS)

    Hearty, Thomas J.; Gaiser, Steve; Pagano, Tom; Aumann, Hartmut

    2004-01-01

    We investigate uncertainties in the Atmospheric Infrared Sounder (AIRS) radiances based on in-flight and preflight calibration algorithms and observations. The global coverage and spectral resolution (λ/Δλ ≈ 1200) of AIRS enable it to produce a data set that can be used as a climate data record over the lifetime of the instrument. Therefore, we examine the effects of the uncertainties in the calibration and the detector stability on future climate studies. The uncertainties of the parameters that go into the AIRS radiometric calibration are propagated to estimate the accuracy of the radiances and any climate data record created from AIRS measurements. The calculated radiance uncertainties are consistent with observations. Algorithm enhancements may be able to reduce the radiance uncertainties by as much as 7%. We find that the orbital variation of the gain contributes a brightness temperature bias of < 0.01 K.

  19. Quantification and visualization of carotid segmentation accuracy and precision using a 2D standardized carotid map

    NASA Astrophysics Data System (ADS)

    Chiu, Bernard; Ukwatta, Eranga; Shavakh, Shadi; Fenster, Aaron

    2013-06-01

    This paper describes a framework for vascular image segmentation evaluation. Since the size of vessel wall and plaque burden is defined by the lumen and wall boundaries in vascular segmentation, these two boundaries should be considered as a pair in statistical evaluation of a segmentation algorithm. This work proposed statistical metrics to evaluate the difference of local vessel wall thickness (VWT) produced by manual and algorithm-based semi-automatic segmentation methods (ΔT) with the local segmentation standard deviation of the wall and lumen boundaries considered. ΔT was further approximately decomposed into the local wall and lumen boundary differences (ΔW and ΔL respectively) in order to provide information regarding which of the wall and lumen segmentation errors contribute more to the VWT difference. In this study, the lumen and wall boundaries in 3D carotid ultrasound images acquired for 21 subjects were each segmented five times manually and by a level-set segmentation algorithm. The (absolute) difference measures (i.e., ΔT, ΔW, ΔL and their absolute values) and the pooled local standard deviation of manually and algorithmically segmented wall and lumen boundaries were computed for each subject and represented in a 2D standardized map. The local accuracy and variability of the segmentation algorithm at each point can be quantified by the average of these metrics for the whole group of subjects and visualized on the 2D standardized map. Based on the results shown on the 2D standardized map, a variety of strategies, such as adding anchor points and adjusting weights of different forces in the algorithm, can be introduced to improve the accuracy and variability of the algorithm.
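
    The pooled local standard deviation and the per-point thickness difference ΔT used in this framework can be sketched generically. Here thickness is taken as a signed wall-lumen distance along a common direction, so ΔT ≈ ΔW − ΔL; the arrays are invented and the helper names are hypothetical.

```python
import numpy as np

def pooled_sd(repeated):
    """Pooled per-point SD across repeated segmentations.
    `repeated` has shape (n_repeats, n_points)."""
    r = np.asarray(repeated, dtype=float)
    return float(np.sqrt(r.var(axis=0, ddof=1).mean()))

def vwt_difference(wall_alg, lumen_alg, wall_man, lumen_man):
    """Per-point VWT difference (ΔT) between algorithm and manual
    segmentations, with thickness as the signed wall-lumen distance
    along a common direction (a simplification)."""
    t_alg = np.asarray(wall_alg, dtype=float) - np.asarray(lumen_alg, dtype=float)
    t_man = np.asarray(wall_man, dtype=float) - np.asarray(lumen_man, dtype=float)
    return t_alg - t_man

# Invented boundary positions (mm) at two matched points
dt = vwt_difference([2.0, 2.0], [1.0, 1.0], [2.0, 2.0], [0.9, 1.1])
```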

  20. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine.

    PubMed

    Castaneda, Christian; Nalley, Kip; Mannion, Ciaran; Bhattacharyya, Pritish; Blake, Patrick; Pecora, Andrew; Goy, Andre; Suh, K Stephen

    2015-01-01

    As research laboratories and clinics collaborate to achieve precision medicine, both communities are required to understand mandated electronic health/medical record (EHR/EMR) initiatives that will be fully implemented in all clinics in the United States by 2015. Stakeholders will need to evaluate current record keeping practices and optimize and standardize methodologies to capture nearly all information in digital format. Collaborative efforts from academic and industry sectors are crucial to achieving higher efficacy in patient care while minimizing costs. Currently existing digitized data and information are present in multiple formats and are largely unstructured. In the absence of a universally accepted management system, departments and institutions continue to generate silos of information. As a result, invaluable and newly discovered knowledge is difficult to access. To accelerate biomedical research and reduce healthcare costs, clinical and bioinformatics systems must employ common data elements to create structured annotation forms enabling laboratories and clinics to capture sharable data in real time. Conversion of these datasets to knowable information should be a routine institutionalized process. New scientific knowledge and clinical discoveries can be shared via integrated knowledge environments defined by flexible data models and extensive use of standards, ontologies, vocabularies, and thesauri. In the clinical setting, aggregated knowledge must be displayed in user-friendly formats so that physicians, non-technical laboratory personnel, nurses, data/research coordinators, and end-users can enter data, access information, and understand the output. The effort to connect astronomical numbers of data points, including '-omics'-based molecular data, individual genome sequences, experimental data, patient clinical phenotypes, and follow-up data is a monumental task. Roadblocks to this vision of integration and interoperability include ethical, legal

  1. Accuracy and Reproducibility of Right Ventricular Quantification in Patients with Pressure and Volume Overload Using Single-Beat Three-Dimensional Echocardiography

    PubMed Central

    Knight, Daniel S.; Grasso, Agata E.; Quail, Michael A.; Muthurangu, Vivek; Taylor, Andrew M.; Toumpanakis, Christos; Caplin, Martyn E.; Coghlan, J. Gerry; Davar, Joseph

    2015-01-01

    Background The right ventricle is a complex structure that is challenging to quantify by two-dimensional (2D) echocardiography. Unlike disk summation three-dimensional (3D) echocardiography (3DE), single-beat 3DE can acquire large volumes at high volume rates in one cardiac cycle, avoiding stitching artifacts or long breath-holds. The aim of this study was to assess the accuracy and test-retest reproducibility of single-beat 3DE for quantifying right ventricular (RV) volumes in adult populations of acquired RV pressure or volume overload, namely, pulmonary hypertension (PH) and carcinoid heart disease, respectively. Three-dimensional and 2D echocardiographic indices were also compared for identifying RV dysfunction in PH. Methods A prospective cross-sectional study was performed in 100 individuals who underwent 2D echocardiography, 3DE, and cardiac magnetic resonance imaging: 49 patients with PH, 20 with carcinoid heart disease, 11 with metastatic carcinoid tumors without cardiac involvement, and 20 healthy volunteers. Two operators performed test-retest acquisition and postprocessing for inter- and intraobserver reproducibility in 20 subjects. Results: RV single-beat 3DE was attainable in 96% of cases, with mean volume rates of 32 to 45 volumes/sec. Bland-Altman analysis of all subjects (presented as mean bias ± 95% limits of agreement) revealed good agreement for end-diastolic volume (−2.3 ± 27.4 mL) and end-systolic volume (5.2 ± 19.0 mL) measured by 3DE and cardiac magnetic resonance imaging, with a tendency to underestimate stroke volume (−7.5 ± 23.6 mL) and ejection fraction (−4.6 ± 13.8%) by 3DE. Subgroup analysis demonstrated a greater bias for volumetric underestimation, particularly in healthy volunteers (end-diastolic volume, −11.9 ± 18.0 mL; stroke volume, −11.2 ± 20.2 mL). Receiver operating characteristic curve analysis showed that 3DE-derived ejection fraction was significantly superior to 2D echocardiographic

  2. Precise and Continuous Time and Frequency Synchronisation at the 5×10-19 Accuracy Level

    PubMed Central

    Wang, B.; Gao, C.; Chen, W. L.; Miao, J.; Zhu, X.; Bai, Y.; Zhang, J. W.; Feng, Y. Y.; Li, T. C.; Wang, L. J.

    2012-01-01

    The synchronisation of time and frequency between remote locations is crucial for many important applications. Conventional time and frequency dissemination often makes use of satellite links. Recently, the communication fibre network has become an attractive option for long-distance time and frequency dissemination. Here, we demonstrate accurate frequency transfer and time synchronisation via an 80 km fibre link between Tsinghua University (THU) and the National Institute of Metrology of China (NIM). Using a 9.1 GHz microwave modulation and a timing signal carried by two continuous-wave lasers and transferred across the same 80 km urban fibre link, frequency transfer stability at the level of 5×10−19/day was achieved. Time synchronisation at the 50 ps precision level was also demonstrated. The system is reliable and has operated continuously for several months. We further discuss the feasibility of using such frequency and time transfer over 1000 km and its applications to long-baseline radio astronomy. PMID:22870385

  3. Towards the next decades of precision and accuracy in a 87Sr optical lattice clock

    NASA Astrophysics Data System (ADS)

    Martin, Michael; Lin, Yige; Swallows, Matthew; Bishof, Michael; Blatt, Sebastian; Benko, Craig; Chen, Licheng; Hirokawa, Takako; Rey, Ana Maria; Ye, Jun

    2011-05-01

    Optical lattice clocks based on ensembles of neutral atoms have the potential to operate at the highest levels of stability due to the parallel interrogation of many atoms. However, the control of systematic shifts in these systems is correspondingly difficult due to potential collisional atomic interactions. By tightly confining samples of ultracold fermionic 87Sr atoms in a two-dimensional optical lattice, as opposed to the conventional one-dimensional geometry, we increase the collisional interaction energy to be the largest relevant energy scale, thus entering the strongly interacting regime of clock operation. We show both theoretically and experimentally that this increase in interaction energy results in a paradoxical decrease in the collisional shift, reducing this key systematic to the 10-17 level. We also present work towards next-generation ultrastable lasers to attain quantum-limited clock operation, potentially enhancing clock precision by an order of magnitude. This work was supported by a grant from the ARO with funding from the DARPA OLE program, NIST, NSF, and AFOSR.


  4. Tedlar bag sampling technique for vertical profiling of carbon dioxide through the atmospheric boundary layer with high precision and accuracy.

    PubMed

    Schulz, Kristen; Jensen, Michael L; Balsley, Ben B; Davis, Kenneth; Birks, John W

    2004-07-01

    Carbon dioxide is the most important greenhouse gas other than water vapor, and its modulation by the biosphere is of fundamental importance to our understanding of global climate change. We have developed a new technique for vertical profiling of CO2 and meteorological parameters through the atmospheric boundary layer and well into the free troposphere. Vertical profiling of CO2 mixing ratios allows estimates of landscape-scale fluxes characteristic of approximately 100 km2 of an ecosystem. The method makes use of a powered parachute as a platform and a new Tedlar bag air sampling technique. Air samples are returned to the ground where measurements of CO2 mixing ratios are made with high precision (≤0.1%) and accuracy (≤0.1%) using a conventional nondispersive infrared analyzer. Laboratory studies are described that characterize the accuracy and precision of the bag sampling technique and that measure the diffusion coefficient of CO2 through the Tedlar bag wall. The technique has been applied in field studies in the proximity of two AmeriFlux sites, and results are compared with tower measurements of CO2. PMID:15296321

  5. Accuracy and precision of cone beam computed tomography in periodontal defects measurement (systematic review).

    PubMed

    Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny

    2016-01-01

    A systematic review of the literature was performed to assess the accuracy of cone beam computed tomography (CBCT) as a tool for measuring alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and critically appraised. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their average CBCT measurement errors ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Within the limitations of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation, whether over- or underestimation. However, we should emphasize that the evidence for these data is not strong. PMID:27563194

  6. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts up to several degrees in visual angle because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of the video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16s) and large (∼4min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task. PMID:25578924
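Regressing pupil size out of the gaze trace, as described above, amounts to keeping the residuals of a linear fit. A minimal sketch with synthetic data (the coupling coefficient and noise levels are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic fixation data: true gaze is fixed, but pupil size drifts and
# leaks into the measured gaze position (the artifact described above)
pupil = np.linspace(5.0, 3.5, 200) + rng.normal(0, 0.05, 200)   # mm, contracting
gaze = 0.4 * pupil + rng.normal(0, 0.02, 200)                   # deg, spurious coupling

# regress gaze on pupil size and keep the residuals as the corrected signal
slope, intercept = np.polyfit(pupil, gaze, 1)
corrected = gaze - (slope * pupil + intercept)
```

The corrected trace retains genuine fixational variability while the pupil-driven drift is removed, which is the mechanism behind the improved 0.1° discrimination reported above.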

  7. Accuracy Assessment of the Precise Point Positioning for Different Troposphere Models

    NASA Astrophysics Data System (ADS)

    Oguz Selbesoglu, Mahmut; Gurturk, Mert; Soycan, Metin

    2016-04-01

    This study investigates the accuracy and repeatability of the PPP technique at different latitudes using different troposphere delay models. Nine IGS stations were selected between 0° and 80° latitude in the northern and southern hemispheres. Coordinates were obtained for 7 days at 1 hour intervals in summer and winter. First, the coordinates were estimated using the Niell troposphere delay model with and without north and east gradients, in order to investigate the contribution of troposphere delay gradients to the positioning. Secondly, the Saastamoinen model was used to eliminate troposphere path delays, with standard atmosphere parameters extrapolated to the station levels. Finally, coordinates were estimated using the RTCA-MOPS empirical troposphere delay model. Results demonstrate that the Niell troposphere delay model with horizontal gradients yields mean rms errors 0.09% and 65% better than the Niell troposphere model without horizontal gradients and the RTCA-MOPS model, respectively. Mean rms errors for the Saastamoinen model were approximately 4 times larger than for the Niell troposphere delay model with horizontal gradients.
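For context, the Saastamoinen hydrostatic component mentioned above has a well-known closed form driven by surface pressure, latitude, and height; a sketch of the textbook zenith hydrostatic delay (not necessarily the exact implementation used in the study):

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_deg, height_m):
    """Zenith hydrostatic delay (m) from the Saastamoinen model:
    surface pressure scaled by a latitude/height gravity correction."""
    f = (1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg))
         - 0.00028 * height_m / 1000.0)
    return 0.0022768 * pressure_hpa / f

# standard-atmosphere sea-level pressure at mid-latitude
zhd = saastamoinen_zhd(1013.25, 45.0, 0.0)
```

With standard-atmosphere sea-level pressure this gives roughly 2.3 m of zenith delay, which an elevation-dependent mapping function then scales along the slant path.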

  8. Accuracy and precision of cone beam computed tomography in periodontal defects measurement (systematic review)

    PubMed Central

    Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny

    2016-01-01

    A systematic review of the literature was performed to assess the accuracy of cone beam computed tomography (CBCT) as a tool for measuring alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and critically appraised. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their average CBCT measurement errors ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Within the limitations of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation, whether over- or underestimation. However, we should emphasize that the evidence for these data is not strong. PMID:27563194

  9. A Method of Determining Accuracy and Precision for Dosimeter Systems Using Accreditation Data

    SciTech Connect

    Rick Cummings and John Flood

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively.

  10. A method of determining accuracy and precision for dosimeter systems using accreditation data.

    PubMed

    Cummings, Frederick; Flood, John R

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively. PMID:21068596
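The variance-component estimate behind a Model II (random-effects) one-way ANOVA can be sketched as follows; the dose-ratio data below are hypothetical, not the accreditation results quoted above:

```python
import numpy as np

def variance_components(groups):
    """Model II (random effects) one-way ANOVA: split total variance into
    between-group and within-group components from replicate measurements."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n = groups[0].size                     # assumes a balanced design
    grand = np.mean([g.mean() for g in groups])
    ms_between = n * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (n - 1))
    var_between = max((ms_between - ms_within) / n, 0.0)
    return var_between, ms_within          # sigma^2_between, sigma^2_within

# hypothetical ratios of reported to delivered dose, grouped by test session
runs = [[1.02, 0.98, 1.01], [1.08, 1.05, 1.07], [0.95, 0.97, 0.96]]
vb, vw = variance_components(runs)
```

The between-group component captures session-to-session (accuracy-related) spread, while the within-group component captures replicate (precision-related) spread, which is how accreditation data can be recycled into an uncertainty estimate.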

  11. Video image analysis in the Australian meat industry - precision and accuracy of predicting lean meat yield in lamb carcasses.

    PubMed

    Hopkins, D L; Safari, E; Thompson, J M; Smith, C R

    2004-06-01

    A wide selection of lamb types of mixed sex (ewes and wethers) were slaughtered at a commercial abattoir and during this process images of 360 carcasses were obtained online using the VIAScan® system developed by Meat and Livestock Australia. Soft tissue depth at the GR site (thickness of tissue over the 12th rib 110 mm from the midline) was measured by an abattoir employee using the AUS-MEAT sheep probe (PGR). Another measure of this thickness was taken in the chiller using a GR knife (NGR). Each carcass was subsequently broken down to a range of trimmed boneless retail cuts and the lean meat yield determined. The current industry model for predicting meat yield uses hot carcass weight (HCW) and tissue depth at the GR site. A low level of accuracy and precision was found when HCW and PGR were used to predict lean meat yield (R(2)=0.19, r.s.d.=2.80%), which could be improved markedly when PGR was replaced by NGR (R(2)=0.41, r.s.d.=2.39%). If the GR measures were replaced by 8 VIAScan® measures then greater prediction accuracy could be achieved (R(2)=0.52, r.s.d.=2.17%). A similar result was achieved when the model was based on principal components (PCs) computed from the 8 VIAScan® measures (R(2)=0.52, r.s.d.=2.17%). The use of PCs also improved the stability of the model compared to a regression model based on HCW and NGR. The transportability of the models was tested by randomly dividing the data set and comparing coefficients and the level of accuracy and precision. Those models based on PCs were superior to those based on regression. It is demonstrated that with the appropriate modeling the VIAScan® system offers a workable method for predicting lean meat yield automatically. PMID:22061323
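The principal-component approach used above (PCs computed from the image measures, then regressed against yield) can be sketched with synthetic data; the dimensions and effect sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical carcass data: 8 correlated image measures driven by 2 latent
# shape factors, which also drive lean meat yield (%)
n = 200
latent = rng.normal(size=(n, 2))
X = latent @ rng.normal(size=(2, 8)) + rng.normal(0, 0.3, (n, 8))
y = 55 + latent @ np.array([1.5, -0.8]) + rng.normal(0, 1.0, n)

# principal components via SVD of the centred predictor matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:2].T                        # first two PCs replace the 8 measures

# least-squares regression of yield on the PCs
A = np.column_stack([np.ones(n), pcs])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

Because the PCs decorrelate the collinear image measures, the fitted coefficients are more stable across data splits, which is the transportability advantage the abstract reports for PC-based models.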

  12. Accuracy and reliability of multi-GNSS real-time precise positioning: GPS, GLONASS, BeiDou, and Galileo

    NASA Astrophysics Data System (ADS)

    Li, Xingxing; Ge, Maorong; Dai, Xiaolei; Ren, Xiaodong; Fritsche, Mathias; Wickert, Jens; Schuh, Harald

    2015-06-01

    In this contribution, we present a GPS+GLONASS+BeiDou+Galileo four-system model to fully exploit the observations of all four navigation satellite systems for real-time precise orbit determination, clock estimation and positioning. A rigorous multi-GNSS analysis is performed to achieve the best possible consistency by processing the observations from the different GNSS together in one common parameter estimation procedure. Meanwhile, an efficient multi-GNSS real-time precise positioning service system is designed and demonstrated using the Multi-GNSS Experiment, BeiDou Experimental Tracking Network, and International GNSS Service networks, including stations all over the world. The statistical analysis of the 6-h predicted orbits shows that the radial and cross root mean square (RMS) values are smaller than 10 cm for BeiDou and Galileo, and smaller than 5 cm for both GLONASS and GPS satellites. The RMS values of the clock differences between real-time and batch-processed solutions for GPS satellites are about 0.10 ns, while the RMS values for BeiDou, Galileo and GLONASS are 0.13, 0.13 and 0.14 ns, respectively. The addition of the BeiDou, Galileo and GLONASS systems to the standard GPS-only processing reduces the convergence time by almost 70%, while the positioning accuracy is improved by about 25%. Some outliers in the GPS-only solutions vanish when multi-GNSS observations are processed simultaneously. The availability and reliability of GPS precise positioning decrease dramatically as the elevation cutoff increases. However, the accuracy of multi-GNSS precise point positioning (PPP) is hardly degraded, and a few centimeters are still achievable in the horizontal components even with a 40° elevation cutoff. At 30° and 40° elevation cutoffs, the availability rates of the GPS-only solution drop significantly to only around 70 and 40%, respectively. However, multi-GNSS PPP can provide precise position estimates continuously (availability rate is more than 99

  13. In silico instrumental response correction improves precision of label-free proteomics and accuracy of proteomics-based predictive models.

    PubMed

    Lyutvinskiy, Yaroslav; Yang, Hongqian; Rutishauser, Dorothea; Zubarev, Roman A

    2013-08-01

    In the analysis of proteome changes arising during the early stages of a biological process (e.g. disease or drug treatment) or from the indirect influence of an important factor, the biological variations of interest are often small (∼10%). The corresponding requirements for the precision of proteomics analysis are high, and this often poses a challenge, especially when employing label-free quantification. One of the main contributors to the inaccuracy of label-free proteomics experiments is the variability of the instrumental response during LC-MS/MS runs. Such variability might include fluctuations in the electrospray current, transmission efficiency from the air-vacuum interface to the detector, and detection sensitivity. We have developed an in silico post-processing method of reducing these variations, and have thus significantly improved the precision of label-free proteomics analysis. For abundant blood plasma proteins, a coefficient of variation of approximately 1% was achieved, which allowed for sex differentiation in pooled samples and ≈90% accurate differentiation of individual samples by means of a single LC-MS/MS analysis. This method improves the precision of measurements and increases the accuracy of predictive models based on the measurements. The post-acquisition nature of the correction technique and its generality promise its widespread application in LC-MS/MS-based methods such as proteomics and metabolomics. PMID:23589346
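A much-simplified stand-in for the paper's in silico response correction is to rescale each LC-MS run by a robust summary of its intensities, which removes a common per-run instrumental factor; the simulation below is illustrative only and does not reproduce the authors' method:

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic label-free data: 6 runs x 50 proteins, with a per-run
# instrumental response factor multiplying every intensity in that run
true = rng.lognormal(10, 1, 50)
run_factor = rng.normal(1.0, 0.10, 6)          # ~10% run-to-run drift
data = np.outer(run_factor, true) * rng.normal(1.0, 0.01, (6, 50))

# rescale each run so its median intensity matches the grand median,
# cancelling the shared per-run response factor
corrected = data / np.median(data, axis=1, keepdims=True) * np.median(data)

def cv(x):
    """Per-protein coefficient of variation across runs."""
    return x.std(axis=0, ddof=1) / x.mean(axis=0)

raw_cv = float(cv(data).mean())
corr_cv = float(cv(corrected).mean())
```

Removing the shared run-level factor collapses the ~10% drift down to the measurement noise floor, the same direction of improvement (toward ~1% CV) that the abstract reports.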

  14. Accuracy and Precision of Equine Gait Event Detection during Walking with Limb and Trunk Mounted Inertial Sensors

    PubMed Central

    Olsen, Emil; Andersen, Pia Haubro; Pfau, Thilo

    2012-01-01

    The increased variations of temporal gait events when pathology is present are good candidate features for objective diagnostic tests. We hypothesised that the gait events hoof-on/off and stance can be detected accurately and precisely using features from trunk and distal limb-mounted Inertial Measurement Units (IMUs). Four IMUs were mounted on the distal limb and five IMUs were attached to the skin over the dorsal spinous processes at the withers, fourth lumbar vertebrae and sacrum as well as left and right tuber coxae. IMU data were synchronised to a force plate array and a motion capture system. Accuracy (bias) and precision (SD of bias) were calculated to compare force plate and IMU timings for gait events. Data were collected from seven horses. One hundred and twenty three (123) front limb steps were analysed; hoof-on was detected with a bias (SD) of −7 (23) ms, hoof-off with 0.7 (37) ms and front limb stance with −0.02 (37) ms. A total of 119 hind limb steps were analysed; hoof-on was found with a bias (SD) of −4 (25) ms, hoof-off with 6 (21) ms and hind limb stance with 0.2 (28) ms. IMUs mounted on the distal limbs and sacrum can detect gait events accurately and precisely. PMID:22969392
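The accuracy (mean bias) and precision (SD of bias) statistics used above reduce to a few lines of code; the event timings below are hypothetical, not the study's:

```python
import numpy as np

def accuracy_precision(reference_ms, test_ms):
    """Accuracy = mean of (test - reference) differences;
    precision = sample SD of those differences."""
    diff = np.asarray(test_ms, float) - np.asarray(reference_ms, float)
    return diff.mean(), diff.std(ddof=1)

# hypothetical hoof-on timings (ms): force plate vs. limb-mounted IMU
fp = [0.0, 500.0, 1010.0, 1495.0, 2003.0]
imu = [-5.0, 492.0, 1004.0, 1490.0, 1998.0]
bias, sd = accuracy_precision(fp, imu)
```

A consistently negative bias with a small SD, as in this toy example, would indicate the IMU fires slightly early but does so repeatably.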

  15. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    PubMed

    Wells, Emma; Wolfe, Marlene K; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5-11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration. 
Given the

  16. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions

    PubMed Central

    Wells, Emma; Wolfe, Marlene K.; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4–19% error), then test strips (5.2–48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5–11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14–37 for test strips and $33–609 for titration

  17. SU-E-J-03: Characterization of the Precision and Accuracy of a New, Preclinical, MRI-Guided Focused Ultrasound System for Image-Guided Interventions in Small-Bore, High-Field Magnets

    SciTech Connect

    Ellens, N; Farahani, K

    2015-06-15

    Purpose: MRI-guided focused ultrasound (MRgFUS) has many potential and realized applications including controlled heating and localized drug delivery. The development of many of these applications requires extensive preclinical work, much of it in small animal models. The goal of this study is to characterize the spatial targeting accuracy and reproducibility of a preclinical high field MRgFUS system for thermal ablation and drug delivery applications. Methods: The RK300 (FUS Instruments, Toronto, Canada) is a motorized, 2-axis FUS positioning system suitable for small bore (72 mm), high-field MRI systems. The accuracy of the system was assessed in three ways. First, the precision of the system was assessed by sonicating regular grids of 5 mm squares on polystyrene plates and comparing the resulting focal dimples to the intended pattern, thereby assessing the reproducibility and precision of the motion control alone. Second, the targeting accuracy was assessed by imaging a polystyrene plate with randomly drilled holes and replicating the hole pattern by sonicating the observed hole locations on intact polystyrene plates and comparing the results. Third, the practically realizable accuracy and precision were assessed by comparing the locations of transcranial, FUS-induced blood-brain-barrier disruption (BBBD) (observed through Gadolinium enhancement) to the intended targets in a retrospective analysis of animals sonicated for other experiments. Results: The evenly-spaced grids indicated that the precision was 0.11 ± 0.05 mm. When image-guidance was included by targeting random locations, the accuracy was 0.5 ± 0.2 mm. The effective accuracy in the four rodent brains assessed was 0.8 ± 0.6 mm. In all cases, the error appeared normally distributed (p<0.05) in both orthogonal axes, though the left/right error was systematically greater than the superior/inferior error. Conclusions: The targeting accuracy of this device is sub-millimeter, suitable for many

  18. Detailed data is welcome, but with a pinch of salt: Accuracy, precision, and uncertainty in flood inundation modeling

    NASA Astrophysics Data System (ADS)

    Dottori, F.; Di Baldassarre, G.; Todini, E.

    2013-09-01

    New survey techniques provide a large amount of high-resolution data, which can be extremely valuable for flood inundation modeling. Such data availability raises the issue of how to exploit their information content to effectively improve flood risk mapping and predictions. In this paper, we discuss a number of important issues that should be taken into account in flood modeling work. These include the large number of uncertainty sources in model structure and available data; the difficult evaluation of model results, due to the scarcity of observed data; computational efficiency; and the false confidence that can be given by high-resolution outputs, as accuracy is not necessarily increased by higher precision. Finally, we briefly review and discuss a number of existing approaches, such as subgrid parameterization and roughness upscaling methods, which can be used to incorporate highly detailed data into flood inundation models, balancing efficiency and reliability.

  19. Assessment of accuracy and precision of 3D reconstruction of unicompartmental knee arthroplasty in upright position using biplanar radiography.

    PubMed

    Tsai, Tsung-Yuan; Dimitriou, Dimitris; Hosseini, Ali; Liow, Ming Han Lincoln; Torriani, Martin; Li, Guoan; Kwon, Young-Min

    2016-07-01

    This study aimed to evaluate the precision and accuracy of 3D reconstruction of UKA component position, contact location and lower limb alignment in the standing position using biplanar radiography. Two human specimens with 4 medial UKAs were implanted with beads for radiostereometric analysis (RSA). The specimens were frozen in a standing position and CT-scanned to obtain the relative positions of the beads, bones and UKA components. The specimens were then imaged using biplanar radiography (EOS). The positions of the femur, tibia, UKA components and UKA contact locations were obtained using RSA- and EOS-based techniques. The intraclass correlation coefficient (ICC) was calculated for inter-observer reliability of the EOS technique. The average (standard deviation) differences between the two techniques in translations and rotations were less than 0.18 (0.29) mm and 0.39° (0.66°) for the UKA components. The root-mean-square errors (RMSE) of contact location along the anterior/posterior and medial/lateral directions were 0.84 mm and 0.30 mm. The RMSEs of the knee rotations were less than 1.70°. The ICCs for the EOS-based segmental orientations between two raters were larger than 0.98. The results suggest the EOS-based 3D reconstruction technique can precisely determine component position, contact location and lower limb alignment for UKA patients in the weight-bearing standing position. PMID:27117422
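The RMSE figures quoted above follow the standard definition for paired measurements from two techniques; a sketch with hypothetical contact locations (not the study's data):

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between paired measurements
    from two measurement techniques."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return float(np.sqrt(np.mean(d ** 2)))

# hypothetical anterior/posterior contact locations (mm): RSA vs. EOS
rsa = [12.1, 10.4, 11.8, 9.9]
eos = [12.9, 9.8, 12.4, 10.6]
err = rmse(rsa, eos)
```

Unlike mean bias, RMSE penalizes both systematic offset and scatter, which is why it is the natural summary for sub-millimeter agreement claims.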

  20. THE PRECISION AND ACCURACY OF EARLY EPOCH OF REIONIZATION FOREGROUND MODELS: COMPARING MWA AND PAPER 32-ANTENNA SOURCE CATALOGS

    SciTech Connect

    Jacobs, Daniel C.; Bowman, Judd; Aguirre, James E.

    2013-05-20

    As observations of the Epoch of Reionization (EoR) in redshifted 21 cm emission begin, we assess the accuracy of the early catalog results from the Precision Array for Probing the Epoch of Reionization (PAPER) and the Murchison Widefield Array (MWA). The MWA EoR approach derives much of its sensitivity from subtracting foregrounds to <1% precision, while the PAPER approach relies on the stability and symmetry of the primary beam. Both require an accurate flux calibration to set the amplitude of the measured power spectrum. The two instruments are very similar in resolution, sensitivity, sky coverage, and spectral range and have produced catalogs from nearly contemporaneous data. We use a Bayesian Markov Chain Monte Carlo fitting method to estimate that the two instruments are on the same flux scale to within 20% and find that the images are mostly in good agreement. We then investigate the source of the errors by comparing two overlapping MWA facets where we find that the differences are primarily related to an inaccurate model of the primary beam but also correlated errors in bright sources due to CLEAN. We conclude with suggestions for mitigating and better characterizing these effects.
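As a far simpler stand-in for the Bayesian MCMC fit described above, the overall flux-scale factor between two matched catalogs can be estimated from the mean log flux ratio; the matched fluxes below are invented for illustration:

```python
import numpy as np

# hypothetical matched source fluxes (Jy) from two catalogs
s_mwa = np.array([12.0, 5.5, 8.1, 20.3, 3.2])
s_paper = np.array([13.1, 5.9, 8.0, 22.5, 3.5])

# flux-scale estimate: geometric mean of the flux ratios, with its
# scatter in log space as a rough per-source consistency measure
log_ratio = np.log(s_paper / s_mwa)
scale = float(np.exp(log_ratio.mean()))
scatter = float(log_ratio.std(ddof=1))
```

Working in log space makes the estimate symmetric in the two catalogs and keeps a few bright sources from dominating, though it ignores the per-source uncertainties that the paper's MCMC treatment models explicitly.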

  1. Error propagation in relative real-time reverse transcription polymerase chain reaction quantification models: the balance between accuracy and precision.

    PubMed

    Nordgård, Oddmund; Kvaløy, Jan Terje; Farmen, Ragne Kristin; Heikkilä, Reino

    2006-09-15

    Real-time reverse transcription polymerase chain reaction (RT-PCR) has gained wide popularity as a sensitive and reliable technique for mRNA quantification. The development of new mathematical models for such quantifications has generally paid little attention to the aspect of error propagation. In this study we evaluate, both theoretically and experimentally, several recent models for relative real-time RT-PCR quantification of mRNA with respect to random error accumulation. We present error propagation expressions for the most common quantification models and discuss the influence of the various components on the total random error. Normalization against a calibrator sample to improve comparability between different runs is shown to increase the overall random error in our system. On the other hand, normalization against multiple reference genes, introduced to improve accuracy, does not increase error propagation compared to normalization against a single reference gene. Finally, we present evidence that sample-specific amplification efficiencies determined from individual amplification curves primarily increase the random error of real-time RT-PCR quantifications and should be avoided. Our data emphasize that the gain of accuracy associated with new quantification models should be validated against the corresponding loss of precision. PMID:16899212
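First-order error propagation for an efficiency-corrected relative quantification ratio, of the kind analyzed above, can be sketched as follows (the efficiencies are treated as exact, and the Ct statistics are hypothetical):

```python
import math

def relative_ratio_cv(e_tgt, dct_tgt, sd_tgt, e_ref, dct_ref, sd_ref):
    """First-order error propagation for the efficiency-corrected ratio
    R = e_tgt**dct_tgt / e_ref**dct_ref, treating efficiencies as exact.
    Returns (R, approximate coefficient of variation of R)."""
    r = e_tgt ** dct_tgt / e_ref ** dct_ref
    # var(ln R) = (ln e_tgt)^2 var(dct_tgt) + (ln e_ref)^2 var(dct_ref)
    var_ln_r = (math.log(e_tgt) * sd_tgt) ** 2 + (math.log(e_ref) * sd_ref) ** 2
    return r, math.sqrt(var_ln_r)          # CV(R) ~ SD(ln R) for small errors

# hypothetical inputs: perfect efficiencies (E = 2), Ct SDs of 0.2 cycles
r, cv = relative_ratio_cv(2.0, 3.0, 0.2, 2.0, 1.0, 0.2)
```

The expression makes the paper's point concrete: every extra normalization term (e.g. a calibrator sample) adds its own variance contribution to var(ln R), so a model that improves accuracy can simultaneously degrade precision.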

  2. Single-frequency receivers as master permanent stations in GNSS networks: precision and accuracy of the positioning in mixed networks

    NASA Astrophysics Data System (ADS)

    Dabove, Paolo; Manzino, Ambrogio Maria

    2015-04-01

    The use of GPS/GNSS instruments is common practice worldwide at both the commercial and academic research level. Over the last ten years, Continuous Operating Reference Station (CORS) networks have been established in order to extend precise positioning to more than 15 km from the master station. In this context, the Geomatics Research Group of DIATI at the Politecnico di Torino has carried out several experiments to evaluate the precision achievable with different GNSS receivers (geodetic and mass-market) and antennas when a CORS network is used. This work builds on that research, focusing in particular on the usefulness of single-frequency permanent stations for densifying the existing CORS networks, especially for monitoring purposes. Two different types of CORS network are available today in Italy: the so-called "regional networks" and the "national network", with mean inter-station distances of about 25/30 km and 50/70 km respectively. These distances are adequate for many applications (e.g. mobile mapping) if geodetic instruments are used, but become less so if mass-market instruments are used or if the inter-station distance between master and rover increases. In this context, some innovative GNSS networks were developed and tested, analyzing the performance of rover positioning in terms of quality, accuracy, and reliability in both real-time and post-processing approaches. Single-frequency GNSS receivers bring some limitations, notably the restricted baseline length and the need to fix the phase ambiguities correctly both for the network and for the rover. These factors play a crucial role in reaching a position with a good level of accuracy (centimetric or better) in a short time and with high reliability. The goal of this work is to investigate the

  3. Standardization of Operator-Dependent Variables Affecting Precision and Accuracy of the Disk Diffusion Method for Antibiotic Susceptibility Testing.

    PubMed

    Hombach, Michael; Maurer, Florian P; Pfiffner, Tamara; Böttger, Erik C; Furrer, Reinhard

    2015-12-01

    Parameters like zone reading, inoculum density, and plate streaking influence the precision and accuracy of disk diffusion antibiotic susceptibility testing (AST). While improved reading precision has been demonstrated using automated imaging systems, standardization of the inoculum and of plate streaking has not yet been systematically investigated. This study analyzed whether photometrically controlled inoculum preparation and/or automated inoculation could further improve the standardization of disk diffusion. Suspensions of Escherichia coli ATCC 25922 and Staphylococcus aureus ATCC 29213 at a 0.5 McFarland standard were prepared by 10 operators using both visual comparison to turbidity standards and a Densichek photometer (bioMérieux), and the resulting CFU counts were determined. Furthermore, eight experienced operators each inoculated 10 Mueller-Hinton agar plates from a single 0.5 McFarland standard bacterial suspension of E. coli ATCC 25922 using regular cotton swabs, dry flocked swabs (Copan, Brescia, Italy), or an automated streaking device (BD-Kiestra, Drachten, Netherlands). The mean CFU counts obtained from 0.5 McFarland standard E. coli ATCC 25922 suspensions were significantly different for suspensions prepared by eye and by Densichek (P < 0.001). Preparation by eye resulted in counts that were closer to the CLSI/EUCAST target of 10(8) CFU/ml than those resulting from Densichek preparation. No significant differences in the standard deviations of the CFU counts were observed. The interoperator differences in standard deviations decreased significantly when dry flocked swabs were used rather than regular cotton swabs, whereas the mean of the standard deviations of all operators together was not significantly altered. In contrast, automated streaking significantly reduced both interoperator differences, i.e., the individual standard deviations, compared to the standard deviations for the manual method, and the mean of

  4. Evaluation of the influence of cardiac motion on the accuracy and reproducibility of longitudinal measurements and the corresponding image quality in optical frequency domain imaging: an ex vivo investigation of the optimal pullback speed.

    PubMed

    Koyama, Kohei; Yoneyama, Kihei; Mitarai, Takanobu; Kuwata, Shingo; Kongoji, Ken; Harada, Tomoo; Akashi, Yoshihiro J

    2015-08-01

    Longitudinal measurement using intravascular ultrasound is limited because the motorized pullback device assumes no cardiac motion. A newly developed intracoronary imaging modality, optical frequency domain imaging (OFDI), has higher resolution and an increased auto-pullback speed with presumably lesser susceptibility to cardiac motion artifacts during pullback for longitudinal measurement; however, it has not been fully investigated. We aimed to clarify the influence of cardiac motion on the accuracy and reproducibility of longitudinal measurements obtained using OFDI and to determine the optimal pullback speed. This ex vivo study included 31 stents deployed in the mid left anterior descending artery under phantom heartbeat and coronary flow simulation. Longitudinal stent lengths were measured twice using OFDI at three pullback speeds. Differences in stent lengths between OFDI and microscopy and between two repetitive pullbacks were assessed to determine accuracy and reproducibility. Furthermore, three-dimensional (3D) reconstruction was used for evaluating image quality. With regard to differences in stent length between OFDI and microscopy, the intraclass correlation coefficient values were 0.985, 0.994, and 0.995 at 10, 20, and 40 mm/s, respectively. With regard to reproducibility, the values were 0.995, 0.996, and 0.996 at 10, 20, and 40 mm/s, respectively. 3D reconstruction showed a superior image quality at 10 and 20 mm/s compared with that at 40 mm/s. OFDI demonstrated high accuracy and reproducibility for longitudinal stent measurements. Moreover, its accuracy and reproducibility were remarkable at a higher pullback speed. A 20-mm/s pullback speed may be optimal for clinical and research purposes. PMID:25971841

  5. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-05-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3% (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14% from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks, in addition to long-term trend analysis. Such retrievals complement the high-accuracy, high-precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON), with time series dating back 15 yr or so before TCCON operations began. MIR-GBM v1.0 uses HITRAN 2000 (including the 2001 update release) and 3 spectral micro-windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio; it is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the ratio of root-mean-square spectral residuals to information content (<0.15%). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from the National Centers for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are HDO/H2O-CH4 interference errors (seasonal bias up to ≈4%). Therefore interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative of all NDACC sites (Wollongong maximum = 44.9 mm, Garmisch mean = 14.9 mm

  6. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-09-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3% (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14% from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks, in addition to long-term trend analysis. Such retrievals complement the high-accuracy, high-precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON), with time series dating back 15 years or so before TCCON operations began. MIR-GBM v1.0 uses HITRAN 2000 (including the 2001 update release) and 3 spectral micro-windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio; it is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the goodness of fit (χ2 < 1) as well as for the ratio of root-mean-square spectral noise to information content (<0.15%). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from the National Centers for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are systematic HDO/H2O-CH4 interference errors leading to a seasonal bias of up to ≈5%. Therefore interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative of all NDACC
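    The first-order Tikhonov constraint mentioned in both versions of this abstract penalizes differences between adjacent elements of the state vector, damping spurious oscillations in the retrieved profile without forcing it toward a fixed shape. A generic sketch of such a regularized least-squares solve (not the MIR-GBM code; `K`, `y`, and `alpha` are placeholders for a forward-model Jacobian, measurement vector, and regularization strength) is:

```python
import numpy as np

def tikhonov_first_order(K, y, alpha):
    """Solve min ||K x - y||^2 + alpha^2 ||L x||^2, where L is the
    first-difference operator (first-order Tikhonov constraint)."""
    n = K.shape[1]
    L = np.diff(np.eye(n), axis=0)           # (n-1) x n first-difference operator
    A = K.T @ K + alpha ** 2 * (L.T @ L)     # regularized normal equations
    return np.linalg.solve(A, K.T @ y)
```

With `alpha = 0` the fit is unconstrained; as `alpha` grows the solution is pulled toward a constant profile, which is why the strength must be tuned so that diurnal noise is suppressed without damping real seasonality.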

  7. Airborne Laser CO2 Column Measurements: Evaluation of Precision and Accuracy Under a Wide Range of Surface and Atmospheric Conditions

    NASA Astrophysics Data System (ADS)

    Browell, E. V.; Dobler, J. T.; Kooi, S. A.; Fenn, M. A.; Choi, Y.; Vay, S. A.; Harrison, F. W.; Moore, B.

    2011-12-01

    This paper discusses the latest flight test results of a multi-frequency intensity-modulated (IM) continuous-wave (CW) laser absorption spectrometer (LAS) that operates near 1.57 μm for remote CO2 column measurements. This IM-LAS system is under development for a future space-based mission to determine the global distribution of regional-scale CO2 sources and sinks, which is the objective of the NASA Active Sensing of CO2 Emissions during Nights, Days, and Seasons (ASCENDS) mission. A prototype of the ASCENDS system, called the Multi-frequency Fiber Laser Lidar (MFLL), has been flight tested in eleven airborne campaigns since May 2005. This paper compares the most recent results obtained during the 2010 and 2011 UC-12 and DC-8 flight tests, where MFLL remote CO2 column measurements were evaluated against airborne in situ CO2 profile measurements traceable to World Meteorological Organization standards. The major change to the MFLL system in 2011 was the implementation of several different IM modes, which could be quickly changed in flight, to directly compare the precision and accuracy of MFLL CO2 measurements in each mode. The different IM modes that were evaluated included "fixed" IM frequencies near 50, 200, and 500 kHz; frequencies changed in short time steps (Stepped); continuously swept frequencies (Swept); and a pseudo noise (PN) code. The Stepped, Swept, and PN modes were generated to evaluate the ability of these IM modes to desensitize MFLL CO2 column measurements to intervening optically thin aerosols/clouds. MFLL was flown on the NASA Langley UC-12 aircraft in May 2011 to evaluate the newly implemented IM modes and their impact on CO2 measurement precision and accuracy, and to determine which IM mode provided the greatest thin cloud rejection (TCR) for the CO2 column measurements. Within the current hardware limitations of the MFLL system, the "fixed" 50 kHz results produced similar SNR values to those found previously. The SNR decreased as expected

  8. Application of U-Pb ID-TIMS dating to the end-Triassic global crisis: testing the limits on precision and accuracy in a multidisciplinary whodunnit (Invited)

    NASA Astrophysics Data System (ADS)

    Schoene, B.; Schaltegger, U.; Guex, J.; Bartolini, A.

    2010-12-01

    The ca. 201.4 Ma Triassic-Jurassic boundary is characterized by one of the most devastating mass extinctions in Earth history, subsequent biologic radiation, rapid carbon cycle disturbances, and enormous flood basalt volcanism (Central Atlantic Magmatic Province - CAMP). Considerable uncertainty remains regarding the temporal and causal relationship between these events, though this link is important for understanding global environmental change under extreme stresses. We present ID-TIMS U-Pb zircon geochronology on volcanic ash beds from two marine sections that span the Triassic-Jurassic boundary and from the CAMP in North America. To compare the timing of the extinction with the onset of the CAMP, we assess the precision and accuracy of ID-TIMS U-Pb zircon geochronology by exploring random and systematic uncertainties, reproducibility, open-system behavior, and pre-eruptive crystallization of zircon. We find that U-Pb ID-TIMS dates on single zircons can be internally and externally reproducible at 0.05% of the age, consistent with recent experiments coordinated through the EARTHTIME network. Increased precision combined with methods alleviating Pb-loss in zircon reveals that these ash beds contain zircon that crystallized between 10^5 and 10^6 years prior to eruption. Mineral dates older than eruption ages can bias all geochronologic methods, and therefore new tools exploring this form of “geologic uncertainty” will lead to better time constraints for ash bed deposition. In an effort to understand zircon dates within the framework of a magmatic system, we analyzed zircon trace elements by solution ICPMS for the same volume of zircon dated by ID-TIMS. In one example we argue that zircon trace element patterns as a function of time result from a mix of xeno-, ante-, and autocrystic zircons in the ash bed, and approximate eruption age with the youngest zircon date. In a contrasting example from a suite of Cretaceous andesites, zircon trace elements

  9. Reproducible Science

    PubMed Central

    Casadevall, Arturo; Fang, Ferric C.

    2010-01-01

    The reproducibility of an experimental result is a fundamental assumption in science. Yet, results that are merely confirmatory of previous findings are given low priority and can be difficult to publish. Furthermore, the complex and chaotic nature of biological systems imposes limitations on the replicability of scientific experiments. This essay explores the importance and limits of reproducibility in scientific manuscripts. PMID:20876290

  10. Effect of modulation frequency bandwidth on measurement accuracy and precision for digital diffuse optical spectroscopy (dDOS)

    NASA Astrophysics Data System (ADS)

    Jung, Justin; Istfan, Raeef; Roblyer, Darren

    2014-03-01

    Near-infrared (NIR) frequency-domain Diffuse Optical Spectroscopy (DOS) is an emerging technology with a growing number of potential clinical applications. In an effort to reduce DOS system complexity and improve portability, we recently demonstrated a direct digital sampling method that utilizes digital signal generation and detection as a replacement for more traditional analog methods. In our technique, a fast analog-to-digital converter (ADC) samples the detected time-domain radio frequency (RF) waveforms at each modulation frequency in a broad-bandwidth sweep (50-300 MHz). While we have shown that this method provides comparable results to other DOS technologies, the process is data intensive, as digital samples must be stored and processed for each modulation frequency and wavelength. We explore here the effect of reducing the modulation frequency bandwidth on the accuracy and precision of extracted optical properties. To accomplish this, the performance of the digital DOS (dDOS) system was compared to a gold-standard network-analyzer-based DOS system. With a starting frequency of 50 MHz, the input signal of the dDOS system was swept to 100, 150, 250, or 300 MHz in 4 MHz increments, and results were compared to full 50-300 MHz network-analyzer DOS measurements. The average errors in extracted μa and μs' with dDOS were lowest for the full 50-300 MHz sweep (less than 3%) and were within 3.8% for frequency bandwidths as narrow as 50-150 MHz. The errors increased to as much as 9.0% when a bandwidth of 50-100 MHz was tested. These results demonstrate the possibility of reduced data collection with dDOS without critical compromise of optical property extraction.

  11. Accuracy, precision and response time of consumer fork, remote digital probe and disposable indicator thermometers for cooked ground beef patties and chicken breasts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nine different commercially available instant-read consumer thermometers (forks, remotes, digital probe and disposable color change indicators) were tested for accuracy and precision compared to a calibrated thermocouple in 80 percent and 90 percent lean ground beef patties, and boneless and bone-in...

  12. An Examination of the Precision and Technical Accuracy of the First Wave of Group-Randomized Trials Funded by the Institute of Education Sciences

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Raudenbush, Stephen W.

    2009-01-01

    This article examines the power analyses for the first wave of group-randomized trials funded by the Institute of Education Sciences. Specifically, it assesses the precision and technical accuracy of the studies. The authors identified the appropriate experimental design and estimated the minimum detectable standardized effect size (MDES) for each…

  13. Deformable Image Registration for Adaptive Radiation Therapy of Head and Neck Cancer: Accuracy and Precision in the Presence of Tumor Changes

    SciTech Connect

    Mencarelli, Angelo; Kranen, Simon Robert van; Hamming-Vrieze, Olga; Beek, Suzanne van; Nico Rasch, Coenraad Robert; Herk, Marcel van; Sonke, Jan-Jakob

    2014-11-01

    Purpose: To compare deformable image registration (DIR) accuracy and precision for normal and tumor tissues in head and neck cancer patients during the course of radiation therapy (RT). Methods and Materials: Thirteen patients with oropharyngeal tumors who underwent submucosal implantation of small gold markers (average 6, range 4-10) around the tumor and were treated with RT were retrospectively selected. Two observers identified 15 anatomical features (landmarks) representative of normal tissues in the planning computed tomography (pCT) scan and in weekly cone beam CTs (CBCTs). Gold markers were digitally removed after semiautomatic identification in pCTs and CBCTs. Subsequently, landmarks and gold markers on pCT were propagated to CBCTs using a b-spline-based DIR and, for comparison, rigid registration (RR). To account for observer variability, the pair-wise difference analysis-of-variance method was applied. DIR accuracy (systematic error) and precision (random error) for landmarks and gold markers were quantified. Time trends of the precision of RR and DIR over the weekly CBCTs were evaluated. Results: DIR accuracy was submillimeter and similar for normal and tumor tissue. DIR precision (1 SD), on the other hand, was significantly different (P<.01), with a 2.2 mm vector length in normal tissue versus 3.3 mm in tumor tissue. No significant time trend in DIR precision was found for normal tissue, whereas in tumor, DIR precision degraded significantly (P<.009) during the course of treatment by 0.21 mm/week. Conclusions: DIR for tumor registration proved to be less precise than that for normal tissues due to limited contrast and complex non-elastic tumor response. Caution should therefore be exercised when applying DIR to tumor changes in adaptive procedures.

  14. Millimeter-accuracy GPS landslide monitoring using Precise Point Positioning with Single Receiver Phase Ambiguity (PPP-SRPA) resolution: a case study in Puerto Rico

    NASA Astrophysics Data System (ADS)

    Wang, G. Q.

    2013-03-01

    Continuous Global Positioning System (GPS) monitoring is essential for establishing the rate and pattern of superficial movements of landslides. This study demonstrates a technique that uses a stand-alone GPS station to conduct millimeter-accuracy landslide monitoring. The Precise Point Positioning with Single Receiver Phase Ambiguity (PPP-SRPA) resolution implemented in the GIPSY/OASIS software package (V6.1.2) was applied in this study. Two years of continuous GPS data collected at a creeping landslide were used to evaluate the accuracy of the PPP-SRPA solutions. The criterion for accuracy was the root-mean-square (RMS) of residuals of the PPP-SRPA solutions with respect to "true" landslide displacements over the two-year period. RMS is often regarded as repeatability or precision in the GPS literature; however, when contrasted with a known "true" position or displacement it can be termed RMS accuracy, or simply accuracy. This study indicated that the PPP-SRPA resolution can provide an accuracy of 2 to 3 mm horizontally and 8 mm vertically for 24-hour sessions, with few outliers (<1%), in the Puerto Rico region. Horizontal accuracy below 5 mm can be stably achieved with 4-hour or longer sessions provided data collection during extreme weather conditions is avoided. Vertical accuracy below 10 mm can be achieved with 8-hour or longer sessions. This study indicates that the PPP-SRPA resolution is competitive with the conventional carrier-phase double-difference network resolution for static (longer than 4 hours) landslide monitoring while maintaining many advantages. It is evident that the PPP-SRPA method could become an attractive alternative to the conventional carrier-phase double-difference method for landslide monitoring, notably in remote areas or developing countries.
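    The RMS-accuracy criterion used here is simple to state: square each residual against the known "true" displacement, average, and take the root. A minimal sketch (pure Python, illustrative values):

```python
import math

def rms_accuracy(estimates, truth):
    """RMS of residuals against known 'true' displacements: an RMS of
    errors relative to truth (accuracy), rather than scatter about the
    solution mean (precision/repeatability)."""
    residuals = [e - t for e, t in zip(estimates, truth)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))
```

The distinction the abstract draws is exactly this reference point: the same RMS formula measures precision when taken about the mean, and accuracy when taken about an independently known truth.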

  15. High precision and high accuracy isotopic measurement of uranium using lead and thorium calibration solutions by inductively coupled plasma-multiple collector-mass spectrometry

    SciTech Connect

    Bowen, I.; Walder, A.J.; Hodgson, T.; Parrish, R.R.

    1998-12-31

    A novel method for the high accuracy and high precision measurement of uranium isotopic composition by Inductively Coupled Plasma-Multiple Collector-Mass Spectrometry is discussed. Uranium isotopic samples are spiked with either thorium or lead for use as internal calibration reference materials. This method eliminates the necessity to periodically measure uranium standards to correct for changing mass bias when samples are measured over long time periods. This technique has generated among the highest levels of analytical precision on both the major and minor isotopes of uranium. Sample throughput has also been demonstrated to exceed Thermal Ionization Mass Spectrometry by a factor of four to five.

  16. Investigation of 3D glenohumeral displacements from 3D reconstruction using biplane X-ray images: Accuracy and reproducibility of the technique and preliminary analysis in rotator cuff tear patients.

    PubMed

    Zhang, Cheng; Skalli, Wafa; Lagacé, Pierre-Yves; Billuart, Fabien; Ohl, Xavier; Cresson, Thierry; Bureau, Nathalie J; Rouleau, Dominique M; Roy, André; Tétreault, Patrice; Sauret, Christophe; de Guise, Jacques A; Hagemeister, Nicola

    2016-08-01

    Rotator cuff (RC) tears may be associated with increased glenohumeral instability; however, this instability is difficult to quantify using currently available diagnostic tools. Recently, a three-dimensional (3D) reconstruction and registration method for the scapula and humeral head, based on sequences of low-dose biplane X-ray images, has been proposed for glenohumeral displacement assessment. This research aimed to evaluate the accuracy and reproducibility of this technique and to investigate its potential with a preliminary application comparing RC tear patients and asymptomatic volunteers. Accuracy was assessed using CT scan model registration on biplane X-ray images for five cadaveric shoulder specimens and showed differences ranging from 0.6 to 1.4 mm depending on the direction of interest. Intra- and interobserver reproducibility was assessed by having two operators repeat the reconstruction of five subjects three times, yielding 95% confidence intervals ranging from ±1.8 to ±3.6 mm. The intraclass correlation coefficient varied between 0.84 and 0.98. Comparison between RC tear patients and asymptomatic volunteers showed differences in glenohumeral displacements, especially in the superoinferior direction with the shoulder abducted at 20° and 45°. This study thus assessed the accuracy of the low-dose 3D biplane X-ray reconstruction technique for glenohumeral displacement assessment and showed its potential in biomechanical and clinical research. PMID:26350569

  17. Towards the GEOSAT Follow-On Precise Orbit Determination Goals of High Accuracy and Near-Real-Time Processing

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Zelensky, Nikita P.; Chinn, Douglas S.; Beckley, Brian D.; Lillibridge, John L.

    2006-01-01

    The US Navy's GEOSAT Follow-On (GFO) spacecraft's primary mission objective is to map the oceans using a radar altimeter. Satellite laser ranging (SLR) data, especially in combination with altimeter crossover data, offer the only means of determining high-quality precise orbits. Two tuned gravity models, PGS7727 and PGS7777b, were created at NASA GSFC for GFO that reduce the predicted radial orbit error through degree 70 to 13.7 and 10.0 mm, respectively. A macromodel was developed to model the nonconservative forces, and the SLR spacecraft measurement offset was adjusted to remove a mean bias. Using these improved models, satellite ranging data, altimeter crossover data, and Doppler data are used to compute daily medium-precision orbits with a latency of less than 24 hours. Final precise orbits are also computed using these tracking data and exported with a latency of three to four weeks to NOAA for use on the GFO Geophysical Data Records (GDRs). The estimated precision of the daily orbits is between 10 and 20 cm, whereas the precise orbits have a precision of 5 cm.

  18. The precision and accuracy of iterative and non-iterative methods of photopeak integration in activation analysis, with particular reference to the analysis of multiplets

    USGS Publications Warehouse

    Baedecker, P.A.

    1977-01-01

    The relative precisions obtainable using two digital methods and three iterative least-squares fitting procedures of photopeak integration have been compared empirically using 12 replicate counts of a test sample with 14 photopeaks of varying intensity. The accuracy with which the various iterative fitting methods could analyse synthetic doublets has also been evaluated and compared with a simple non-iterative approach. © 1977 Akadémiai Kiadó.
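    As a concrete example of the kind of simple non-iterative approach such comparisons use as a baseline, a Covell-style net-area estimate sums the counts across the peak region and subtracts a straight-line baseline estimated from a few channels on either side (a generic sketch, not Baedecker's exact procedure):

```python
def photopeak_area(counts, lo, hi, n_base=3):
    """Simple non-iterative net photopeak area: sum counts in channels
    lo..hi and subtract a baseline taken as the average of n_base
    channels on each side of the peak, scaled to the peak width."""
    gross = sum(counts[lo:hi + 1])
    left = sum(counts[lo - n_base:lo]) / n_base         # baseline, left side
    right = sum(counts[hi + 1:hi + 1 + n_base]) / n_base  # baseline, right side
    width = hi - lo + 1
    return gross - (left + right) / 2.0 * width
```

Such a method is fast and unbiased for an isolated peak on a linear baseline, but has no way to apportion counts within an unresolved multiplet, which is where the iterative fitting procedures earn their extra complexity.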

  19. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  20. Accuracy and precision of a custom camera-based system for 2-d and 3-d motion tracking during speech and nonspeech motor tasks.

    PubMed

    Feng, Yongqiang; Max, Ludo

    2014-04-01

    PURPOSE Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and submillimeter accuracy. METHOD The authors examined the accuracy and precision of 2-D and 3-D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. RESULTS Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3- vs. 6-mm diameter) was negligible at all frame rates for both 2-D and 3-D data. CONCLUSION Motion tracking with consumer-grade digital cameras and the APAS software can achieve submillimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  1. Assessment of the Precision and Reproducibility of Ventricular Volume, Function and Mass Measurements with Ferumoxytol-Enhanced 4D Flow MRI

    PubMed Central

    Hanneman, Kate; Kino, Aya; Cheng, Joseph Y; Alley, Marcus T; Vasanawala, Shreyas S

    2016-01-01

    Purpose To compare the precision and inter-observer agreement of ventricular volume, function and mass quantification by three-dimensional time-resolved (4D) flow MRI relative to cine steady state free precession (SSFP). Materials and Methods With research board approval, informed consent, and HIPAA compliance, 22 consecutive patients with congenital heart disease (CHD) (10 males, 6.4±4.8 years) referred for 3T ferumoxytol-enhanced cardiac MRI were prospectively recruited. Complete ventricular coverage with standard 2D short-axis cine SSFP and whole chest coverage with axial 4D flow were obtained. Two blinded radiologists independently segmented images for left ventricular (LV) and right ventricular (RV) myocardium at end systole (ES) and end diastole (ED). Statistical analysis included linear regression, ANOVA, Bland-Altman (BA) analysis, and intra-class correlation (ICC). Results Significant positive correlations were found between 4D flow and SSFP for ventricular volumes (r = 0.808–0.972, p<0.001), ejection fraction (EF) (r = 0.900–0.928, p<0.001), and mass (r = 0.884–0.934, p<0.001). BA relative limits of agreement for both ventricles were between −52% and 34% for volumes, −29% and 27% for EF, and −41% and 48% for mass, with wider limits of agreement for the RV compared to the LV. There was no significant difference between techniques with respect to mean square difference of ED-ES mass for either LV (F=2.05, p=0.159) or RV (F=0.625, p=0.434). Inter-observer agreement was moderate to good with both 4D flow (ICC 0.523–0.993) and SSFP (ICC 0.619–0.982), with overlapping confidence intervals. Conclusion Quantification of ventricular volume, function and mass can be accomplished with 4D flow MRI with precision and inter-observer agreement comparable to that of cine SSFP. PMID:26871420
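The Bland-Altman relative limits of agreement quoted above (pairwise differences expressed as a percentage of the pairwise mean, bias ± 1.96 SD) can be sketched as follows; the paired ventricular volumes are hypothetical:

```python
import numpy as np

def bland_altman_relative(a, b):
    """Relative Bland-Altman limits of agreement, in percent:
    bias ± 1.96 SD of differences scaled by the pairwise mean."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    rel_diff = 100.0 * (a - b) / ((a + b) / 2.0)
    bias = rel_diff.mean()
    sd = rel_diff.std(ddof=1)
    return bias - 1.96 * sd, bias + 1.96 * sd  # (lower, upper) limits

# Hypothetical paired end-diastolic volumes (mL): 4D flow vs. cine SSFP
lo, hi = bland_altman_relative([82, 95, 110, 64, 78], [85, 90, 118, 60, 80])
```

Scaling each difference by the pairwise mean, rather than using absolute differences, is what makes the limits comparable across the wide range of ventricular volumes seen in a pediatric CHD cohort.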

  2. Elusive reproducibility.

    PubMed

    Gori, Gio Batta

    2014-08-01

    Reproducibility remains a mirage for many biomedical studies because inherent experimental uncertainties generate idiosyncratic outcomes. The authentication and error rates of primary empirical data are often elusive, while multifactorial confounders beset experimental setups. Substantive methodological remedies are difficult to conceive, signifying that many biomedical studies yield more or less plausible results, depending on the attending uncertainties. Real life applications of those results remain problematic, with important exceptions for counterfactual field validations of strong experimental signals, notably for some vaccines and drugs, and for certain safety and occupational measures. It is argued that industrial, commercial and public policies and regulations could not ethically rely on unreliable biomedical results; rather, they should be rationally grounded on transparent cost-benefit tradeoffs. PMID:24882687

  3. Reproducibility in a multiprocessor system

    DOEpatents

    Bellofatto, Ralph A; Chen, Dong; Coteus, Paul W; Eisley, Noel A; Gara, Alan; Gooding, Thomas M; Haring, Rudolf A; Heidelberger, Philip; Kopcsay, Gerard V; Liebsch, Thomas A; Ohmacht, Martin; Reed, Don D; Senger, Robert M; Steinmacher-Burow, Burkhard; Sugawara, Yutaka

    2013-11-26

    Fixing a problem is usually greatly aided if the problem is reproducible. To ensure reproducibility of a multiprocessor system, the following aspects are proposed: a deterministic system start state, a single system clock, phase alignment of clocks in the system, system-wide synchronization events, reproducible execution of system components, deterministic chip interfaces, zero-impact communication with the system, precise stop of the system, and a scan of the system state.

  4. Progress integrating ID-TIMS U-Pb geochronology with accessory mineral geochemistry: towards better accuracy and higher precision time

    NASA Astrophysics Data System (ADS)

    Schoene, B.; Samperton, K. M.; Crowley, J. L.; Cottle, J. M.

    2012-12-01

    It is increasingly common that hand samples of plutonic and volcanic rocks contain zircon with dates that span between zero and >100 ka. This recognition comes from the increased application of U-series geochronology on young volcanic rocks and the increased precision, to better than 0.1% on single zircons, of the U-Pb ID-TIMS method. It has thus become more difficult to interpret such complicated datasets in terms of ash bed eruption or magma emplacement, which are critical constraints for geochronologic applications ranging from biotic evolution and the stratigraphic record to magmatic and metamorphic processes in orogenic belts. It is important, therefore, to develop methods that aid in interpreting which minerals, if any, date the targeted process. One promising tactic is to better integrate accessory mineral geochemistry with high-precision ID-TIMS U-Pb geochronology. These dual constraints can 1) identify cogenetic populations of minerals, and 2) record magmatic or metamorphic fluid evolution through time. Goal (1) has been widely sought with in situ geochronology and geochemical analysis but is limited by low-precision dates. Recent work has attempted to bridge this gap by retrieving the typically discarded elution from the ion exchange chemistry that precedes ID-TIMS U-Pb geochronology and analyzing it by ICP-MS (U-Pb TIMS-TEA). The result integrates geochemistry and high-precision geochronology from the exact same volume of material. The limitation of this method is its relatively coarse spatial resolution compared to in situ techniques, which averages potentially complicated trace element profiles through single minerals or mineral fragments. In continued work, we test the effect of this on zircon by beginning with CL imaging to reveal internal zonation and growth histories. This is followed by in situ LA-ICPMS trace element transects of imaged grains to reveal internal geochemical zonation. 
The same grains are then removed from grain-mount, fragmented, and

  5. Leaf vein length per unit area is not intrinsically dependent on image magnification: avoiding measurement artifacts for accuracy and precision.

    PubMed

    Sack, Lawren; Caringella, Marissa; Scoffoni, Christine; Mason, Chase; Rawls, Michael; Markesteijn, Lars; Poorter, Lourens

    2014-10-01

    Leaf vein length per unit leaf area (VLA; also known as vein density) is an important determinant of water and sugar transport, photosynthetic function, and biomechanical support. A range of software methods are in use to visualize and measure vein systems in cleared leaf images; typically, users locate veins by digital tracing, but recent articles introduced software by which users can locate veins using thresholding (i.e., based on the contrast of veins in the image). Based on the use of this method, a recent study argued against the existence of a fixed VLA value for a given leaf, proposing instead that VLA increases with the magnification of the image due to intrinsic properties of the vein system, and recommended that future measurements use a common, low image magnification for measurements. We tested these claims with new measurements using the software LEAFGUI in comparison with digital tracing using ImageJ software. We found that the apparent increase of VLA with magnification was an artifact of (1) using low-quality and low-magnification images and (2) errors in the algorithms of LEAFGUI. Given the use of images of sufficient magnification and quality, and analysis with error-free software, the VLA can be measured precisely and accurately. These findings point to important principles for improving the quantity and quality of important information gathered from leaf vein systems. PMID:25096977

  6. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.

    2004-01-01

    Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at <http://croc.gsfc.nasa.gov/shadoz>. In an analysis of ozonesonde imprecision within the SHADOZ dataset, Thompson et al. [JGR, 108, 8238, 2003] pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecisions and accuracy in the SHADOZ dataset are examined in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSIE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing and instrument type (manufacturer).

  7. EFFECT OF RADIATION DOSE LEVEL ON ACCURACY AND PRECISION OF MANUAL SIZE MEASUREMENTS IN CHEST TOMOSYNTHESIS EVALUATED USING SIMULATED PULMONARY NODULES

    PubMed Central

    Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus

    2016-01-01

    The aim of the present study was to investigate the dependency of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease of the measurement accuracy and intraobserver variability was seen for the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels compared with the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, using a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, an increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding an applied general dose reduction for chest tomosynthesis examinations in the clinical praxis. PMID:26994093

  8. EFFECT OF RADIATION DOSE LEVEL ON ACCURACY AND PRECISION OF MANUAL SIZE MEASUREMENTS IN CHEST TOMOSYNTHESIS EVALUATED USING SIMULATED PULMONARY NODULES.

    PubMed

    Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus

    2016-06-01

    The aim of the present study was to investigate the dependency of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease of the measurement accuracy and intraobserver variability was seen for the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels compared with the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, using a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, an increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding an applied general dose reduction for chest tomosynthesis examinations in the clinical praxis. PMID:26994093

  9. SU-E-J-147: Monte Carlo Study of the Precision and Accuracy of Proton CT Reconstructed Relative Stopping Power Maps

    SciTech Connect

    Dedes, G; Asano, Y; Parodi, K; Arbor, N; Dauvergne, D; Testa, E; Letang, J; Rit, S

    2015-06-15

    Purpose: The quantification of the intrinsic performance of proton computed tomography (pCT) as a modality for treatment planning in proton therapy. The performance of an ideal pCT scanner is studied as a function of various parameters. Methods: Using GATE/Geant4, we simulated an ideal pCT scanner and scans of several cylindrical phantoms with various tissue equivalent inserts of different sizes. Insert materials were selected in order to be of clinical relevance. Tomographic images were reconstructed using a filtered backprojection algorithm taking into account the scattering of protons into the phantom. To quantify the performance of the ideal pCT scanner, we study the precision and the accuracy with respect to the theoretical relative stopping power (RSP) values for different beam energies, imaging doses, insert sizes and detector positions. The planning range uncertainty resulting from the reconstructed RSP is also assessed by comparison with the range of the protons in the analytically simulated phantoms. Results: The results indicate that pCT can intrinsically achieve RSP resolution below 1%, for most examined tissues at beam energies below 300 MeV and for imaging doses around 1 mGy. RSP map errors of less than 0.5% are observed for most tissue types within the studied dose range (0.2–1.5 mGy). Finally, the uncertainty in the proton range due to the accuracy of the reconstructed RSP map is well below 1%. Conclusion: This work explores the intrinsic performance of pCT as an imaging modality for proton treatment planning. The obtained results show that under ideal conditions, 3D RSP maps can be reconstructed with an accuracy better than 1%. Hence, pCT is a promising candidate for reducing the range uncertainties introduced by the use of X-ray CT along with a semiempirical calibration to RSP. Supported by the DFG Cluster of Excellence Munich-Centre for Advanced Photonics (MAP)

  10. Factors influencing accuracy and precision in the determination of the elemental composition of defense waste glass by ICP-emission spectrometry

    SciTech Connect

    Goode, S.R.

    1995-12-31

    The influence of instrumental factors on the accuracy and precision of the determination of the composition of glass and glass feedstock is presented. In addition, the effects of different sampling methods, dissolution methods, and standardization procedures on the quality of the chemical analysis will also be presented. The target glass simulates the material that will be prepared by the vitrification of highly radioactive liquid defense waste. The glass and feedstock streams must be well characterized to ensure a durable glass; current models estimate a 100,000-year lifetime. The elemental composition will be determined by ICP-emission spectrometry, with radiation exposure issues requiring a multielement analysis for all constituents, on a single analytical sample, using compromise conditions.

  11. Approaches for achieving long-term accuracy and precision of δ18O and δ2H for waters analyzed using laser absorption spectrometers.

    PubMed

    Wassenaar, Leonard I; Coplen, Tyler B; Aggarwal, Pradeep K

    2014-01-21

    The measurement of δ(2)H and δ(18)O in water samples by laser absorption spectroscopy (LAS) are adopted increasingly in hydrologic and environmental studies. Although LAS instrumentation is easy to use, its incorporation into laboratory operations is not as easy, owing to extensive offline data manipulation required for outlier detection, derivation and application of algorithms to correct for between-sample memory, correcting for linear and nonlinear instrumental drift, VSMOW-SLAP scale normalization, and in maintaining long-term QA/QC audits. Here we propose a series of standardized water-isotope LAS performance tests and routine sample analysis templates, recommended procedural guidelines, and new data processing software (LIMS for Lasers) that altogether enables new and current LAS users to achieve and sustain long-term δ(2)H and δ(18)O accuracy and precision for these important isotopic assays. PMID:24328223

  12. Quantifying precision and accuracy of measurements of dissolved inorganic carbon stable isotopic composition using continuous-flow isotope-ratio mass spectrometry

    PubMed Central

    Waldron, Susan; Marian Scott, E; Vihermaa, Leena E; Newton, Jason

    2014-01-01

    RATIONALE We describe an analytical procedure that allows sample collection and measurement of carbon isotopic composition (δ13CV-PDB value) and dissolved inorganic carbon concentration, [DIC], in aqueous samples without further manipulation post field collection. By comparing outputs from two different mass spectrometers, we quantify with statistical rigour the uncertainty associated with the estimation of an unknown measurement. This is rarely undertaken, but it is needed to understand the significance of field data and to interpret quality assurance exercises. METHODS Immediate acidification of field samples during collection in evacuated, pre-acidified vials removed the need for toxic chemicals to inhibit continued bacterial activity that might compromise isotopic and concentration measurements. Aqueous standards mimicked the sample matrix and avoided headspace fractionation corrections. Samples were analysed using continuous-flow isotope-ratio mass spectrometry, but for low DIC concentration the mass spectrometer response could be non-linear. This had to be corrected for. RESULTS Mass spectrometer non-linearity exists. Rather than estimating precision as the repeat analysis of an internal standard, we have adopted inverse linear calibrations to quantify the precision and 95% confidence intervals (CI) of the δ13CDIC values. The response for [DIC] estimation was always linear. For 0.05–0.5 mM DIC internal standards, however, changes in mass spectrometer linearity resulted in estimations of the precision in the δ13CV-PDB value of an unknown ranging from ±0.44‰ to ±1.33‰ (mean values) and a mean 95% CI half-width of ±1.1–3.1‰. CONCLUSIONS Mass spectrometer non-linearity should be considered in estimating uncertainty in measurement. Similarly, statistically robust estimates of precision and accuracy should also be adopted. Such estimations do not inhibit research advances: our consideration of small-scale spatial variability at two points on a

  13. A Study of the Accuracy and Precision Among XRF, ICP-MS, and PIXE on Trace Element Analyses of Small Water Samples

    NASA Astrophysics Data System (ADS)

    Naik, Sahil; Patnaik, Ritish; Kummari, Venkata; Phinney, Lucas; Dhoubhadel, Mangal; Jesseph, Aaron; Hoffmann, William; Verbeck, Guido; Rout, Bibhudutta

    2010-10-01

    The study aimed to compare the viability, precision, and accuracy among three popular instruments - X-ray Fluorescence (XRF), Inductively Coupled Plasma Mass Spectrometry (ICP-MS), and Particle-Induced X-ray Emission (PIXE) - used to analyze the trace elemental composition of small water samples. Ten-milliliter water samples from public tap water sources in seven different localities in India (Bangalore, Kochi, Bhubaneswar, Cuttack, Puri, Hospet, and Pipili) were prepared through filtration and dilution for proper analysis. The project hypothesizes that the ICP-MS will give the most accurate and precise trace elemental analysis, followed by PIXE and XRF. XRF will be seen as a portable and affordable instrument that can analyze samples on-site, while ICP-MS is an extremely accurate but expensive option for off-site analyses. PIXE will be deemed too expensive and cumbersome for on-site analysis; however, laboratories with a PIXE accelerator can use the instrument to get accurate analyses.

  14. Improving Precision and Accuracy of Isotope Ratios from Short Transient Laser Ablation-Multicollector-Inductively Coupled Plasma Mass Spectrometry Signals: Application to Micrometer-Size Uranium Particles.

    PubMed

    Claverie, Fanny; Hubert, Amélie; Berail, Sylvain; Donard, Ariane; Pointurier, Fabien; Pécheyran, Christophe

    2016-04-19

    The isotope drift encountered on short transient signals measured by multicollector inductively coupled plasma mass spectrometry (MC-ICPMS) is related to differences in detector time responses. Faraday to Faraday and Faraday to ion counter time lags were determined and corrected using VBA data processing based on the synchronization of the isotope signals. The coefficient of determination of the linear fit between the two isotopes was selected as the best criterion to obtain accurate detector time lag. The procedure was applied to the analysis by laser ablation-MC-ICPMS of micrometer sized uranium particles (1-3.5 μm). Linear regression slope (LRS) (one isotope plotted over the other), point-by-point, and integration methods were tested to calculate the (235)U/(238)U and (234)U/(238)U ratios. Relative internal precisions of 0.86 to 1.7% and 1.2 to 2.4% were obtained for (235)U/(238)U and (234)U/(238)U, respectively, using LRS calculation, time lag, and mass bias corrections. A relative external precision of 2.1% was obtained for (235)U/(238)U ratios with good accuracy (relative difference with respect to the reference value below 1%). PMID:27031645
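The detector time-lag determination described here — picking the inter-signal shift that maximizes the coefficient of determination of a linear fit between two isotope traces — might be sketched as follows. This is a simplified Python stand-in for the authors' VBA processing, using a synthetic Gaussian transient:

```python
import numpy as np

def detector_time_lag(sig_a, sig_b, max_lag=20):
    """Find the sample shift of sig_b that maximizes the coefficient of
    determination (r^2) of a linear fit between the two isotope signals."""
    best_lag, best_r2 = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        b = np.roll(sig_b, lag)
        sl = slice(max_lag, len(sig_b) - max_lag)  # ignore wrapped-around edges
        r = np.corrcoef(sig_a[sl], b[sl])[0, 1]
        if r ** 2 > best_r2:
            best_lag, best_r2 = lag, r ** 2
    return best_lag

# Synthetic transient: a Gaussian peak, second detector delayed by 3 samples
t = np.arange(200)
peak = np.exp(-0.5 * ((t - 100) / 10.0) ** 2)
delayed = np.roll(peak, 3)
lag = detector_time_lag(peak, delayed)   # recovers -3 (shift back by 3 samples)
```

Once the lag is known, the synchronized signals can be fed into the linear regression slope, point-by-point, or integration ratio calculations the abstract compares.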

  15. An in-depth evaluation of accuracy and precision in Hg isotopic analysis via pneumatic nebulization and cold vapor generation multi-collector ICP-mass spectrometry.

    PubMed

    Rua-Ibarz, Ana; Bolea-Fernandez, Eduardo; Vanhaecke, Frank

    2016-01-01

    Mercury (Hg) isotopic analysis via multi-collector inductively coupled plasma (ICP)-mass spectrometry (MC-ICP-MS) can provide relevant biogeochemical information by revealing sources, pathways, and sinks of this highly toxic metal. In this work, the capabilities and limitations of two different sample introduction systems, based on pneumatic nebulization (PN) and cold vapor generation (CVG), respectively, were evaluated in the context of Hg isotopic analysis via MC-ICP-MS. The effect of (i) instrument settings and acquisition parameters, (ii) concentration of analyte element (Hg), and internal standard (Tl)-used for mass discrimination correction purposes-and (iii) different mass bias correction approaches on the accuracy and precision of Hg isotope ratio results was evaluated. The extent and stability of mass bias were assessed in a long-term study (18 months, n = 250), demonstrating a precision ≤0.006% relative standard deviation (RSD). CVG-MC-ICP-MS showed an approximately 20-fold enhancement in Hg signal intensity compared with PN-MC-ICP-MS. For CVG-MC-ICP-MS, the mass bias induced by instrumental mass discrimination was accurately corrected for by using either external correction in a sample-standard bracketing approach (SSB) or double correction, consisting of the use of Tl as internal standard in a revised version of the Russell law (Baxter approach), followed by SSB. Concomitant matrix elements did not affect CVG-ICP-MS results. Neither with PN, nor with CVG, any evidence for mass-independent discrimination effects in the instrument was observed within the experimental precision obtained. CVG-MC-ICP-MS was finally used for Hg isotopic analysis of reference materials (RMs) of relevant environmental origin. The isotopic composition of Hg in RMs of marine biological origin testified of mass-independent fractionation that affected the odd-numbered Hg isotopes. While older RMs were used for validation purposes, novel Hg isotopic data are provided for the
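The Tl-based mass bias correction referred to above (Russell exponential law with the internal standard, prior to sample-standard bracketing) can be sketched as follows; the measured values are hypothetical, while the certified 205Tl/203Tl ratio (NIST SRM 997) and the atomic masses are standard:

```python
import math

# Standard atomic masses (u) and the certified 205Tl/203Tl of NIST SRM 997
M = {"Tl203": 202.97234, "Tl205": 204.97443, "Hg198": 197.96677, "Hg202": 201.97064}
TL_TRUE = 2.3871

def beta_from_tl(tl_measured):
    """Mass bias factor from the Tl internal standard (Russell exponential law)."""
    return math.log(TL_TRUE / tl_measured) / math.log(M["Tl205"] / M["Tl203"])

def correct_ratio(r_meas, m_num, m_den, beta):
    """Apply the exponential law to a measured isotope ratio."""
    return r_meas * (m_num / m_den) ** beta

# Hypothetical measured values; the instrument here favours heavy isotopes,
# so the measured 205Tl/203Tl (2.42) exceeds the certified value
beta = beta_from_tl(2.42)
r = correct_ratio(0.300, M["Hg202"], M["Hg198"], beta)   # corrected 202Hg/198Hg
```

The Baxter approach mentioned in the abstract refines this by regressing the Hg and Tl biases against each other across a measurement session rather than assuming they are identical.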

  16. Dual-energy X-ray absorptiometry for measuring total bone mineral content in the rat: study of accuracy and precision.

    PubMed

    Casez, J P; Muehlbauer, R C; Lippuner, K; Kelly, T; Fleisch, H; Jaeger, P

    1994-07-01

    Sequential studies of osteopenic bone disease in small animals require the availability of non-invasive, accurate and precise methods to assess bone mineral content (BMC) and bone mineral density (BMD). Dual-energy X-ray absorptiometry (DXA), which is currently used in humans for this purpose, can also be applied to small animals by means of adapted software. Precision and accuracy of DXA were evaluated in 10 rats weighing 50-265 g. The rats were anesthetized with a mixture of ketamine-xylazine administered intraperitoneally. Each rat was scanned six times consecutively in the antero-posterior incidence after repositioning, using the rat whole-body software for determination of whole-body BMC and BMD (Hologic QDR 1000, software version 5.52). Scan duration was 10-20 min depending on rat size. After the last measurement, rats were sacrificed and soft tissues were removed by dermestid beetles. Skeletons were then scanned in vitro (ultra high resolution software, version 4.47). Bones were subsequently ashed and dissolved in hydrochloric acid and total body calcium directly assayed by atomic absorption spectrophotometry (TBCa[chem]). Total body calcium was also calculated from the DXA whole-body in vivo measurement (TBCa[DXA]) and from the ultra high resolution measurement (TBCa[UH]) under the assumption that calcium accounts for 40.5% of the BMC expressed as hydroxyapatite. Precision error for whole-body BMC and BMD (mean +/- S.D.) was 1.3% and 1.5%, respectively. Simple regression analysis between TBCa[DXA] or TBCa[UH] and TBCa[chem] revealed tight correlations (r = 0.991 and 0.996, respectively), with slopes and intercepts which were significantly different from 1 and 0, respectively.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:7950505

  17. The accuracy and precision of two non-invasive, magnetic resonance-guided focused ultrasound-based thermal diffusivity estimation methods

    PubMed Central

    Dillon, Christopher R.; Payne, Allison; Christensen, Douglas A.; Roemer, Robert B.

    2016-01-01

    Purpose The use of correct tissue thermal diffusivity values is necessary for making accurate thermal modeling predictions during magnetic resonance-guided focused ultrasound (MRgFUS) treatment planning. This study evaluates the accuracy and precision of two non-invasive thermal diffusivity estimation methods, a Gaussian Temperature method published by Cheng and Plewes in 2002 and a Gaussian specific absorption rate (SAR) method published by Dillon et al. in 2012. Materials and Methods Both methods utilize MRgFUS temperature data obtained during cooling following a short (<25 s) heating pulse. The Gaussian SAR method can also use temperatures obtained during heating. Experiments were performed at low heating levels (ΔT~10°C) in ex vivo pork muscle and in vivo rabbit back muscle. The non-invasive MRgFUS thermal diffusivity estimates were compared with measurements from two standard invasive methods. Results Both non-invasive methods accurately estimate thermal diffusivity when using MR-temperature cooling data (overall ex vivo error<6%, in vivo<12%). Including heating data in the Gaussian SAR method further reduces errors (ex vivo error<2%, in vivo<3%). The significantly lower standard deviation values (p<0.03) of the Gaussian SAR method indicate that it has better precision than the Gaussian Temperature method. Conclusions With repeated sonications, either MR-based method could provide accurate thermal diffusivity values for MRgFUS therapies. Fitting to more data simultaneously likely makes the Gaussian SAR method less susceptible to noise, and using heating data helps it converge more consistently to the FUS fitting parameters and thermal diffusivity. These effects lead to the improved precision of the Gaussian SAR method. PMID:25198092

  18. Reproducibility and imputation of air toxics data.

    PubMed

    Le, Hien Q; Batterman, Stuart A; Wahl, Robert L

    2007-12-01

    Ambient air quality datasets include missing data, values below method detection limits and outliers, and the precision and accuracy of the measurements themselves are often unknown. At the same time, many analyses require continuous data sequences and assume that measurements are error-free. While a variety of data imputation and cleaning techniques are available, the evaluation of such techniques remains limited. This study evaluates the performance of these techniques for ambient air toxics measurements, a particularly challenging application, and includes the analysis of intra- and inter-laboratory precision. The analysis uses an unusually complete dataset, consisting of daily measurements of over 70 species of carbonyls and volatile organic compounds (VOCs) collected over a one year period in Dearborn, Michigan, including 122 pairs of replicates. Analysis was restricted to compounds found above detection limits in ≥20% of the samples. Outliers were detected using the Gumbel extreme value distribution. Error models for inter- and intra-laboratory reproducibility were derived from replicate samples. Imputation variables were selected using a generalized additive model, and the performance of two techniques, multiple imputation and optimal linear estimation, was evaluated for three missingness patterns. Many species were rarely detected or had very poor reproducibility. Error models developed for seven carbonyls showed median intra- and inter-laboratory errors of 22% and 25%, respectively. Better reproducibility was seen for the 16 VOCs meeting detection and reproducibility criteria. Imputation performance depended on the compound and missingness pattern. Data missing at random could be adequately imputed, but imputations for row-wise deletions, the most common type of missingness pattern encountered, were not informative. 
The analysis shows that air toxics data require significant efforts to identify and mitigate errors, outliers and missing observations

  19. The Impact of 3D Volume-of-Interest Definition on Accuracy and Precision of Activity Estimation in Quantitative SPECT and Planar Processing Methods

    PubMed Central

    He, Bin; Frey, Eric C.

    2010-01-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise, and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT), and planar (QPlanar) processing. Another important effect impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimations. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in the same transaxial plane in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g., in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from −1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ
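The control-point perturbation scheme described in this abstract (random displacement to a nearest or next-nearest in-plane voxel, with no, inward, or outward directional bias producing random perturbation, erosion, or dilation) might be sketched in 2D as follows; the square contour and centroid are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_contour(points, centroid, bias=None):
    """Move each VOI control point to a nearest/next-nearest in-plane voxel.
    bias='in' erodes (steps toward the centroid), bias='out' dilates,
    and bias=None applies an unbiased random perturbation."""
    out = []
    for p in np.asarray(points, float):
        step = rng.integers(-1, 2, size=2)        # each component in {-1, 0, 1}
        if bias is not None:
            toward = np.sign(centroid - p)        # unit step toward the centroid
            step = np.abs(step) * (toward if bias == "in" else -toward)
        out.append(p + step)
    return np.array(out)

square = [[0, 0], [4, 0], [4, 4], [0, 4]]
eroded = perturb_contour(square, centroid=np.array([2.0, 2.0]), bias="in")
```

Repeating such perturbations over many realizations and re-integrating the activity inside each perturbed VOI gives the Monte Carlo error estimates the study reports.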

  20. Bias, precision and accuracy in the estimation of cuticular and respiratory water loss: a case study from a highly variable cockroach, Perisphaeria sp.

    PubMed

    Gray, Emilie M; Chown, Steven L

    2008-01-01

    We compared the precision, bias and accuracy of two techniques that were recently proposed to estimate the contributions of cuticular and respiratory water loss to total water loss in insects. We performed measurements of VCO2 and VH2O in normoxia, hyperoxia and anoxia using flow-through respirometry on single individuals of the highly variable cockroach Perisphaeria sp. to compare estimates of cuticular and respiratory water loss (CWL and RWL) obtained by the VH2O-VCO2 y-intercept method with those obtained by the hyperoxic switch method. Precision was determined by assessing the repeatability of values obtained, whereas bias was assessed by comparing the methods' results to each other and to values for other species found in the literature. We found that CWL was highly repeatable by both methods (R ≥ 0.88) and resulted in similar values to measures of CWL determined during the closed-phase of discontinuous gas exchange (DGE). Repeatability of RWL was much lower (R = 0.40) and significant only in the case of the hyperoxic method. RWL derived from the hyperoxic method is higher (by 0.044 µmol min−1) than that obtained from the method traditionally used for measuring water loss during the closed-phase of DGE, suggesting that in the past RWL may have been underestimated. The very low cuticular permeability of this species (3.88 µg cm−2 h−1 Torr−1) is reasonable given the seasonally hot and dry habitat where it lives. We also tested the hygric hypothesis proposed to account for the evolution of discontinuous gas exchange cycles and found no effect of respiratory pattern on RWL, although the ratio of mean VH2O to VCO2 was higher for continuous patterns compared with discontinuous ones. PMID:17949739
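The VH2O-VCO2 y-intercept method amounts to a linear regression: extrapolating total water loss to zero metabolic rate leaves the cuticular component. A sketch with simulated readings (`true_cwl`, the slope `k`, and the noise level are hypothetical, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical flow-through readings: total water loss = cuticular component
# plus a part proportional to metabolic rate (VCO2); units are arbitrary
true_cwl, k = 0.30, 2.0
vco2 = rng.uniform(0.05, 0.5, 40)
vh2o = true_cwl + k * vco2 + rng.normal(0.0, 0.01, 40)

# y-intercept method: regress VH2O on VCO2 and extrapolate to VCO2 = 0;
# the intercept estimates cuticular water loss (CWL)
slope, intercept = np.polyfit(vco2, vh2o, 1)
cwl_est = intercept
rwl_est = vh2o.mean() - cwl_est  # mean respiratory component (RWL)
```

Repeatability in the abstract's sense would then be assessed by re-running this fit on repeated measurements of the same individual.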

  1. Guidelines for Dual Energy X-Ray Absorptiometry Analysis of Trabecular Bone-Rich Regions in Mice: Improved Precision, Accuracy, and Sensitivity for Assessing Longitudinal Bone Changes.

    PubMed

    Shi, Jiayu; Lee, Soonchul; Uyeda, Michael; Tanjaya, Justine; Kim, Jong Kil; Pan, Hsin Chuan; Reese, Patricia; Stodieck, Louis; Lin, Andy; Ting, Kang; Kwak, Jin Hee; Soo, Chia

    2016-05-01

    Trabecular bone is frequently studied in osteoporosis research because changes in trabecular bone are the most common cause of osteoporotic fractures. Dual energy X-ray absorptiometry (DXA) analysis specific to trabecular bone-rich regions is crucial to longitudinal osteoporosis research. The purpose of this study is to define a novel method for accurately analyzing trabecular bone-rich regions in mice via DXA. This method will be utilized to analyze scans obtained from the International Space Station in an upcoming study of microgravity-induced bone loss. Thirty 12-week-old BALB/c mice were studied. The novel method was developed by preanalyzing trabecular bone-rich sites in the distal femur, proximal tibia, and lumbar vertebrae via high-resolution X-ray imaging followed by DXA and micro-computed tomography (micro-CT) analyses. The key DXA steps described by the novel method were (1) proper mouse positioning, (2) region of interest (ROI) sizing, and (3) ROI positioning. The precision of the new method was assessed by reliability tests and a 14-week longitudinal study. The bone mineral content (BMC) data from DXA were then compared to the BMC data from micro-CT to assess accuracy. Bone mineral density (BMD) intra-class correlation coefficients of the new method, ranging from 0.743 to 0.945, and Levene's test, which showed significantly lower variance in the data generated by the new method, both verified its consistency. With the new method, a Bland-Altman plot displayed good agreement between DXA BMC and micro-CT BMC for all sites, and the two were strongly correlated at the distal femur and proximal tibia (r=0.846, p<0.01; r=0.879, p<0.01, respectively). The results suggest that the novel method for site-specific analysis of trabecular bone-rich regions in mice via DXA yields more precise, accurate, and repeatable BMD measurements than the conventional method. PMID:26956416
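The Bland-Altman agreement check used above is generic and easy to reproduce; the BMC values below are simulated with an assumed small bias, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical paired measurements: micro-CT as reference, DXA with a small
# fixed bias (0.5) plus measurement noise (arbitrary BMC units)
microct = rng.uniform(10.0, 30.0, 30)
dxa = microct + rng.normal(0.5, 1.0, 30)

# Bland-Altman statistics: mean difference (bias) and 95% limits of agreement
diff = dxa - microct
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)

# correlation, as reported for the distal femur / proximal tibia comparisons
r = np.corrcoef(dxa, microct)[0, 1]
```

Good agreement here means a bias near zero and limits of agreement narrow enough to be clinically (or, for mice, biologically) unimportant.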

  2. High-Precision Surface Inspection: Uncertainty Evaluation within an Accuracy Range of 15 μm with Triangulation-based Laser Line Scanners

    NASA Astrophysics Data System (ADS)

    Dupuis, Jan; Kuhlmann, Heiner

    2014-06-01

    Triangulation-based range sensors, e.g. laser line scanners, are used for high-precision geometrical acquisition of free-form surfaces, for reverse engineering tasks or quality management. In contrast to classical tactile measuring devices, these scanners generate a large number of 3D points in a short period of time and enable the inspection of soft materials. However, for accurate measurements, a number of aspects have to be considered to minimize measurement uncertainties. This study outlines possible sources of uncertainty during the measurement process regarding scanner warm-up, the impact of laser power and exposure time, as well as the scanner's response to areas of discontinuity, e.g. edges. All experiments were performed using a fixed scanner position to avoid effects resulting from imaging geometry. The results show a significant dependence of measurement accuracy on the correct adaptation of exposure time as a function of surface reflectivity and laser power. Additionally, it is illustrated that surface structure as well as edges can cause significant systematic uncertainties.

  3. Contextual sensitivity in scientific reproducibility.

    PubMed

    Van Bavel, Jay J; Mende-Siedlecki, Peter; Brady, William J; Reinero, Diego A

    2016-06-01

    In recent years, scientists have paid increasing attention to reproducibility. For example, the Reproducibility Project, a large-scale replication attempt of 100 studies published in top psychology journals, found that only 39% could be unambiguously reproduced. There is a growing consensus among scientists that the lack of reproducibility in psychology and other fields stems from various methodological factors, including low statistical power, researchers' degrees of freedom, and an emphasis on publishing surprising positive results. However, there is a contentious debate about the extent to which failures to reproduce certain results might also reflect contextual differences (often termed "hidden moderators") between the original research and the replication attempt. Although psychologists have found extensive evidence that contextual factors alter behavior, some have argued that context is unlikely to influence the results of direct replications precisely because these studies use the same methods as those used in the original research. To help resolve this debate, we recoded the 100 original studies from the Reproducibility Project on the extent to which the research topic of each study was contextually sensitive. Results suggested that the contextual sensitivity of the research topic was associated with replication success, even after statistically adjusting for several methodological characteristics (e.g., statistical power, effect size). The association between contextual sensitivity and replication success did not differ across psychological subdisciplines. These results suggest that researchers, replicators, and consumers should be mindful of contextual factors that might influence a psychological process. We offer several guidelines for dealing with contextual sensitivity in reproducibility. PMID:27217556
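At its simplest, the recoding analysis reduces to associating a contextual-sensitivity code with a binary replication outcome. A toy version (the sensitivity codes, the assumed effect size, and the point-biserial correlation as the test statistic are all invented for illustration; the study additionally adjusted for power and effect size):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100

# hypothetical recoding: 1 = contextually insensitive topic, 5 = highly sensitive
sensitivity = rng.integers(1, 6, n)

# assumed effect: replication probability falls with contextual sensitivity
p_replicate = 0.8 - 0.12 * (sensitivity - 1)
replicated = (rng.random(n) < p_replicate).astype(float)

# point-biserial correlation between sensitivity code and replication outcome
r = np.corrcoef(sensitivity, replicated)[0, 1]
```

A negative correlation here is the toy analogue of the reported association between contextual sensitivity and replication failure.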

  4. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    PubMed Central

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-01-01

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landauer, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4% ± 0.6%, range = 0.6%–2.7% for OSL and 0.08% ± 0.06%, range = 0.02%–0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile. PMID:23127052
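CTDI100 itself is just the dose profile integrated over the 100-mm chamber length, normalized by the nominal beam width. A sketch with a hypothetical profile (the Gaussian-plus-scatter shape and its amplitudes are illustrative, not measured data):

```python
import numpy as np

z = np.linspace(-50.0, 50.0, 1001)  # position along the 100-mm chamber, mm
nT = 28.8                           # nominal total beam width, mm

# hypothetical single-rotation dose profile: primary beam plus scatter tails (mGy)
profile = 10.0 * np.exp(-0.5 * (z / (nT / 2.355)) ** 2) \
        + 0.5 * np.exp(-np.abs(z) / 30.0)

def trapezoid(y, x):
    """Trapezoidal integral (written out to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# CTDI100 = (1 / nT) * integral of D(z) over the 100-mm chamber
ctdi100 = trapezoid(profile, z) / nT

# sanity check: an ideal rectangular profile of height D0 and width nT
# integrates to CTDI100 = D0 exactly
rect = np.where(np.abs(z) <= nT / 2 + 1e-9, 10.0, 0.0)
ctdi_rect = trapezoid(rect, z) / nT
```

This is the computation the OSL vendor performs from the reported dose-versus-position data, and it is why the profile capability is singled out in the conclusions.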

  5. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    SciTech Connect

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-11-15

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landauer, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4% ± 0.6%, range = 0.6%-2.7% for OSL and 0.08% ± 0.06%, range = 0.02%-0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile.

  6. Accuracy and precision of porosity estimates based on velocity inversion of surface ground-penetrating radar data: A controlled experiment at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Bradford, J.; Clement, W.

    2006-12-01

    Although rarely acquired, ground penetrating radar (GPR) data acquired in continuous multi-offset geometries can substantially improve our understanding of the subsurface compared to conventional single-offset surveys. This improvement arises because multi-offset data enable full use of the information that the GPR signal can carry. The added information allows us to maximize the material property information extracted from a GPR survey. Of the array of potential multi-offset GPR measurements, traveltime vs offset information enables laterally and vertically continuous electromagnetic (EM) velocity measurements. In turn, the EM velocities provide estimates of water content via petrophysical relationships such as the CRIM or Topp's equations. In fully saturated media the water content is a direct measure of bulk porosity. The Boise Hydrogeophysical Research Site (BHRS) is an experimental wellfield located in a shallow alluvial aquifer near Boise, Idaho. In July 2006 we conducted a controlled 3D multi-offset GPR experiment at the BHRS designed to test the accuracy of state-of-the-art velocity analysis methodologies. We acquired continuous multi-offset GPR data over an approximately 20 × 30 m 3D area. The GPR system was a Sensors and Software pulseEkko Pro multichannel system with 100 MHz antennas and was configured with 4 receivers and a single transmitter. Data were acquired in off-end geometry for a total of 16 offsets with a 1 m offset interval and 1 m near offset. The data were acquired on a 1 m × 1 m grid in four passes, each consisting of a 3 m range of equally spaced offsets. The survey encompassed 13 wells finished to the ~20 m depth of the unconfined aquifer. We established velocity control by acquiring vertical radar profiles (VRPs) in all 13 wells. Preliminary velocity measurements using an established method of reflection tomography were within about 1 percent of local 1D velocity distributions determined from the VRPs. Vertical velocity precision from the
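The velocity-to-porosity step via Topp's equation is compact enough to show directly. The CRIM alternative is omitted, and the 0.06 m/ns velocity below is a typical saturated-sediment value, not a BHRS measurement:

```python
C_LIGHT = 0.3  # speed of light in vacuum, m/ns (rounded)

def topp_water_content(v_m_per_ns):
    """Volumetric water content from EM velocity via Topp's (1980) empirical
    equation; in fully saturated media this equals the bulk porosity."""
    kappa = (C_LIGHT / v_m_per_ns) ** 2  # apparent dielectric constant
    return (-5.3e-2 + 2.92e-2 * kappa
            - 5.5e-4 * kappa ** 2 + 4.3e-6 * kappa ** 3)

# a saturated-sand GPR velocity of 0.06 m/ns gives kappa = 25 and ~0.40 porosity
porosity = topp_water_content(0.06)
```

Because porosity enters through the square of velocity, the ~1 percent velocity accuracy quoted above translates into roughly a few percent relative error in the porosity estimate.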

  7. Development and validation of an automated and marker-free CT-based spatial analysis method (CTSA) for assessment of femoral hip implant migration: in vitro accuracy and precision comparable to that of radiostereometric analysis (RSA).

    PubMed

    Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef

    2016-04-01

    Background and purpose - We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Material and methods - Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Results - Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SDSE): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SDSE: 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. Interpretation - CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice. PMID:26634843
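The core of the CTSA comparison, composing the two pairwise registrations and reading off the relative translation and rotation, can be sketched as follows (the 4x4 transforms here are toy inputs, not registration output):

```python
import numpy as np

def relative_motion(T_stem, T_bone):
    """Express bone motion in the stem frame: given 4x4 rigid transforms that
    register stem and bone between two CT datasets, return the relative
    translation vector and the rotation angle in degrees."""
    M = np.linalg.inv(T_stem) @ T_bone
    t = M[:3, 3]
    # rotation angle from the trace of the 3x3 rotation block
    cos_theta = np.clip((np.trace(M[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return t, np.degrees(np.arccos(cos_theta))

# toy example: stem fixed, bone rotated 0.5 deg about z and shifted 0.1 mm in x
th = np.radians(0.5)
T_bone = np.eye(4)
T_bone[:3, :3] = [[np.cos(th), -np.sin(th), 0.0],
                  [np.sin(th), np.cos(th), 0.0],
                  [0.0, 0.0, 1.0]]
T_bone[:3, 3] = [0.1, 0.0, 0.0]
t, angle = relative_motion(np.eye(4), T_bone)
```

Decomposing the relative transform further into the 3 translation and 3 rotation parameters of an anatomical coordinate system, as the method does, is a change of basis on `M`.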

  8. Development and validation of an automated and marker-free CT-based spatial analysis method (CTSA) for assessment of femoral hip implant migration: in vitro accuracy and precision comparable to that of radiostereometric analysis (RSA)

    PubMed Central

    Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef

    2016-01-01

    Background and purpose — We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Material and methods — Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Results — Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SDSE): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SDSE: 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. Interpretation — CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice. PMID:26634843

  9. The 1998-2000 SHADOZ (Southern Hemisphere ADditional OZonesondes) Tropical Ozone Climatology: Ozonesonde Precision, Accuracy and Station-to-Station Variability

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, Anne M.; McPeters, R. D.; Oltmans, S. J.; Schmidlin, F. J.; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    As part of the SAFARI-2000 campaign, additional launches of ozonesondes were made at Irene, South Africa and at Lusaka, Zambia. These represent campaign augmentations to the SHADOZ database described in this paper. This network of 10 southern hemisphere tropical and subtropical stations, designated the Southern Hemisphere ADditional OZonesondes (SHADOZ) project and established from operational sites, provided over 1000 profiles from ozonesondes and radiosondes during the period 1998-2000. (Since that time, two more stations, one in southern Africa, have joined SHADOZ.) Archived data are available at http://code916.gsfc.nasa.gov/Data-services/shadoz. Uncertainties and accuracies within the SHADOZ ozone data set are evaluated by analyzing: (1) imprecisions in stratospheric ozone profiles and in methods of extrapolating ozone above balloon burst; (2) comparisons of column-integrated total ozone from sondes with total ozone from the Earth-Probe/TOMS (Total Ozone Mapping Spectrometer) satellite and ground-based instruments; (3) possible biases from station to station due to variations in ozonesonde characteristics. The key results are: (1) Ozonesonde precision is 5%; (2) Integrated total ozone column amounts from the sondes are in good agreement (2-10%) with independent measurements from ground-based instruments at five SHADOZ sites and with overpass measurements from the TOMS satellite (version 7 data). (3) Systematic variations in TOMS-sonde offsets and in ground-based-sonde offsets from station to station reflect biases in sonde technique as well as in satellite retrieval. Discrepancies are present in both stratospheric and tropospheric ozone. (4) There is evidence for a zonal wave-one pattern in total and tropospheric ozone, but not in stratospheric ozone.

  10. Analysis of the accuracy and precision of the McMaster method in detection of the eggs of Toxocara and Trichuris species (Nematoda) in dog faeces.

    PubMed

    Kochanowski, Maciej; Dabrowska, Joanna; Karamon, Jacek; Cencek, Tomasz; Osiński, Zbigniew

    2013-07-01

    The aim of this study was to determine the accuracy and precision of the McMaster method with Raynaud's modification in the detection of the eggs of the nematodes Toxocara canis (Werner, 1782) and Trichuris ovis (Abildgaard, 1795) in faeces of dogs. Four variants of the McMaster method were used for counting: in one grid, two grids, the whole McMaster chamber and flotation in the tube. One hundred sixty samples were prepared from dog faeces (20 repetitions for each egg quantity) containing 15, 25, 50, 100, 150, 200, 250 and 300 eggs of T. canis and T. ovis in 1 g of faeces. To compare the influence of the kind of faeces on the results, samples of dog faeces were enriched at the same levels with the eggs of another nematode, Ascaris suum Goeze, 1782. In addition, 160 samples of pig faeces were prepared and enriched only with A. suum eggs in the same way. The highest limit of detection (the lowest level of eggs that was detected in at least 50% of repetitions) in all McMaster chamber variants was obtained for T. canis eggs (25-250 eggs/g faeces). In the variant with flotation in the tube, the highest limit of detection was obtained for T. ovis eggs (100 eggs/g). The best results for the limit of detection, sensitivity and the lowest coefficients of variation were obtained with the use of the whole McMaster chamber variant. There was no significant impact of the properties of faeces on the obtained results. Multiplication factors for the whole chamber were calculated on the basis of the transformed equation of the regression line, illustrating the relationship between the number of detected eggs and that of the eggs added to the sample. Multiplication factors calculated for T. canis and T. ovis eggs were higher than those expected using the McMaster method with Raynaud's modification. PMID:23951934
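The multiplication-factor idea, regressing detected counts against spiked egg levels and inverting the slope, can be illustrated with simulated counts. The 60% recovery and the Poisson counting model are assumptions for the sketch, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(4)

# spiked egg levels (EPG) as in the study: 8 levels x 20 repetitions
added = np.repeat([15, 25, 50, 100, 150, 200, 250, 300], 20)

# hypothetical counting process: only a fraction of eggs reaches the chamber,
# and raw chamber counts are Poisson-distributed around the expected value
# (50 is the nominal McMaster dilution factor)
recovery = 0.6
detected = rng.poisson(recovery * added / 50.0)

# regress detected counts on added EPG, then invert the slope to get an
# empirical multiplication factor (counts * factor ~ EPG)
slope = np.polyfit(added, detected.astype(float), 1)[0]
empirical_factor = 1.0 / slope
```

Whenever recovery is below 100%, the empirical factor exceeds the nominal x50, which is the sketch analogue of the study's finding that calculated factors were higher than expected.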

  11. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y.-L.; Szidat, S.; Czimczik, C. I.

    2015-09-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91 % of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our setup, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our setup were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively
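Once the modern and fossil blank masses are characterized, correcting a measured fraction modern (Fm) is a mass balance. A sketch using the blank values quoted above (0.8 and 0.67 μg C); the 50 μg C test sample is hypothetical:

```python
def blank_correct(fm_meas, m_meas, m_modern=0.8, m_fossil=0.67):
    """Mass-balance removal of extraneous carbon: a modern blank (Fm = 1) and
    a fossil blank (Fm = 0); masses in micrograms C. Blank masses follow the
    abstract and are setup-specific."""
    m_true = m_meas - m_modern - m_fossil
    # fossil blank carries Fm = 0, so only the modern blank enters the numerator
    return (fm_meas * m_meas - 1.0 * m_modern) / m_true

# a 50 ug C fossil-carbon sample (true Fm = 0) measured with these blanks would
# read Fm = 0.8 / 50 = 0.016; the correction recovers a value near zero
fm = blank_correct(0.016, 50.0)
```

The same arithmetic shows why small samples are hard: as `m_meas` shrinks toward the blank masses, the correction (and its uncertainty) blows up.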

  12. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y. L.; Szidat, S.; Czimczik, C. I.

    2015-04-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91% of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our set-up, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our set-up were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively

  13. The effects of temporal-precision and time-minimization constraints on the spatial and temporal accuracy of aimed hand movements.

    PubMed

    Carlton, L G

    1994-03-01

    Discrete aimed hand movements, made by subjects given temporal-accuracy and time-minimization task instructions, were compared. Movements in the temporal-accuracy task were made to a point target with a goal movement time of 400 ms. A circular target was then constructed to incorporate the measured spatial errors from the temporal-accuracy task, and subjects attempted to contact the target with a minimum movement time and without missing the circular target (time-minimization task instructions). This procedure resulted in equal movement amplitude and approximately equal spatial accuracy for the two task instructions. Movements under the time-minimization instructions were completed rapidly (M = 307 ms) without target misses, and tended to be made up of two submovements. In contrast, movements under temporal-accuracy instructions were made more slowly (M = 397 ms), matching the goal movement time, and were typically characterized by a single submovement. These data support the hypothesis that movement times, at a fixed movement amplitude versus target width ratio, decrease as the number of submovements increases, and that movements produced under temporal-accuracy and time-minimization instructions have different control characteristics. These control differences are related to the linear and logarithmic speed-accuracy relations observed for temporal-accuracy and time-minimization tasks, respectively. PMID:15757833
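The two speed-accuracy regimes contrasted above follow different functional forms, which a schematic pairing makes explicit (the coefficients a and b below are arbitrary placeholders, not fitted values from the study):

```python
import math

def mt_time_minimization(a, b, amplitude, width):
    """Logarithmic (Fitts-type) relation for time-minimization tasks:
    movement time grows with the index of difficulty log2(2A/W)."""
    return a + b * math.log2(2.0 * amplitude / width)

def temporal_error(a, b, amplitude, movement_time):
    """Linear relation for temporal-accuracy tasks: timing error grows
    linearly with average movement velocity (A / MT)."""
    return a + b * (amplitude / movement_time)
```

In the logarithmic regime, halving the target width adds a constant increment to movement time; in the linear regime, moving faster over the same amplitude degrades timing accuracy proportionally.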

  14. Detecting declines in the abundance of a bull trout (Salvelinus confluentus) population: Understanding the accuracy, precision, and costs of our efforts

    USGS Publications Warehouse

    Al-Chokhachy, R.; Budy, P.; Conner, M.

    2009-01-01

    Using empirical field data for bull trout (Salvelinus confluentus), we evaluated the trade-off between power and sampling effort-cost using Monte Carlo simulations of commonly collected mark-recapture-resight and count data, and we estimated the power to detect changes in abundance across different time intervals. We also evaluated the effects of monitoring different components of a population and stratification methods on the precision of each method. Our results illustrate substantial variability in the relative precision, cost, and information gained from each approach. While grouping estimates by age or stage class substantially increased the precision of estimates, spatial stratification of sampling units resulted in limited increases in precision. Although mark-resight methods allowed for estimates of abundance versus indices of abundance, our results suggest snorkel surveys may be a more affordable monitoring approach across large spatial scales. Detecting a 25% decline in abundance after 5 years was not possible, regardless of technique (power = 0.80), without high sampling effort (48% of study site). Detecting a 25% decline was possible after 15 years, but still required high sampling efforts. Our results suggest detecting moderate changes in abundance of freshwater salmonids requires considerable resource and temporal commitments and highlight the difficulties of using abundance measures for monitoring bull trout populations.
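The Monte Carlo power-analysis pattern behind these conclusions is easy to reproduce in outline. Everything below (initial abundance, survey CV, and the trend test on log abundance) is a simplified stand-in for the study's mark-recapture-resight and count models:

```python
import numpy as np

rng = np.random.default_rng(5)

def power_to_detect(decline, years, cv, n0=500, nsim=500, z_crit=-1.645):
    """Monte Carlo power to detect a total proportional `decline` over `years`
    from annual abundance estimates with lognormal sampling error (CV),
    using a one-sided trend test on log abundance."""
    r = np.log(1.0 - decline) / years      # annual instantaneous rate
    t = np.arange(years + 1.0)
    sxx = np.sum((t - t.mean()) ** 2)
    hits = 0
    for _ in range(nsim):
        y = np.log(n0 * np.exp(r * t) * rng.lognormal(0.0, cv, t.size))
        b, a = np.polyfit(t, y, 1)
        resid = y - (a + b * t)
        se = np.sqrt(resid @ resid / (t.size - 2) / sxx)
        hits += (b / se) < z_crit          # normal approximation to the t-test
    return hits / nsim

low = power_to_detect(0.25, 5, cv=0.30)    # short series, noisy surveys
high = power_to_detect(0.25, 15, cv=0.10)  # long series, intensive sampling
```

The qualitative behavior matches the abstract: a 25% decline is essentially undetectable over 5 years with noisy estimates, and becomes detectable over 15 years only when sampling effort drives the CV down.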

  15. Method and system using power modulation for maskless vapor deposition of spatially graded thin film and multilayer coatings with atomic-level precision and accuracy

    DOEpatents

    Montcalm, Claude; Folta, James Allen; Tan, Swie-In; Reiss, Ira

    2002-07-30

    A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom graded thickness) on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with time-varying flux distribution. In preferred embodiments, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.

  16. Precision Fabrication of a Large-Area Sinusoidal Surface Using a Fast-Tool-Servo Technique ─Improvement of Local Fabrication Accuracy

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Tano, Makoto; Araki, Takeshi; Kiyono, Satoshi

    This paper describes a diamond turning fabrication system for a sinusoidal grid surface. The wavelength and amplitude of the sinusoidal wave in each direction are 100 µm and 100 nm, respectively. The fabrication system, which is based on a fast-tool-servo (FTS), has the ability to generate the angle grid surface over an area of φ150 mm. This paper focuses on the improvement of the local fabrication accuracy. The areas considered are each approximately 1 × 1 mm, and can be imaged by an interference microscope. Specific fabrication errors of the manufacturing process, caused by the round nose geometry of the diamond cutting tool and the data digitization, are successfully identified by discrete Fourier transform of the microscope images. Compensation processes are carried out to reduce the errors. As a result, the fabrication errors in local areas of the angle grid surface are reduced to about one tenth.
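The target geometry and the DFT-based error identification can both be sketched in a few lines; the 1-µm sampling and the chosen profile row are arbitrary choices for the sketch:

```python
import numpy as np

lam, amp = 100e-6, 100e-9  # 100 um wavelength, 100 nm amplitude
dx = 1e-6                  # 1 um grid over a 1 mm x 1 mm local area
x = np.arange(1000) * dx
X, Y = np.meshgrid(x, x)

# ideal sinusoidal grid surface: z = A sin(2*pi*x/lam) sin(2*pi*y/lam)
Z = amp * np.sin(2 * np.pi * X / lam) * np.sin(2 * np.pi * Y / lam)

# dominant spatial frequency of one profile row via the DFT, analogous to how
# tool-nose and digitization error components are isolated in frequency space
row = Z[25]                                # y = 25 um, where sin(2*pi*y/lam) = 1
spec = np.abs(np.fft.rfft(row))
freqs = np.fft.rfftfreq(row.size, dx)
peak_freq = freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin
```

Fabrication errors at wavelengths other than `lam` would show up as additional peaks in `spec`, which is what makes them separable from the intended sinusoid.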

  17. Preliminary assessment of the accuracy and precision of TOPEX/POSEIDON altimeter data with respect to the large-scale ocean circulation

    NASA Technical Reports Server (NTRS)

    Wunsch, Carl; Stammer, Detlef

    1994-01-01

    TOPEX/POSEIDON sea surface height measurements are examined for quantitative consistency with known elements of the oceanic general circulation and its variability. Project-provided corrections were accepted but are tested as part of the overall results. The ocean was treated as static over each 10-day repeat cycle, and maps of the absolute sea surface topography were constructed from simple averages in 2 deg x 2 deg bins. A hybrid geoid model, formed from a combination of the recent Joint Gravity Model-2 and the project-provided Ohio State University geoid, was used to estimate the absolute topography in each 10-day period. Results are examined in terms of the annual average, seasonal average, seasonal variations, and variations near the repeat period. Conclusions are as follows: the orbit error is now difficult to observe, having been reduced to a level at or below that of other error sources; the geoid dominates the error budget of the estimates of the absolute topography; the estimated seasonal cycle is consistent with prior estimates; shorter-period variability is dominated on the largest scales by an oscillation near 50 days in spherical harmonics Y^m_1(theta, lambda) with an amplitude near 10 cm, close to the simplest alias of the M_2 tide. This spectral peak and others visible in the periodograms support the hypothesis that the largest remaining time-dependent errors lie in the tidal models. Though the discrepancies attributed to the geoid are within the formal uncertainties of the geoid estimates, their removal is urgent for circulation studies. Current gross accuracy of the TOPEX/POSEIDON mission is in the range of 5-10 cm, distributed over a broad band of frequencies and wavenumbers. In finite bands, accuracies approach the 1-cm level, and expected improvements arising from extended mission duration should reduce these numbers by nearly an order of magnitude.

  18. Assessing the Accuracy and Precision of Inorganic Geochemical Data Produced through Flux Fusion and Acid Digestions: Multiple (60+) Comprehensive Analyses of BHVO-2 and the Development of Improved "Accepted" Values

    NASA Astrophysics Data System (ADS)

    Ireland, T. J.; Scudder, R.; Dunlea, A. G.; Anderson, C. H.; Murray, R. W.

    2014-12-01

    The use of geological standard reference materials (SRMs) to assess both the accuracy and the reproducibility of geochemical data is a vital consideration in determining the major and trace element abundances of geologic, oceanographic, and environmental samples. Calibration curves commonly are generated that are predicated on accurate analyses of these SRMs. As a means to verify the robustness of these calibration curves, an SRM can also be run as an unknown (i.e., not included as a data point in the calibration). The experimentally derived composition of the SRM can thus be compared to the certified (or otherwise accepted) value. This comparison gives a direct measure of the accuracy of the method used. Similarly, if the same SRM is analyzed as an unknown over multiple analytical sessions, the external reproducibility of the method can be evaluated. Two common bulk digestion methods used in geochemical analysis are flux fusion and acid digestion. The flux fusion technique is excellent at ensuring complete digestion of a variety of sample types, is quick, and does not involve much use of hazardous acids. However, this technique is hampered by a high amount of total dissolved solids and may be accompanied by an increased analytical blank for certain trace elements. On the other hand, acid digestion (using a cocktail of concentrated nitric, hydrochloric and hydrofluoric acids) provides an exceptionally clean digestion with very low analytical blanks. However, this technique results in a loss of Si from the system and may compromise results for a few other elements (e.g., Ge). Our lab uses flux fusion for the determination of major elements and a few key trace elements by ICP-ES, while acid digestion is used for Ti and trace element analyses by ICP-MS. Here we present major and trace element data for BHVO-2, a frequently used SRM derived from a Hawaiian basalt, gathered over a period of more than two years (30+ analyses by each technique). We show that both digestion
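
    The accuracy and external-reproducibility arithmetic described above can be sketched as follows; the accepted value and replicate results are invented for illustration, not actual BHVO-2 analyses:

```python
import statistics

# Minimal sketch of the QA arithmetic: run an SRM as an unknown over many
# sessions, then compare replicate results to the accepted value.
accepted = 7.23                                      # accepted value (wt%), hypothetical
replicates = [7.18, 7.25, 7.31, 7.20, 7.27, 7.22]    # SRM run as an unknown

mean = statistics.mean(replicates)
bias_pct = 100.0 * (mean - accepted) / accepted      # accuracy (percent bias)
rsd_pct = 100.0 * statistics.stdev(replicates) / mean  # external reproducibility (RSD)
```

    The percent bias quantifies accuracy against the accepted value, while the relative standard deviation across sessions quantifies external reproducibility.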

  19. Reproducing Kernels in Harmonic Spaces and Their Numerical Implementation

    NASA Astrophysics Data System (ADS)

    Nesvadba, Otakar

    2010-05-01

    In harmonic analysis, such as the modelling of the Earth's gravity field, the importance of Hilbert spaces of harmonic functions with a reproducing kernel is often discussed. Moreover, in the case of an unbounded domain given by the exterior of a sphere or an ellipsoid, the reproducing kernel K(x,y) can be expressed analytically by means of closed formulas or by infinite series. Nevertheless, the straightforward numerical implementation of these formulas leads to dozens of problems, mostly connected with floating-point arithmetic and number representation. The contribution discusses numerical instabilities in K(x,y) and gradK(x,y) that can be overcome by employing elementary functions, in particular expm1 and log1p. The suggested evaluation scheme for reproducing kernels offers uniform formulas within the whole solution domain as well as superior speed and near-perfect accuracy (10^-16 for IEC 60559 double-precision numbers) when compared with the straightforward formulas. The formulas can be easily implemented on the majority of computer platforms, especially when the C standard library ISO/IEC 9899:1999 is available.
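
    The role of expm1 and log1p can be seen in a minimal example: for small arguments, the naive expression exp(x) - 1 loses most of its significant digits to cancellation, while expm1 evaluates the same quantity to full double precision. This is the kind of instability the abstract refers to:

```python
import math

# For small x, exp(x) is ~1, so exp(x) - 1 subtracts two nearly equal numbers
# and only a few significant digits survive. math.expm1 computes exp(x) - 1
# directly, without the cancellation (log1p plays the same role for log(1+x)).
x = 1e-12
naive = math.exp(x) - 1.0   # cancellation-contaminated
stable = math.expm1(x)      # accurate to ~1e-16 relative error

rel_err = abs(naive - stable) / stable
```

    The same substitution, applied inside closed-form kernel expressions near the boundary of the solution domain, is what restores uniform accuracy there.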

  20. Leaf Vein Length per Unit Area Is Not Intrinsically Dependent on Image Magnification: Avoiding Measurement Artifacts for Accuracy and Precision

    PubMed Central

    Sack, Lawren; Caringella, Marissa; Scoffoni, Christine; Mason, Chase; Rawls, Michael; Markesteijn, Lars; Poorter, Lourens

    2014-01-01

    Leaf vein length per unit leaf area (VLA; also known as vein density) is an important determinant of water and sugar transport, photosynthetic function, and biomechanical support. A range of software methods are in use to visualize and measure vein systems in cleared leaf images; typically, users locate veins by digital tracing, but recent articles introduced software by which users can locate veins using thresholding (i.e. based on the contrasting of veins in the image). Based on the use of this method, a recent study argued against the existence of a fixed VLA value for a given leaf, proposing instead that VLA increases with the magnification of the image due to intrinsic properties of the vein system, and recommended that future measurements use a common, low image magnification for measurements. We tested these claims with new measurements using the software LEAFGUI in comparison with digital tracing using ImageJ software. We found that the apparent increase of VLA with magnification was an artifact of (1) using low-quality and low-magnification images and (2) errors in the algorithms of LEAFGUI. Given the use of images of sufficient magnification and quality, and analysis with error-free software, the VLA can be measured precisely and accurately. These findings point to important principles for improving the quantity and quality of important information gathered from leaf vein systems. PMID:25096977

  1. High-accuracy, high-precision, high-resolution, continuous monitoring of urban greenhouse gas emissions? Results to date from INFLUX

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.

    2015-12-01

    The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower- and aircraft-based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.

  2. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset 1998-2000 in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.

    2003-01-01

    A network of 12 tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature, and relative humidity measurements. The archived data are available at http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station to station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced by 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment), in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing, and instrument type (manufacturer) are taken into account.

  3. The effect of dilution and the use of a post-extraction nucleic acid purification column on the accuracy, precision, and inhibition of environmental DNA samples

    USGS Publications Warehouse

    Mckee, Anna M.; Spear, Stephen F.; Pierson, Todd W.

    2015-01-01

    Isolation of environmental DNA (eDNA) is an increasingly common method for detecting presence and assessing relative abundance of rare or elusive species in aquatic systems via the isolation of DNA from environmental samples and the amplification of species-specific sequences using quantitative PCR (qPCR). Co-extracted substances that inhibit qPCR can lead to inaccurate results and subsequent misinterpretation of a species' status in the tested system. We tested three treatments (5-fold and 10-fold dilutions, and spin-column purification) for reducing qPCR inhibition in 21 partially and fully inhibited eDNA samples collected from coastal plain wetlands and mountain headwater streams in the southeastern USA. All treatments reduced the concentration of DNA in the samples; however, column-purified samples retained the greatest sensitivity. For stream samples, all three treatments effectively reduced qPCR inhibition. For wetland samples, the 5-fold dilution was less effective than the other treatments. Quantitative PCR results for column-purified samples were more precise than the 5-fold and 10-fold dilutions by 2.2× and 3.7×, respectively. Column-purified samples consistently underestimated qPCR-based DNA concentrations by approximately 25%, whereas the directional bias in qPCR-based DNA concentration estimates differed between stream and wetland samples for both dilution treatments. While the directional bias of qPCR-based DNA concentration estimates differed among treatments and locations, the magnitude of inaccuracy did not. Our results suggest that 10-fold dilution and column purification effectively reduce qPCR inhibition in mountain headwater stream and coastal plain wetland eDNA samples, and if applied to all samples in a study, column purification may provide the most accurate relative qPCR-based DNA concentration estimates while retaining the greatest assay sensitivity.

  4. Re-Os geochronology of the El Salvador porphyry Cu-Mo deposit, Chile: Tracking analytical improvements in accuracy and precision over the past decade

    NASA Astrophysics Data System (ADS)

    Zimmerman, Aaron; Stein, Holly J.; Morgan, John W.; Markey, Richard J.; Watanabe, Yasushi

    2014-04-01

    deposit geochronology. The timing and duration of mineralization from Re-Os dating of ore minerals is more precise than estimates from previously reported 40Ar/39Ar and K-Ar ages on alteration minerals. The Re-Os results suggest that the mineralization is temporally distinct from pre-mineral rhyolite porphyry (42.63 ± 0.28 Ma) and is immediately prior to or overlapping with post-mineral latite dike emplacement (41.16 ± 0.48 Ma). Based on the Re-Os and other geochronologic data, the Middle Eocene intrusive activity in the El Salvador district is divided into three pulses: (1) 44-42.5 Ma for weakly mineralized porphyry intrusions, (2) 41.8-41.2 Ma for intensely mineralized porphyry intrusions, and (3) ∼41 Ma for small latite dike intrusions without major porphyry stocks. The orientation of igneous dikes and porphyry stocks changed from NNE-SSW during the first pulse to WNW-ESE for the second and third pulses. This implies that the WNW-ESE striking stress changed from σ3 (minimum principal compressive stress) during the first pulse to σHmax (maximum principal compressional stress in a horizontal plane) during the second and third pulses. Therefore, the focus of intense porphyry Cu-Mo mineralization occurred during a transient geodynamic reconfiguration just before extinction of major intrusive activity in the region.

  5. Reproducible research in palaeomagnetism

    NASA Astrophysics Data System (ADS)

    Lurcock, Pontus; Florindo, Fabio

    2015-04-01

    The reproducibility of research findings is attracting increasing attention across all scientific disciplines. In palaeomagnetism as elsewhere, computer-based analysis techniques are becoming more commonplace, complex, and diverse. Analyses can often be difficult to reproduce from scratch, both for the original researchers and for others seeking to build on the work. We present a palaeomagnetic plotting and analysis program designed to make reproducibility easier. Part of the problem is the divide between interactive and scripted (batch) analysis programs. An interactive desktop program with a graphical interface is a powerful tool for exploring data and iteratively refining analyses, but usually cannot operate without human interaction. This makes it impossible to re-run an analysis automatically, or to integrate it into a larger automated scientific workflow - for example, a script to generate figures and tables for a paper. In some cases the parameters of the analysis process itself are not saved explicitly, making it hard to repeat or improve the analysis even with human interaction. Conversely, non-interactive batch tools can be controlled by pre-written scripts and configuration files, allowing an analysis to be 'replayed' automatically from the raw data. However, this advantage comes at the expense of exploratory capability: iteratively improving an analysis entails a time-consuming cycle of editing scripts, running them, and viewing the output. Batch tools also tend to require more computer expertise from their users. PuffinPlot is a palaeomagnetic plotting and analysis program which aims to bridge this gap. First released in 2012, it offers both an interactive, user-friendly desktop interface and a batch scripting interface, both making use of the same core library of palaeomagnetic functions. We present new improvements to the program that help to integrate the interactive and batch approaches, allowing an analysis to be interactively explored and refined

  6. Magnetogastrography (MGG) Reproducibility Assessments

    NASA Astrophysics Data System (ADS)

    de la Roca-Chiapas, J. M.; Córdova, T.; Hernández, E.; Solorio, S.; Solís Ortiz, S.; Sosa, M.

    2006-09-01

    Seven healthy subjects underwent a magnetic pulse of 32 mT for 17 ms, seven times in 90 minutes. The procedure was repeated one and two weeks later. Assessments of the gastric emptying were carried out for each one of the measurements and a statistical analysis of ANOVA was performed for every group of data. The gastric emptying time was 19.22 ± 5 min. Reproducibility estimation was above 85%. Therefore, magnetogastrography seems to be an excellent technique to be implemented in routine clinical trials.
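
    The repeated-measures check described above can be sketched as a one-way ANOVA F ratio across sessions; the gastric emptying times below are invented for illustration, not the study's data:

```python
import statistics

# Hedged sketch: emptying times (min) for the same seven subjects across three
# sessions, with the one-way ANOVA F ratio computed by hand. A small F ratio
# (between-session variance << within-session variance) indicates the
# measurement is reproducible from week to week.
sessions = [
    [18.5, 20.1, 19.0, 17.8, 21.2, 19.5, 18.9],   # week 0
    [19.0, 19.8, 18.6, 18.2, 20.9, 19.7, 19.1],   # week 1
    [18.8, 20.4, 19.2, 18.0, 21.0, 19.3, 18.7],   # week 2
]

grand = statistics.mean(v for s in sessions for v in s)
k, n = len(sessions), len(sessions[0])
ss_between = n * sum((statistics.mean(s) - grand) ** 2 for s in sessions)
ss_within = sum((v - statistics.mean(s)) ** 2 for s in sessions for v in s)
f_ratio = (ss_between / (k - 1)) / (ss_within / (k * n - k))
```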

  7. Opening Reproducible Research

    NASA Astrophysics Data System (ADS)

    Nüst, Daniel; Konkol, Markus; Pebesma, Edzer; Kray, Christian; Klötgen, Stephanie; Schutzeichel, Marc; Lorenz, Jörg; Przibytzin, Holger; Kussmann, Dirk

    2016-04-01

    Open access is not only a form of publishing such that research papers become available to the general public free of charge; it also refers to a trend in science toward doing research more openly and transparently. When science transforms to open access, we mean not only access to papers, research data being collected, or data being generated, but also access to the data used and the procedures carried out in the research paper. Increasingly, scientific results are generated by numerical manipulation of data that were already collected, and may involve simulation experiments that are completely carried out computationally. Reproducibility of research findings, the ability to repeat experimental procedures and confirm previously found results, is at the heart of the scientific method (Pebesma, Nüst and Bivand, 2012). As opposed to the collection of experimental data in labs or nature, computational experiments lend themselves very well to reproduction. Some of the reasons why scientists do not publish data and computational procedures that allow reproduction will be hard to change, e.g. privacy concerns in the data, fear of embarrassment or of losing a competitive advantage. Other reasons, however, involve technical aspects, and include the lack of standard procedures to publish such information and the lack of benefits after publishing them. We aim to resolve these two technical aspects. We propose a system that supports the evolution of scientific publications from static papers into dynamic, executable research documents. The DFG-funded experimental project Opening Reproducible Research (ORR) aims for the main aspects of open access, by improving the exchange of, by facilitating productive access to, and by simplifying reuse of research results that are published over the Internet. Central to the project is a new form for creating and providing research results, the executable research compendium (ERC), which not only enables third parties to

  8. Precise, reproducible nano-domain engineering in lithium niobate crystals

    NASA Astrophysics Data System (ADS)

    Boes, Andreas; Sivan, Vijay; Ren, Guanghui; Yudistira, Didit; Mailis, Sakellaris; Soergel, Elisabeth; Mitchell, Arnan

    2015-07-01

    We present a technique for domain engineering the surface of lithium niobate crystals with features as small as 100 nm. A film of chromium (Cr) is deposited on the lithium niobate surface and patterned using electron beam lithography and lift-off and then irradiated with a wide diameter beam of intense visible laser light. The regions patterned with chromium are domain inverted while the uncoated regions are not affected by the irradiation. With the ability to realize nanoscale surface domains, this technique could offer an avenue for fabrication of nano-photonic and phononic devices.

  9. Precise, reproducible nano-domain engineering in lithium niobate crystals

    SciTech Connect

    Boes, Andreas; Sivan, Vijay; Ren, Guanghui; Yudistira, Didit; Mitchell, Arnan; Mailis, Sakellaris; Soergel, Elisabeth

    2015-07-13

    We present a technique for domain engineering the surface of lithium niobate crystals with features as small as 100 nm. A film of chromium (Cr) is deposited on the lithium niobate surface and patterned using electron beam lithography and lift-off and then irradiated with a wide diameter beam of intense visible laser light. The regions patterned with chromium are domain inverted while the uncoated regions are not affected by the irradiation. With the ability to realize nanoscale surface domains, this technique could offer an avenue for fabrication of nano-photonic and phononic devices.

  10. Reproducing in cities.

    PubMed

    Mace, Ruth

    2008-02-01

    Reproducing in cities has always been costly, leading to lower fertility (that is, lower birth rates) in urban than in rural areas. Historically, although cities provided job opportunities, initially residents incurred the penalty of higher infant mortality, but as mortality rates fell at the end of the 19th century, European birth rates began to plummet. Fertility decline in Africa only started recently and has been dramatic in some cities. Here it is argued that both historical and evolutionary demographers are interpreting fertility declines across the globe in terms of the relative costs of child rearing, which increase to allow children to outcompete their peers. Now largely free from the fear of early death, postindustrial societies may create an environment that generates runaway parental investment, which will continue to drive fertility ever lower. PMID:18258904

  11. Reproducible Experiment Platform

    NASA Astrophysics Data System (ADS)

    Likhomanenko, Tatiana; Rogozhnikov, Alex; Baranov, Alexander; Khairullin, Egor; Ustyuzhanin, Andrey

    2015-12-01

    Data analysis in fundamental sciences nowadays is an essential process that pushes frontiers of our knowledge and leads to new discoveries. At the same time we can see that complexity of those analyses increases fast due to a) enormous volumes of datasets being analyzed, b) variety of techniques and algorithms one has to check inside a single analysis, c) distributed nature of research teams that requires special communication media for knowledge and information exchange between individual researchers. There is a lot of resemblance between techniques and problems arising in the areas of industrial information retrieval and particle physics. To address those problems we propose Reproducible Experiment Platform (REP), a software infrastructure to support a collaborative ecosystem for computational science. It is a Python based solution for research teams that allows running computational experiments on shared datasets, obtaining repeatable results, and consistent comparisons of the obtained results. We present some key features of REP based on case studies which include trigger optimization and physics analysis studies at the LHCb experiment.

  12. Relative accuracy evaluation.

    PubMed

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. In order to address the problem that the accuracy of a whole data set may be low while that of a useful part is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a measure nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which show the result's relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
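
    The precision/recall view of relative accuracy can be illustrated by comparing a query result computed over dirty data against the ground-truth answer set; the rows below are hypothetical, not from the paper's framework:

```python
# Illustrative sketch: "relative accuracy" of a query result as precision and
# recall against a reference (ground-truth) answer set.
truth = {("alice", 30), ("bob", 25), ("carol", 41)}   # correct query answer
result = {("alice", 30), ("bob", 27), ("carol", 41)}  # answer from dirty data

tp = len(truth & result)
precision = tp / len(result)   # fraction of returned rows that are correct
recall = tp / len(truth)       # fraction of correct rows that were returned
```

    Even when overall table accuracy is low, a query touching mostly clean rows can score high on both measures, which is the motivation for evaluating accuracy per query rather than per data set.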

  13. Relative Accuracy Evaluation

    PubMed Central

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. In order to address the problem that the accuracy of a whole data set may be low while that of a useful part is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a measure nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which show the result's relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752

  14. The presentation of plastic surgery visual data from 1816 to 1916: The evolution of reproducible results.

    PubMed

    Freshwater, M Felix

    2016-09-01

    All scientific data should be presented with sufficient accuracy and precision so that they can be both analyzed properly and reproduced. Visual data are the foundation upon which plastic surgeons advance knowledge. We use visual data to achieve reproducible results by discerning details of procedures and differences between pre- and post-surgery images. This review highlights how the presentation of visual data evolved from 1816, when Joseph Carpue published his book on nasal reconstruction to 1916, when Captain Harold Gillies began to treat over 2000 casualties from the Battle of the Somme. It shows the frailties of human nature that led some authors such as Carl von Graefe, Joseph Pancoast and Thomas Mutter to record inaccurate methods or results that could not be reproduced, and what measures other authors such as Eduard Zeis, Johann Dieffenbach, and Gurdon Buck took to affirm the accuracy of their results. It shows how photography gradually supplanted illustration as a reference standard. Finally, it shows the efforts that some authors and originators took to authenticate and preserve their visual data in what can be considered the forerunners of clinical registries. PMID:27453409

  15. Precision electron polarimetry

    SciTech Connect

    Chudakov, Eugene A.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require sub-percent accuracy in electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at ~300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  16. Precision electron polarimetry

    SciTech Connect

    Chudakov, E.

    2013-11-07

    A new generation of precise Parity-Violating experiments will require sub-percent accuracy in electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  17. SU-E-P-54: Evaluation of the Accuracy and Precision of IGPS-O X-Ray Image-Guided Positioning System by Comparison with On-Board Imager Cone-Beam Computed Tomography

    SciTech Connect

    Zhang, D; Wang, W; Jiang, B; Fu, D

    2015-06-15

    Purpose: The purpose of this study is to assess the positioning accuracy and precision of the IGPS-O system, a novel radiographic kilovoltage x-ray image-guided positioning system developed for clinical IGRT applications. Methods: The IGPS-O x-ray image-guided positioning system consists of two oblique sets of radiographic kilovoltage x-ray projection and imaging devices mounted on the floor and ceiling of the treatment room. This system determines the positioning error, in the form of three translations and three rotations, by registering two x-ray images acquired online with the planning CT image. An anthropomorphic head phantom and an anthropomorphic thorax phantom were used for this study. Each phantom was set up on the treatment table in the correct position and with various "planned" setup errors. Both the IGPS-O x-ray image-guided positioning system and the commercial On-Board Imager cone-beam computed tomography (OBI CBCT) were used to obtain the setup errors of the phantom. Differences in the results between the two image-guided positioning systems were computed and analyzed. Results: The setup errors measured by the IGPS-O system and the OBI CBCT system showed general agreement; the means and standard errors of the discrepancies between the two systems in the left-right, anterior-posterior, and superior-inferior directions were -0.13±0.09 mm, 0.03±0.25 mm, and 0.04±0.31 mm, respectively. The maximum difference was only 0.51 mm in all directions, and the angular discrepancy between the two systems was 0.3±0.5°. Conclusion: The spatial and angular discrepancies between the IGPS-O system and OBI CBCT for setup error correction were minimal, and there is general agreement between the two positioning systems. The IGPS-O x-ray image-guided positioning system can achieve accuracy as good as CBCT and can be used in clinical IGRT applications.
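
    The per-axis comparison reported above reduces to summarizing paired discrepancies between the two systems; a sketch with invented readings:

```python
import statistics

# Sketch of the comparison arithmetic: paired setup-error readings (mm) for
# one axis from two image-guidance systems, summarized as mean +/- SD of the
# per-measurement discrepancies. All readings are invented for illustration.
igps = [0.5, -1.2, 0.8, 2.1, -0.4, 1.0]   # left-right axis, system A
cbct = [0.6, -1.1, 0.9, 2.3, -0.2, 1.2]   # left-right axis, system B

diffs = [a - b for a, b in zip(igps, cbct)]
mean_diff = statistics.mean(diffs)        # systematic offset between systems
sd_diff = statistics.stdev(diffs)         # scatter of the disagreement
```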

  18. Application of AFINCH as a Tool for Evaluating the Effects of Streamflow-Gaging-Network Size and Composition on the Accuracy and Precision of Streamflow Estimates at Ungaged Locations in the Southeast Lake Michigan Hydrologic Subregion

    USGS Publications Warehouse

    Koltun, G.F.; Holtschlag, David J.

    2010-01-01

    Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas and resulting flow values are accumulated to compute streamflows in stream reaches which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period which spanned water years 1971-2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages). Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the size
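
    The subsampling scheme can be sketched as follows: for each censoring level, draw repeated random subsets of the gage network, each of which would then drive a separate AFINCH run in the real study. Gage identifiers and the seed here are placeholders:

```python
import random

# Sketch of the random-subsampling design: 75 gages, six censoring levels,
# 30 random subsets per level (subset sizes vary slightly with rounding,
# as in the study's "approximately the indicated percentages").
random.seed(42)
gages = [f"G{i:02d}" for i in range(75)]          # 75 gaging stations
censor_levels = [0.10, 0.20, 0.30, 0.40, 0.50, 0.75]

subsets = {
    level: [random.sample(gages, round(len(gages) * (1 - level)))
            for _ in range(30)]                    # 30 random subsets per level
    for level in censor_levels
}
```

    Comparing streamflow estimates across the 30 subsets at each level is what lets the study separate the effect of network size from that of network composition.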

  19. Online image-guided intensity-modulated radiotherapy for prostate cancer: How much improvement can we expect? A theoretical assessment of clinical benefits and potential dose escalation by improving precision and accuracy of radiation delivery

    SciTech Connect

    Ghilezan, Michel; Yan, Di (E-mail: dyan@beaumont.edu); Liang, Jian; Jaffray, David; Wong, John; Martinez, Alvaro

    2004-12-01

    Purpose: To quantify the theoretical benefit, in terms of improvement in precision and accuracy of treatment delivery and in dose increase, of using online image-guided intensity-modulated radiotherapy (IG-IMRT) performed with onboard cone-beam computed tomography (CT), in an ideal setting of no intrafraction motion/deformation, in the treatment of prostate cancer. Methods and materials: Twenty-two prostate cancer patients treated with conventional radiotherapy underwent multiple serial CT scans (median 18 scans per patient) during their treatment. We assumed that these data sets were equivalent to image sets obtainable by an onboard cone-beam CT. Each patient treatment was simulated with conventional IMRT and online IG-IMRT separately. The conventional IMRT plan was generated on the basis of pretreatment CT, with a clinical target volume to planning target volume (CTV-to-PTV) margin of 1 cm, and the online IG-IMRT plan was created before each treatment fraction on the basis of the CT scan of the day, without CTV-to-PTV margin. The inverse planning process was similar for both conventional IMRT and online IG-IMRT. Treatment dose for each organ of interest was quantified, including patient daily setup error and internal organ motion/deformation. We used generalized equivalent uniform dose (EUD) to compare the two approaches. The generalized EUD (percentage) of each organ of interest was scaled relative to the prescription dose at treatment isocenter for evaluation and comparison. On the basis of bladder wall and rectal wall EUD, a dose-escalation coefficient was calculated, representing the potential increment of the treatment dose achievable with online IG-IMRT as compared with conventional IMRT. Results: With respect to radiosensitive tumor, the average EUD for the target (prostate plus seminal vesicles) was 96.8% for conventional IMRT and 98.9% for online IG-IMRT, with standard deviations (SDs) of 5.6% and 0.7%, respectively (p < 0.0001). The average EUDs of
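    The generalized EUD used in this comparison is conventionally the power mean of the voxel doses, gEUD = ((1/N) Σ dᵢᵃ)^(1/a) (Niemierko's form). A short sketch, assuming equal-volume voxels (parameter values are illustrative):

```python
def generalized_eud(doses, a):
    """Generalized equivalent uniform dose: power mean of voxel doses.
    Negative 'a' (tumors) pulls gEUD toward the coldest voxel; large
    positive 'a' (serial organs) pulls it toward the hottest voxel."""
    n = len(doses)
    return (sum(d ** a for d in doses) / n) ** (1.0 / a)

uniform = generalized_eud([60.0] * 100, a=-10)            # ≈ 60.0: uniform dose maps to itself
cold_spot = generalized_eud([40.0] + [60.0] * 99, a=-10)  # dragged below 60 by one cold voxel
```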

  20. A precise spectrophotometric method for measuring sodium dodecyl sulfate concentration.

    PubMed

    Rupprecht, Kevin R; Lang, Ewa Z; Gregory, Svetoslava D; Bergsma, Janet M; Rae, Tracey D; Fishpaugh, Jeffrey R

    2015-10-01

    Sodium dodecyl sulfate (SDS) is used to denature and solubilize proteins, especially membrane and other hydrophobic proteins. A quantitative method to determine the concentration of SDS using the dye Stains-All is known. However, this method lacks the accuracy and reproducibility necessary for use with protein solutions where SDS concentration is a critical factor, so we modified this method after examining multiple parameters (solvent, pH, buffers, and light exposure). The improved method is simple to implement, robust, accurate, and (most important) precise. PMID:26150094

  1. Open Science and Research Reproducibility.

    PubMed

    Munafò, Marcus

    2016-01-01

    Many scientists, journals and funders are concerned about the low reproducibility of many scientific findings. One approach that may serve to improve the reliability and robustness of research is open science. Here I argue that the process of pre-registering study protocols, sharing study materials and data, and posting preprints of manuscripts may serve to improve quality control procedures at every stage of the research pipeline, and in turn improve the reproducibility of published work. PMID:27350794

  2. Open Science and Research Reproducibility

    PubMed Central

    Munafò, Marcus

    2016-01-01

    Many scientists, journals and funders are concerned about the low reproducibility of many scientific findings. One approach that may serve to improve the reliability and robustness of research is open science. Here I argue that the process of pre-registering study protocols, sharing study materials and data, and posting preprints of manuscripts may serve to improve quality control procedures at every stage of the research pipeline, and in turn improve the reproducibility of published work. PMID:27350794

  3. Precision digital control systems

    NASA Astrophysics Data System (ADS)

    Vyskub, V. G.; Rozov, B. S.; Savelev, V. I.

    This book is concerned with the characteristics of digital control systems of great accuracy. A classification of such systems is considered along with aspects of stabilization, programmable control applications, digital tracking systems and servomechanisms, and precision systems for the control of a scanning laser beam. Other topics explored are related to systems of proportional control, linear devices and methods for increasing precision, approaches for further decreasing the response time in the case of high-speed operation, possibilities for the implementation of a logical control law, and methods for the study of precision digital control systems. A description is presented of precision automatic control systems which make use of electronic computers, taking into account the existing possibilities for an employment of computers in automatic control systems, approaches and studies required for including a computer in such control systems, and an analysis of the structure of automatic control systems with computers. Attention is also given to functional blocks in the considered systems.

  4. Towards Reproducibility in Computational Hydrology

    NASA Astrophysics Data System (ADS)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    The ability to reproduce published scientific findings is a foundational principle of scientific research. Independent observation helps to verify the legitimacy of individual findings; build upon sound observations so that we can evolve hypotheses (and models) of how catchments function; and move them from specific circumstances to more general theory. The rise of computational research has brought increased focus on the issue of reproducibility across the broader scientific literature. This is because publications based on computational research typically do not contain sufficient information to enable the results to be reproduced, and therefore verified. Given the rise of computational analysis in hydrology over the past 30 years, to what extent is reproducibility, or a lack thereof, a problem in hydrology? Whilst much hydrological code is accessible, the actual code and workflow that produced, and therefore document the provenance of, published scientific findings are rarely available. We argue that in order to advance and make more robust the process of hypothesis testing and knowledge creation within the computational hydrological community, we need to build on existing open data initiatives and adopt common standards and infrastructures to: first, make code re-usable and easy to find through consistent use of metadata; second, publish well-documented workflows that combine re-usable code with data to enable published scientific findings to be reproduced; finally, use unique persistent identifiers (e.g. DOIs) to reference re-usable and reproducible code, thereby clearly showing the provenance of published scientific findings. Whilst extra effort is required to make work reproducible, there are benefits to both the individual and the broader community in doing so, which will improve the credibility of the science in the face of the need for societies to adapt to changing hydrological environments.

  5. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5 nm, it becomes crucial to also include systematic error contributions, which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1 nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to a metrology inaccuracy of ~10 nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st-order diffraction-based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by a recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  6. Precision Nova operations

    NASA Astrophysics Data System (ADS)

    Ehrlich, Robert B.; Miller, John L.; Saunders, Rodney L.; Thompson, Calvin E.; Weiland, Timothy L.; Laumann, Curt W.

    1995-12-01

    To improve the symmetry of x-ray drive on indirectly driven ICF capsules, we have increased the accuracy of operating procedures and diagnostics on the Nova laser. Precision Nova operations include routine precision power balance to within 10% rms in the 'foot' and 5% rms in the peak of shaped pulses, beam synchronization to within 10 ps rms, and pointing of the beams onto targets to within 35 micrometer rms. We have also added a 'fail-safe chirp' system to avoid stimulated Brillouin scattering (SBS) in optical components during high energy shots.

  7. Precision Nova operations

    SciTech Connect

    Ehrlich, R.B.; Miller, J.L.; Saunders, R.L.; Thompson, C.E.; Weiland, T.L.; Laumann, C.W.

    1995-09-01

    To improve the symmetry of x-ray drive on indirectly driven ICF capsules, we have increased the accuracy of operating procedures and diagnostics on the Nova laser. Precision Nova operations include routine precision power balance to within 10% rms in the "foot" and 5% rms in the peak of shaped pulses, beam synchronization to within 10 ps rms, and pointing of the beams onto targets to within 35 μm rms. We have also added a "fail-safe chirp" system to avoid stimulated Brillouin scattering (SBS) in optical components during high energy shots.

  8. Rotary head type reproducing apparatus

    DOEpatents

    Takayama, Nobutoshi; Edakubo, Hiroo; Kozuki, Susumu; Takei, Masahiro; Nagasawa, Kenichi

    1986-01-01

    In an apparatus of the kind arranged to reproduce, with a plurality of rotary heads, an information signal from a record bearing medium having many recording tracks which are parallel to each other with the information signal recorded therein and with a plurality of different pilot signals of different frequencies also recorded one by one, one in each of the recording tracks, a plurality of different reference signals of different frequencies are simultaneously generated. A tracking error is detected by using the different reference signals together with the pilot signals which are included in signals reproduced from the plurality of rotary heads.

  9. Towards high accuracy calibration of electron backscatter diffraction systems.

    PubMed

    Mingard, Ken; Day, Austin; Maurice, Claire; Quested, Peter

    2011-04-01

    For precise orientation and strain measurements, advanced Electron Backscatter Diffraction (EBSD) techniques require both accurate calibration and reproducible measurement of the system geometry. In many cases the pattern centre (PC) needs to be determined to sub-pixel accuracy. The mechanical insertion/retraction, through the Scanning Electron Microscope (SEM) chamber wall, of the electron sensitive part of modern EBSD detectors also causes alignment and positioning problems and requires frequent monitoring of the PC. Optical alignment and lens distortion issues within the scintillator, lens and charge-coupled device (CCD) camera combination of an EBSD detector need accurate measurement for each individual EBSD system. This paper highlights and quantifies these issues and demonstrates the determination of the pattern centre using a novel shadow-casting technique with a precision of ∼10μm or ∼1/3 CCD pixel. PMID:21396526

  10. Reproducible Bioinformatics Research for Biologists

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  11. Performance reproducibility index for classification

    PubMed Central

    Yousefi, Mohammadmahdi R.; Dougherty, Edward R.

    2012-01-01

    Motivation: A common practice in biomarker discovery is to decide whether a large laboratory experiment should be carried out based on the results of a preliminary study on a small set of specimens. Consideration of the efficacy of this approach motivates the introduction of a probabilistic measure, for whether a classifier showing promising results in a small-sample preliminary study will perform similarly on a large independent sample. Given the error estimate from the preliminary study, if the probability of reproducible error is low, then there is really no purpose in substantially allocating more resources to a large follow-on study. Indeed, if the probability of the preliminary study providing likely reproducible results is small, then why even perform the preliminary study? Results: This article introduces a reproducibility index for classification, measuring the probability that a sufficiently small error estimate on a small sample will motivate a large follow-on study. We provide a simulation study based on synthetic distribution models that possess known intrinsic classification difficulties and emulate real-world scenarios. We also set up similar simulations on four real datasets to show the consistency of results. The reproducibility indices for different distributional models, real datasets and classification schemes are empirically calculated. The effects of reporting and multiple-rule biases on the reproducibility index are also analyzed. Availability: We have implemented in C code the synthetic data distribution model, classification rules, feature selection routine and error estimation methods. The source code is available at http://gsp.tamu.edu/Publications/supplementary/yousefi12a/. Supplementary simulation results are also included. Contact: edward@ece.tamu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:22954625
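    A toy version of such an index can be simulated: draw many small two-class studies, keep those whose small-sample error estimate looks promising, and ask how often the trained classifier's true (large-sample) error would confirm the finding. The distributions, thresholds, and names below are purely illustrative, not the authors' model:

```python
import math
import random
import statistics

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def true_error(t, d):
    """Exact error of the rule 'predict class 1 if x > t' when
    class 0 ~ N(0,1) and class 1 ~ N(d,1), equal priors."""
    return 0.5 * (1.0 - phi(t)) + 0.5 * phi(t - d)

def reproducibility_index(d=2.0, n_small=10, eps=0.2, tol=0.25,
                          n_trials=2000, seed=1):
    """Fraction of 'promising' small studies (estimated error <= eps)
    whose classifier would also meet the tolerance (true error <= tol)
    in an arbitrarily large follow-on study."""
    rng = random.Random(seed)
    promising = reproduced = 0
    for _ in range(n_trials):
        x0 = [rng.gauss(0.0, 1.0) for _ in range(n_small)]
        x1 = [rng.gauss(d, 1.0) for _ in range(n_small)]
        t = 0.5 * (statistics.mean(x0) + statistics.mean(x1))  # midpoint rule
        # error estimate from the small sample itself (resubstitution):
        est = (sum(x > t for x in x0) + sum(x <= t for x in x1)) / (2 * n_small)
        if est <= eps:
            promising += 1
            if true_error(t, d) <= tol:
                reproduced += 1
    return reproduced / promising if promising else float("nan")
```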

  12. Precise Orbit Determination for ALOS

    NASA Technical Reports Server (NTRS)

    Nakamura, Ryo; Nakamura, Shinichi; Kudo, Nobuo; Katagiri, Seiji

    2007-01-01

    The Advanced Land Observing Satellite (ALOS) has been developed to contribute to the fields of mapping, precise regional land coverage observation, disaster monitoring, and resource surveying. Because the mounted sensors need high geometrical accuracy, precise orbit determination for ALOS is essential for satisfying the mission objectives. ALOS therefore carries a GPS receiver and a Laser Reflector (LR) for Satellite Laser Ranging (SLR). This paper describes precise orbit determination experiments for ALOS using the Global and High Accuracy Trajectory determination System (GUTS) and evaluates the orbit determination accuracy against SLR data. The results show that, even though the GPS receiver loses lock on GPS signals more frequently than expected, the GPS-based orbit is consistent with the SLR-based orbit. Considering the 1-sigma error, an orbit determination accuracy of a few decimeters (peak-to-peak) was achieved.

  13. Reproducibility of UAV-based earth topography reconstructions based on Structure-from-Motion algorithms

    NASA Astrophysics Data System (ADS)

    Clapuyt, Francois; Vanacker, Veerle; Van Oost, Kristof

    2016-05-01

    The combination of UAV-based aerial pictures and the Structure-from-Motion (SfM) algorithm provides an efficient, low-cost and rapid framework for remote sensing and monitoring of dynamic natural environments. This methodology is particularly suitable for repeated topographic surveys in remote or poorly accessible areas. However, temporal analysis of landform topography requires high accuracy of measurements and reproducibility of the methodology, as differencing of digital surface models leads to error propagation. In order to assess the repeatability of the SfM technique, we surveyed a study area characterized by gentle topography with a UAV platform equipped with a standard reflex camera, and varied the focal length of the camera and the location of georeferencing targets between flights. Comparison of different SfM-derived topography datasets shows that the precision of measurements is of the order of centimetres for identical replications, which highlights the excellent performance of the SfM workflow when all parameters are equal. The measurement spread is one order of magnitude larger for 3D topographic reconstructions involving independent sets of ground control points, because errors in the localisation of the ground control points propagate strongly into the final results.

  14. Regional cerebral blood flow utilizing the gamma camera and xenon inhalation: reproducibility and clinical applications

    SciTech Connect

    Fox, R.A.; Knuckey, N.W.; Fleay, R.F.; Stokes, B.A.; Van der Schaaf, A.; Surveyor, I.

    1985-11-01

    A modified collimator and standard gamma camera have been used to measure regional cerebral blood flow following inhalation of radioactive xenon. The collimator and a simplified analysis technique enable excellent statistical accuracy to be achieved with acceptable precision in the measurement of grey matter blood flow. The validity of the analysis was supported by computer modelling and patient measurements. Sixty-one patients with subarachnoid hemorrhage, cerebrovascular disease or dementia were retested to determine the reproducibility of our method. The measured coefficient of variation was 6.5%. Of forty-six patients who had a proven subarachnoid hemorrhage, 15 subsequently developed cerebral ischemia. These showed a CBF of 42 ± 6 mL·min⁻¹·(100 g brain)⁻¹ compared with 49 ± 11 mL·min⁻¹·(100 g brain)⁻¹ for the remainder. There is evidence that decreasing blood flow and low initial flow correlate with the subsequent onset of cerebral ischemia.
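    The 6.5% coefficient of variation quoted above is the standard test-retest reproducibility statistic: the sample standard deviation of repeat measurements expressed as a percentage of their mean. A minimal sketch (the flow values below are invented for illustration, not the study's data):

```python
import statistics

def coefficient_of_variation(values):
    """Percent coefficient of variation (CV = 100 * s / mean),
    a common test-retest reproducibility measure."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical repeat grey-matter flow readings (mL/min per 100 g brain):
flows = [49.0, 52.0, 47.0, 50.0, 51.0]
print(round(coefficient_of_variation(flows), 1))  # 3.9
```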

  15. Potential of accuracy profile for method validation in inductively coupled plasma spectrochemistry

    NASA Astrophysics Data System (ADS)

    Mermet, J. M.; Granier, G.

    2012-10-01

    Method validation is usually performed over a range of concentrations for which analytical criteria must be verified. One important criterion in quantitative analysis is accuracy, i.e. the contribution of both trueness and precision. The study of accuracy over this range is called an accuracy profile and provides experimental tolerance intervals. Comparison with acceptability limits fixed by the end user defines a validity domain. This work describes the computation involved in the building of the tolerance intervals, particularly for the intermediate precision with within-laboratory experiments and for the reproducibility with interlaboratory studies. Computation is based on ISO 5725-4 and on previously published work. Moreover, the bias uncertainty is also computed to verify the bias contribution to accuracy. The various types of accuracy profile behavior are exemplified with results obtained by using ICP-MS and ICP-AES. This procedure allows the analyst to define unambiguously a validity domain for a given accuracy. However, because the experiments are time-consuming, the accuracy profile method is mainly dedicated to method validation.
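    The core of an accuracy profile is, at each concentration level, a tolerance interval built from the bias and the intermediate-precision standard deviation. A much-simplified sketch (a one-way ANOVA split between runs and replicates, with a crude k = 2 coverage factor in place of the ISO 5725-4 / β-expectation factors; the data and names are invented):

```python
import statistics

def accuracy_profile_point(runs, true_value, k=2.0):
    """Simplified tolerance interval at one concentration level.
    runs: list of runs (series), each a list of replicate results.
    Returns (relative bias in %, tolerance interval in % of true value)."""
    run_means = [statistics.mean(r) for r in runs]
    grand_mean = statistics.mean(run_means)
    n_rep = len(runs[0])
    within_var = statistics.mean(statistics.variance(r) for r in runs)
    between_var = max(statistics.variance(run_means) - within_var / n_rep, 0.0)
    s_ip = (within_var + between_var) ** 0.5   # intermediate precision
    bias_pct = 100.0 * (grand_mean - true_value) / true_value
    half_pct = 100.0 * k * s_ip / true_value
    return bias_pct, (bias_pct - half_pct, bias_pct + half_pct)
```

For example, three runs of duplicates at a nominal 100 — `[[98.0, 99.5], [101.0, 100.0], [99.0, 98.5]]` — give a relative bias of about −0.7 % and a tolerance interval of roughly −3.0 % to +1.6 %, which would sit inside ±5 % acceptability limits and so fall within the validity domain.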

  16. Precision technique for side-polished fiber fabrication

    NASA Astrophysics Data System (ADS)

    Mishakov, Gennadi V.; Sokolov, Victor I.

    2002-04-01

    The precision technique for side polishing of single-mode quartz fibers is developed. The technique comprises cutting curved groove in silica block, gluing a section of bare fiber into the groove, and subsequent grinding and polishing of the silica block/fiber assembly. We succeeded in fabricating up to six side-polished fibers in one block with effective interaction length 2-4 mm. The accuracy of polishing depth was achieved at 1 micrometers using in-situ monitoring of transmission of 1.3 micrometers laser light through the fiber. The developed technique combines high accuracy, reproducibility and low cost in commercial production. Side- polished single-mode fibers fabricated with this technique can find application as elements of Bragg grating transmission filters, narrowband reflectors, optical add/drop multiplexers, couplers, polarizers, sensors, etc.

  17. Reproducibility of airway wall thickness measurements

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael; Kuhnigk, Jan-Martin; Krass, Stefan; Owsijewitsch, Michael; de Hoop, Bartjan; Peitgen, Heinz-Otto

    2010-03-01

    Airway remodeling and accompanying changes in wall thickness are known to be a major symptom of chronic obstructive pulmonary disease (COPD), associated with reduced lung function in diseased individuals. Further investigation of this disease, as well as monitoring of disease progression and treatment effect, demands accurate and reproducible assessment of airway wall thickness in CT datasets. With wall thicknesses in the sub-millimeter range, this task remains challenging even with today's high resolution CT datasets. To provide accurate measurements, taking partial volume effects into account is mandatory. The Full-Width-at-Half-Maximum (FWHM) method has been shown to be inappropriate for small airways [1,2], and several improved algorithms for objective quantification of airway wall thickness have been proposed [1-8]. In this paper, we describe an algorithm based on a closed form solution proposed by Weinheimer et al. [7]. We locally estimate the lung density parameter required for the closed form solution to account for possible variations of parenchyma density between different lung regions, inspiration states and contrast agent concentrations. The general accuracy of the algorithm is evaluated using basic tubular software and hardware phantoms. Furthermore, we present results on the reproducibility of the algorithm with respect to clinical CT scans, varying reconstruction kernels, and repeated acquisitions, which is crucial for longitudinal observations.
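    For context, the FWHM estimator that this work improves upon measures wall thickness as the width of a 1-D intensity profile across the wall at half its maximum. A minimal sketch (the paper's actual algorithm replaces this with a closed-form, partial-volume-aware solution; names and the linear interpolation are illustrative):

```python
def fwhm(profile, spacing=1.0):
    """Full-Width-at-Half-Maximum of a 1-D intensity profile, with linear
    interpolation at the half-maximum crossings. Returns width in the
    units of `spacing` (e.g. mm per sample)."""
    lo, hi = min(profile), max(profile)
    half = lo + 0.5 * (hi - lo)
    above = [i for i, v in enumerate(profile) if v >= half]
    left, right = above[0], above[-1]

    def cross(i, j):
        # interpolate the half-maximum crossing between samples i and j
        return i + (half - profile[i]) / (profile[j] - profile[i]) * (j - i)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right + 1, right) if right < len(profile) - 1 else float(right)
    return (x_right - x_left) * spacing

print(fwhm([0, 0, 1, 2, 1, 0, 0], spacing=0.5))  # 1.0
```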

  18. Reproducibility of NIF hohlraum measurements

    NASA Astrophysics Data System (ADS)

    Moody, J. D.; Ralph, J. E.; Turnbull, D. P.; Casey, D. T.; Albert, F.; Bachmann, B. L.; Doeppner, T.; Divol, L.; Grim, G. P.; Hoover, M.; Landen, O. L.; MacGowan, B. J.; Michel, P. A.; Moore, A. S.; Pino, J. E.; Schneider, M. B.; Tipton, R. E.; Smalyuk, V. A.; Strozzi, D. J.; Widmann, K.; Hohenberger, M.

    2015-11-01

    The strategy of experimentally "tuning" the implosion in a NIF hohlraum ignition target towards increasing hot-spot pressure, areal density of compressed fuel, and neutron yield relies on a level of experimental reproducibility. We examine the reproducibility of experimental measurements for a collection of 15 identical NIF hohlraum experiments. The measurements include incident laser power, backscattered optical power, x-ray measurements, hot-electron fraction and energy, and target characteristics. We use exact statistics to set 1-sigma confidence levels on the variations in each of the measurements. Of particular interest is the backscatter and laser-induced hot-spot locations on the hohlraum wall. Hohlraum implosion designs typically include variability specifications [S. W. Haan et al., Phys. Plasmas 18, 051001 (2011)]. We describe our findings and compare with the specifications. This work was performed under the auspices of the U.S. Department of Energy by University of California, Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.

  19. Reproducibility of neuroimaging analyses across operating systems

    PubMed Central

    Glatard, Tristan; Lewis, Lindsay B.; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C.

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed. PMID:25964757
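    The Dice coefficients reported above measure the voxel overlap between the label maps a pipeline produces on two different operating systems; a value of 1 means identical segmentations. The metric itself is simple (the toy voxel sets below are illustrative, not the study's data):

```python
def dice_coefficient(a, b):
    """Dice overlap between two binary label sets, e.g. the voxel indices
    assigned to a structure by the same pipeline on two systems."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty segmentations agree perfectly
    return 2.0 * len(a & b) / (len(a) + len(b))

run_linux = {(1, 2), (1, 3), (2, 2), (2, 3)}   # hypothetical voxel labels, OS A
run_other = {(1, 2), (1, 3), (2, 2), (3, 3)}   # hypothetical voxel labels, OS B
print(dice_coefficient(run_linux, run_other))  # 0.75
```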

  20. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    NASA Astrophysics Data System (ADS)

    Bhagwat, Swetha; Kumar, Prayush; Barkett, Kevin; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilagyi, Bela; LIGO Collaboration

    2016-03-01

    Detection of gravitational waves involves extracting extremely weak signals from noisy data, and detection depends crucially on the accuracy of the signal models. The most accurate models of compact binary coalescence come from solving Einstein's equations numerically, without approximation. However, this is computationally formidable. As a more practical alternative, several analytic or semi-analytic approximations have been developed to model these waveforms. However, the work of Nitz et al. (2013) demonstrated that there is disagreement between these models. We present a careful follow-up study on the accuracies of different waveform families for spinning black hole-neutron star binaries, in the context of both detection and parameter estimation, and find SEOBNRv2 to be the most faithful model. Post-Newtonian models can be used for detection, but we find that they could lead to large parameter bias. Supported by National Science Foundation (NSF) Awards No. PHY-1404395 and No. AST-1333142.

  1. Reproducibility of UAV-based earth surface topography based on structure-from-motion algorithms.

    NASA Astrophysics Data System (ADS)

    Clapuyt, François; Vanacker, Veerle; Van Oost, Kristof

    2014-05-01

    A representation of the earth surface at very high spatial resolution is crucial to accurately map small geomorphic landforms with high precision. Very high resolution digital surface models (DSM) can then be used to quantify changes in earth surface topography over time, based on differencing of DSMs taken at various moments in time. However, it is compulsory to have both high accuracy for each topographic representation and consistency between measurements over time, as DSM differencing automatically leads to error propagation. This study investigates the reproducibility of reconstructions of earth surface topography based on structure-from-motion (SfM) algorithms. To this end, we equipped an eight-propeller drone with a standard reflex camera. This equipment can easily be deployed in the field, as it is a lightweight, low-cost system in comparison with classic aerial photo surveys and terrestrial or airborne LiDAR scanning. Four sets of aerial photographs were created for one test field. The sets of airphotos differ in focal length and viewing angle, i.e. nadir view and ground-level view. In addition, the importance of the accuracy of ground control points for the construction of a georeferenced point cloud was assessed using two different GPS devices with horizontal accuracy at the sub-meter and sub-decimeter level, respectively. Airphoto datasets were processed with the SfM algorithm and the resulting point clouds were georeferenced. Then, the surface representations were compared with each other to assess the reproducibility of the earth surface topography. Finally, consistency between independent datasets is discussed.
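    The error propagation mentioned above has a standard form: differencing two DSMs adds their independent vertical uncertainties in quadrature, which sets the smallest topographic change that can be separated from survey noise. A sketch (the sigma values are hypothetical):

```python
import math

def dod_uncertainty(sigma_dsm1, sigma_dsm2):
    """1-sigma uncertainty of a DSM-of-difference, assuming the two
    surveys have independent vertical errors (quadrature sum)."""
    return math.hypot(sigma_dsm1, sigma_dsm2)

def min_detectable_change(sigma_dsm1, sigma_dsm2, k=1.96):
    """Level of detection: elevation changes smaller than this cannot be
    distinguished from survey noise at ~95 % confidence."""
    return k * dod_uncertainty(sigma_dsm1, sigma_dsm2)

# Two SfM surveys with 4 cm and 3 cm vertical error, respectively:
print(round(min_detectable_change(0.04, 0.03), 3))  # 0.098 (metres)
```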

  2. Flexible and precise drop test system

    NASA Astrophysics Data System (ADS)

    Stämpfli, Rolf; Brühwiler, Paul A.

    2009-11-01

    A drop test system with flexibility in the choice of falling object has been constructed and characterized. Using the guided free fall principle, the system enables the study of impacts of a large range of objects on a wide selection of anvils, with high control of the position and orientation of the object. The latter is demonstrated with falls of a standard aluminium headform in mountaineering helmets on a kerbstone anvil, for which visual inspection with a high-speed camera confirms the desired accuracy. Impacts of a flat falling body on cylindrical polystyrene foam samples are used to derive stress-strain curves for materials of different density and for multilayer samples. In this case, the effects of striker orientation and placement on the resultant data are discussed, and the reproducibility of the data serves as an additional confirmation of the accuracy of the measurement apparatus and procedures. A check on the improvement in the level of positional and orientational striking precision achievable is obtained via an inter-laboratory comparison.
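    Deriving stress-strain curves from a flat-striker impact, as described above, amounts to normalising the recorded force by the sample cross-section and the displacement by the sample height (engineering stress and strain; the function and variable names are illustrative, not the authors' code):

```python
import math

def stress_strain(force_N, displacement_m, sample_diameter_m, sample_height_m):
    """Convert a flat-striker impact record into engineering stress-strain
    for a cylindrical foam sample."""
    area = math.pi * (sample_diameter_m / 2.0) ** 2     # cross-section, m^2
    stress = [f / area for f in force_N]                # Pa
    strain = [x / sample_height_m for x in displacement_m]  # dimensionless
    return strain, stress

# Hypothetical record: 100 N at 10 mm compression of a 100 mm x 50 mm sample
strain, stress = stress_strain([0.0, 100.0], [0.0, 0.01], 0.1, 0.05)
```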

  3. Making Precise Antenna Reflectors For Millimeter Wavelengths

    NASA Technical Reports Server (NTRS)

    Sharp, G. Richard; Wanhainen, Joyce S.; Ketelsen, Dean A.

    1994-01-01

    In an improved method of fabrication of precise, lightweight antenna reflectors for millimeter wavelengths, the required precise contours of reflecting surfaces are obtained by computer-numerically-controlled machining of surface layers bonded to lightweight, rigid structures. Achievable precision is greater than that of an older, more expensive fabrication method involving multiple steps of low- and high-temperature molding, in which some accuracy is lost at each step.

  4. Precision translator

    DOEpatents

    Reedy, Robert P.; Crawford, Daniel W.

    1984-01-01

    A precision translator for focusing a beam of light on the end of a glass fiber, comprising two tuning-fork-like members rigidly connected to each other. Each member has two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. The translator is made of simple parts and holds its adjustment even under rough handling.

  5. Precision translator

    DOEpatents

    Reedy, R.P.; Crawford, D.W.

    1982-03-09

    A precision translator for focusing a beam of light on the end of a glass fiber, comprising two tuning-fork-like members rigidly connected to each other. Each member has two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. The translator is made of simple parts and holds its adjustment even under rough handling.

  6. Precision and accuracy in fluorescent short tandem repeat DNA typing: assessment of benefits imparted by the use of allelic ladders with the AmpF/STR Profiler Plus kit.

    PubMed

    Leclair, Benoît; Frégeau, Chantal J; Bowen, Kathy L; Fourney, Ron M

    2004-03-01

    Base-calling precision of short tandem repeat (STR) allelic bands on dynamic slab-gel electrophoresis systems was evaluated. Data were collected from over 6000 allele peaks generated from 468 population database samples amplified with the AmpF/STR Profiler Plus (PP) kit and electrophoresed on ABD 377 DNA sequencers. Precision was measured by way of standard deviations and was shown to be essentially the same whether fixed or floating bin genotyping was used. However, the allelic ladders proved more sensitive to electrophoretic variations than database samples, which on occasion caused some floating bins of D18S51 to shift. This observation prompted an investigation of polyacrylamide gel formulations in order to stabilize allelic ladder migration. The results demonstrate that, although alleles contained in allelic ladders and questioned samples run on the same gel should migrate in an identical manner, this premise needs to be verified for any given electrophoresis platform and gel formulation. We show that the compilation of base-calling data is a very informative and useful tool for assessing the performance stability of dynamic gel electrophoresis systems, on which the quality of genotyping results depends. PMID:15004837
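
The fixed- versus floating-bin distinction in the abstract can be illustrated with a toy genotyper (allele names, sizes, and the ±0.5-base bin half-width below are hypothetical): fixed bins are static windows around nominal allele sizes, while floating bins re-center on the allelic ladder run on the same gel, absorbing gel-to-gel migration shifts.

```python
def genotype_fixed(size, nominal_sizes, half_width=0.5):
    """Fixed-bin calling: bins are static windows around nominal
    allele sizes (in bases); returns the allele name or None."""
    for allele, center in nominal_sizes.items():
        if abs(size - center) <= half_width:
            return allele
    return None

def genotype_floating(size, ladder_sizes, half_width=0.5):
    """Floating-bin calling: bins are re-centered on the allelic
    ladder electrophoresed on the same gel."""
    allele = min(ladder_sizes, key=lambda a: abs(size - ladder_sizes[a]))
    return allele if abs(size - ladder_sizes[allele]) <= half_width else None

nominal = {"12": 120.0, "13": 124.0}   # hypothetical nominal sizes
ladder = {"12": 120.3, "13": 124.3}    # same gel, shifted by +0.3 bases
```

With this +0.3-base migration shift, a peak measured at 120.6 bases falls outside the fixed bin for allele 12 but inside the floating bin re-centered on the shifted ladder, which is why ladder stability matters for floating-bin genotyping.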

  7. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Based on research in the area of precise ephemerides for GPS satellites, the following observations can be made regarding the status of, and future work needed on, orbit accuracy. Several aspects must be addressed in determining precise orbits, such as force models, kinematic models, measurement models, and data reduction/estimation methods. Although each of these aspects was studied at CSR, only points pertaining to force modeling are addressed here.

  8. Precision synchrotron radiation detectors

    SciTech Connect

    Levi, M.; Rouse, F.; Butler, J.; Jung, C.K.; Lateur, M.; Nash, J.; Tinsman, J.; Wormser, G.; Gomez, J.J.; Kent, J.

    1989-03-01

    Precision detectors to measure synchrotron radiation beam positions have been designed and installed as part of beam energy spectrometers at the Stanford Linear Collider (SLC). The distance between pairs of synchrotron radiation beams is measured absolutely to better than 28 μm on a pulse-to-pulse basis. This contributes less than 5 MeV to the error in the measurement of SLC beam energies (approximately 50 GeV). A system of high-resolution video cameras viewing precisely-aligned fiducial wire arrays overlaying phosphorescent screens has achieved this accuracy. Also, detectors of synchrotron radiation using the charge developed by the ejection of Compton-recoil electrons from an array of fine wires are being developed. 4 refs., 5 figs., 1 tab.

  9. Positioning accuracy of cone-beam computed tomography in combination with a HexaPOD robot treatment table

    SciTech Connect

    Meyer, Juergen . E-mail: juergen.meyer@canterbury.ac.nz; Wilbert, Juergen; Baier, Kurt; Guckenberger, Matthias; Richter, Anne; Sauer, Otto; Flentje, Michael

    2007-03-15

    Purpose: To scrutinize the positioning accuracy and reproducibility of a commercial hexapod robot treatment table (HRTT) in combination with a commercial cone-beam computed tomography system for image-guided radiotherapy (IGRT). Methods and Materials: The mechanical stability of the X-ray volume imaging (XVI) system was tested in terms of reproducibility and with a focus on the moveable parts, i.e., the influence of kV panel and the source arm on the reproducibility and accuracy of both bone and gray value registration using a head-and-neck phantom. In consecutive measurements the accuracy of the HRTT for translational, rotational, and a combination of translational and rotational corrections was investigated. The operational range of the HRTT was also determined and analyzed. Results: The system performance of the XVI system alone was very stable with mean translational and rotational errors of below 0.2 mm and below 0.2°, respectively. The mean positioning accuracy of the HRTT in combination with the XVI system summarized over all measurements was below 0.3 mm and below 0.3° for translational and rotational corrections, respectively. The gray value match was more accurate than the bone match. Conclusion: The XVI image acquisition and registration procedure were highly reproducible. Both translational and rotational positioning errors can be corrected very precisely with the HRTT. The HRTT is therefore well suited to complement cone-beam computed tomography to take full advantage of position correction in six degrees of freedom for IGRT. The combination of XVI and the HRTT has the potential to improve the accuracy of high-precision treatments.

  10. Precise Measurement for Manufacturing

    NASA Technical Reports Server (NTRS)

    2003-01-01

    A metrology instrument known as PhaseCam supports a wide range of applications, from testing large optics to controlling factory production processes. This dynamic interferometer system enables precise measurement of three-dimensional surfaces in the manufacturing industry, delivering speed and high-resolution accuracy in even the most challenging environments. Compact and reliable, PhaseCam enables users to make interferometric measurements right on the factory floor. The system can be configured for many different applications, including mirror phasing, vacuum/cryogenic testing, motion/modal analysis, and flow visualization.

  11. Precision Pointing System Development

    SciTech Connect

    BUGOS, ROBERT M.

    2003-03-01

    The development of precision pointing systems has been underway in Sandia's Electronic Systems Center for over thirty years. Important areas of emphasis are synthetic aperture radars and optical reconnaissance systems. Most applications are in the aerospace arena, with host vehicles including rockets, satellites, and manned and unmanned aircraft. Systems have been used on defense-related missions throughout the world. Presently in development are pointing systems with accuracy goals in the nanoradian regime. Future activity will include efforts to dramatically reduce system size and weight through measures such as the incorporation of advanced materials and MEMS inertial sensors.

  12. Reproducibility of muscle oxygen saturation.

    PubMed

    Thiel, C; Vogt, L; Himmelreich, H; Hübscher, M; Banzer, W

    2011-04-01

    The present study evaluated the reproducibility of tissue oxygenation in relation to oxygen consumption (VO2) across cycle exercise intensities in a test-retest design. 12 subjects (25.7±2.1 years; 24.7±1.9 kg·m⁻²) twice performed an incremental bicycle exercise protocol, while tissue oxygen saturation (StO2) in the vastus lateralis muscle was monitored by a commercially available NIRS unit and VO2 was determined by an open-circuit indirect calorimetric system. Coefficients of variation across rest, workloads corresponding to 25, 50 and 75% of individual maximum capacity, and maximum load were 5.8, 4.6, 6.1, 8.0, 11.0% (StO2) and 7.6, 6.0, 3.7, 3.4, 3.1% (VO2), respectively. 95% CIs of relative test-retest differences ranged from -5.6 to +5.4% at 25% load to -17.2 to +7.5% at maximum load for StO2, and from -7.3 to +7.7% at rest to -3.3 to +3.2% at maximum load for VO2. With advancing exercise intensity, within-subject variability of StO2 was augmented, whereas VO2 variability slightly attenuated. NIRS measurements at higher workloads need to be interpreted with caution. PMID:21271493
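
The coefficients of variation reported above can be computed from test-retest pairs. A minimal sketch (the formula is the standard root-mean-square within-subject CV; the StO2 readings below are made up, not the study's data):

```python
from statistics import mean

def within_subject_cv(test, retest):
    """Within-subject coefficient of variation (%) from paired
    test-retest measurements: root mean square of per-subject CVs.
    The SD of two values a, b is |a - b| / sqrt(2)."""
    squared_cvs = []
    for a, b in zip(test, retest):
        m = (a + b) / 2
        sd = abs(a - b) / 2 ** 0.5
        squared_cvs.append((sd / m) ** 2)
    return 100 * mean(squared_cvs) ** 0.5

# Hypothetical StO2 readings (%) for five subjects, two visits:
cv = within_subject_cv([68, 71, 65, 70, 66], [66, 73, 64, 72, 67])
```

Computing this statistic separately at each workload, as the study does, shows directly whether within-subject variability grows with exercise intensity.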

  13. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The emphasis of this grant was focused on precision ephemerides for the Global Positioning System (GPS) satellites for geodynamics applications. During the period of this grant, major activities were in the areas of thermal force modeling, numerical integration accuracy improvement for eclipsing satellites, analysis of GIG '91 campaign data, and the Southwest Pacific campaign data analysis.

  14. Precision orbit computations for Starlette

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Williamson, R. G.

    1976-01-01

    The Starlette satellite, launched in February 1975 by the French Centre National d'Etudes Spatiales, was designed to minimize the effects of nongravitational forces and to obtain the highest possible accuracy for laser range measurements. Analyses of the first four months of global laser tracking data confirmed the stability of the orbit and the precision to which the satellite's position is established.

  15. Making neurophysiological data analysis reproducible: why and how?

    PubMed

    Delescluse, Matthieu; Franconville, Romain; Joucla, Sébastien; Lieury, Tiffany; Pouzat, Christophe

    2012-01-01

    Reproducible data analysis is an approach that aims to complement classical printed scientific articles with everything required to independently reproduce the results they present. "Everything" covers here: the data, the computer codes and a precise description of how the code was applied to the data. A brief history of this approach is presented first, starting with what economists have been calling replication since the early eighties and ending with what is now called reproducible research in computational data-analysis-oriented fields like statistics and signal processing. Since efficient tools are instrumental for a routine implementation of these approaches, a description of some of the available ones is presented next. A toy example then demonstrates the use of two open-source software programs for reproducible data analysis: the "Sweave family" and the org-mode of emacs. The former is bound to R, while the latter can be used with R, Matlab, Python and many more "generalist" data processing software. Both solutions can be used with the Unix-like, Windows and Mac families of operating systems. It is argued that neuroscientists could communicate their results much more efficiently by adopting the reproducible research paradigm, from their lab books all the way to their articles, theses and books. PMID:21986476
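
The paradigm described above, in miniature: a script that fixes its random seed and records the provenance of its inputs, so that "the data, the code, and how the code was applied" travel together with the result. The function and field names are illustrative only; the paper itself uses Sweave and org-mode rather than plain Python.

```python
import hashlib
import json
import random
import sys

def run_analysis(data, seed=20120101):
    """Toy reproducible analysis: a bootstrap mean whose every run
    records the exact inputs (data hash, seed, interpreter version)
    next to the result, so an independent reader can re-execute the
    code and compare outputs."""
    random.seed(seed)  # deterministic resampling
    resampled = [random.choice(data) for _ in data]
    result = sum(resampled) / len(resampled)
    provenance = {
        "data_sha256": hashlib.sha256(json.dumps(data).encode()).hexdigest(),
        "seed": seed,
        "python": sys.version.split()[0],
    }
    return result, provenance
```

Because the seed and the data hash are part of the output, two runs on the same inputs are guaranteed to agree, and any silent change to the data is detectable.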

  16. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla

    2015-11-01

    Coalescing binaries of neutron stars and black holes are one of the most important sources of gravitational waves for the upcoming network of ground-based detectors. Detection and extraction of astrophysical information from gravitational-wave signals requires accurate waveform models. The effective-one-body and other phenomenological models interpolate between analytic results and numerical relativity simulations, which typically span O(10) orbits before coalescence. In this paper we study the faithfulness of these models for neutron star-black hole binaries. We investigate their accuracy using new numerical relativity (NR) simulations that span 36-88 orbits, with mass ratios q and black hole spins χBH of (q, χBH) = (7, ±0.4), (7, ±0.6), and (5, −0.9). These simulations were performed treating the neutron star as a low-mass black hole, ignoring its matter effects. We find that (i) the recently published SEOBNRv1 and SEOBNRv2 models of the effective-one-body family disagree with each other (mismatches of a few percent) for black hole spins χBH ≥ 0.5 or χBH ≤ −0.3, with waveform mismatch accumulating during early inspiral; (ii) comparison with numerical waveforms indicates that this disagreement is due to phasing errors of SEOBNRv1, with SEOBNRv2 in good agreement with all of our simulations; (iii) phenomenological waveforms agree with SEOBNRv2 only for comparable-mass low-spin binaries, with overlaps below 0.7 elsewhere in the neutron star-black hole binary parameter space; (iv) comparison with numerical waveforms shows that most of this model's dephasing accumulates near the frequency interval where it switches to a phenomenological phasing prescription; and finally (v) both SEOBNR and post-Newtonian models are effectual for neutron star-black hole systems, but post-Newtonian waveforms will give a significant bias in parameter recovery. Our results suggest that future gravitational-wave detection searches and parameter estimation efforts would benefit

  17. Reproducibility in density functional theory calculations of solids.

    PubMed

    Lejaeghere, Kurt; Bihlmayer, Gustav; Björkman, Torbjörn; Blaha, Peter; Blügel, Stefan; Blum, Volker; Caliste, Damien; Castelli, Ivano E; Clark, Stewart J; Dal Corso, Andrea; de Gironcoli, Stefano; Deutsch, Thierry; Dewhurst, John Kay; Di Marco, Igor; Draxl, Claudia; Dułak, Marcin; Eriksson, Olle; Flores-Livas, José A; Garrity, Kevin F; Genovese, Luigi; Giannozzi, Paolo; Giantomassi, Matteo; Goedecker, Stefan; Gonze, Xavier; Grånäs, Oscar; Gross, E K U; Gulans, Andris; Gygi, François; Hamann, D R; Hasnip, Phil J; Holzwarth, N A W; Iuşan, Diana; Jochym, Dominik B; Jollet, François; Jones, Daniel; Kresse, Georg; Koepernik, Klaus; Küçükbenli, Emine; Kvashnin, Yaroslav O; Locht, Inka L M; Lubeck, Sven; Marsman, Martijn; Marzari, Nicola; Nitzsche, Ulrike; Nordström, Lars; Ozaki, Taisuke; Paulatto, Lorenzo; Pickard, Chris J; Poelmans, Ward; Probert, Matt I J; Refson, Keith; Richter, Manuel; Rignanese, Gian-Marco; Saha, Santanu; Scheffler, Matthias; Schlipf, Martin; Schwarz, Karlheinz; Sharma, Sangeeta; Tavazza, Francesca; Thunström, Patrik; Tkatchenko, Alexandre; Torrent, Marc; Vanderbilt, David; van Setten, Michiel J; Van Speybroeck, Veronique; Wills, John M; Yates, Jonathan R; Zhang, Guo-Xu; Cottenier, Stefaan

    2016-03-25

    The widespread popularity of density functional theory has given rise to an extensive range of dedicated codes for predicting molecular and crystalline properties. However, each code implements the formalism in a different way, raising questions about the reproducibility of such predictions. We report the results of a community-wide effort that compared 15 solid-state codes, using 40 different potentials or basis set types, to assess the quality of the Perdew-Burke-Ernzerhof equations of state for 71 elemental crystals. We conclude that predictions from recent codes and pseudopotentials agree very well, with pairwise differences that are comparable to those between different high-precision experiments. Older methods, however, have less precise agreement. Our benchmark provides a framework for users and developers to document the precision of new applications and methodological improvements. PMID:27013736

  18. Indirect orthodontic bonding - a modified technique for improved efficiency and precision

    PubMed Central

    Nojima, Lincoln Issamu; Araújo, Adriele Silveira; Alves, Matheus

    2015-01-01

    INTRODUCTION: The indirect bonding technique optimizes fixed appliance installation at the orthodontic office, ensuring precise bracket positioning, among other advantages. In the laboratory phase of this technique, the materials and methods employed in creating the transfer tray are decisive for accuracy. OBJECTIVE: This article describes a simple, efficient and reproducible indirect bonding technique that allows the procedure to be carried out successfully. Variables influencing orthodontic bonding are analyzed and discussed in order to aid professionals wishing to adopt the indirect bonding technique routinely in their clinical practice. PMID:26154464

  19. Francis M. Pipkin Award Talk - Precision Measurement with Atom Interferometry

    NASA Astrophysics Data System (ADS)

    Müller, Holger

    2015-05-01

    Atom interferometers are relatives of Young's double-slit experiment that use matter waves. They leverage light-atom interactions to measure fundamental constants, test fundamental symmetries, sense weak fields such as gravity and the gravity gradient, search for elusive ``fifth forces,'' and potentially test properties of antimatter and detect gravitational waves. We will discuss large (multiphoton-) momentum transfer that can enhance the sensitivity and accuracy of atom interferometers several thousand fold. We will discuss measuring the fine structure constant to sub-part-per-billion precision and how it tests the standard model of particle physics. Finally, there has been interest in light bosons as candidates for dark matter and dark energy; atom interferometers have favorable sensitivity in searching for those fields. As a first step, we present our experiment ruling out chameleon fields and a broad class of other theories that would reproduce the observed dark energy density.

  20. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  1. Precision spectroscopy of Helium

    SciTech Connect

    Cancio, P.; Giusfredi, G.; Mazzotti, D.; De Natale, P.; De Mauro, C.; Krachmalnicoff, V.; Inguscio, M.

    2005-05-05

    Accurate Quantum-Electrodynamics (QED) tests of the simplest bound three-body atomic system are performed by precise laser spectroscopic measurements in atomic helium. In this paper, we present a review of measurements between triplet states at 1083 nm (2³S-2³P) and at 389 nm (2³S-3³P). In ⁴He, such data have been used to measure the fine structure of the triplet P levels and then to determine the fine structure constant when compared with equally accurate theoretical calculations. Moreover, the absolute frequencies of the optical transitions have been used for Lamb-shift determinations of the levels involved with unprecedented accuracy. Finally, determinations of the nuclear structure of the He isotopes, in particular a measurement of the nuclear charge radius, are performed using hyperfine structure and isotope-shift measurements.

  2. Precision grid and hand motion for accurate needle insertion in brachytherapy

    SciTech Connect

    McGill, Carl S.; Schwartz, Jonathon A.; Moore, Jason Z.; McLaughlin, Patrick W.; Shih, Albert J.

    2011-08-15

    Purpose: In prostate brachytherapy, a grid is used to guide a needle tip toward a preplanned location within the tissue. During insertion, the needle deflects en route, resulting in target misplacement. In this paper, 18-gauge needle insertion experiments into phantom were performed to test the effects of three parameters: the clearance between the grid hole and needle, the thickness of the grid, and the needle insertion speed. A measurement apparatus consisting of two datum surfaces and a digital depth gauge was developed to quantify needle deflections. Methods: A gauge repeatability and reproducibility (GR&R) test was performed on the measurement apparatus, which proved capable of measuring a 2 mm tolerance from the target. Replicated experiments were performed on a 2³ factorial design (three parameters at two levels) and the analysis included averages and standard deviations along with an analysis of variance (ANOVA) to find significant single and two-way interaction factors. Results: Results showed that a grid with tight-clearance holes and a slow needle speed increased the precision and accuracy of needle insertion. The tight grid was vital to enhancing precision and accuracy at both slow and fast insertion speeds; additionally, at slow speed the tight, thick grid further improved needle precision and accuracy. Conclusions: In summary, the tight grid is important regardless of speed. The grid design, which has shown the capability to reduce needle deflection, can potentially be implemented in brachytherapy procedures.
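
The 2³ factorial analysis mentioned in the abstract estimates each factor's main effect as the difference between the mean response at its high and low levels. A minimal sketch (the ±1 factor coding is standard, but the deflection model below is hypothetical, not the study's data):

```python
from itertools import product
from statistics import mean

FACTORS = ("clearance", "thickness", "speed")

def main_effects(results):
    """Main effects in a two-level full factorial design. `results`
    maps (clearance, thickness, speed), each coded as -1 or +1, to a
    measured response such as needle tip deflection in mm."""
    effects = {}
    for i, name in enumerate(FACTORS):
        hi = mean(v for k, v in results.items() if k[i] == +1)
        lo = mean(v for k, v in results.items() if k[i] == -1)
        effects[name] = hi - lo
    return effects

# Hypothetical responses: deflection grows with clearance and speed.
runs = {(c, t, s): 2.0 + 0.5 * c + 0.25 * s
        for c, t, s in product((-1, +1), repeat=3)}
```

On this made-up data, `main_effects(runs)` recovers a clearance effect of 1.0 mm and a speed effect of 0.5 mm; a full ANOVA would additionally test two-way interactions against replicate error, as the study describes.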

  3. Matter power spectrum and the challenge of percent accuracy

    NASA Astrophysics Data System (ADS)

    Schneider, Aurel; Teyssier, Romain; Potter, Doug; Stadel, Joachim; Onions, Julian; Reed, Darren S.; Smith, Robert E.; Springel, Volker; Pearce, Frazer R.; Scoccimarro, Roman

    2016-04-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k ≤ 1 h Mpc⁻¹ and to within three percent at k ≤ 10 h Mpc⁻¹. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k ≤ 2 h Mpc⁻¹. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code, Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h⁻¹ Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of Mp = 10⁹ h⁻¹ M⊙ is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.
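
The trillion-particle claim follows directly from the quoted particle mass: the particle count is the mean matter density times the box volume divided by the particle mass. A back-of-the-envelope check (the Ωm value and the 3 h⁻¹ Gpc survey-scale box side are assumptions, not figures from the paper):

```python
RHO_CRIT = 2.775e11  # critical density, h^2 Msun Mpc^-3

def particles_required(box_side_gpc_h, particle_mass_msun_h, omega_m=0.315):
    """N-body particles needed to fill a periodic box of side
    box_side_gpc_h (h^-1 Gpc) at the given particle mass (h^-1 Msun),
    assuming the mean matter density omega_m * RHO_CRIT."""
    volume = (box_side_gpc_h * 1e3) ** 3  # (h^-1 Mpc)^3
    return omega_m * RHO_CRIT * volume / particle_mass_msun_h

# A 3 h^-1 Gpc survey-scale box at Mp = 1e9 h^-1 Msun:
n = particles_required(3.0, 1e9)  # roughly 2.4e12 particles
```

At the paper's recommended particle mass, a survey-scale box indeed needs a few trillion particles, consistent with the abstract's conclusion.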

  4. Technical issues in using robots to reproduce joint specific gait.

    PubMed

    Rosvold, Joshua M; Darcy, Shon P; Peterson, Robert C; Achari, Yamini; Corr, David T; Marchuk, Linda L; Frank, Cyril B; Shrive, Nigel G

    2011-05-01

    Reproduction of the in vivo motions of joints has become possible with improvements in robot technology and in vivo measuring techniques. A motion analysis system has been used to measure the motions of the tibia and femur of the ovine stifle joint during normal gait. These in vivo motions are then reproduced with a parallel robot. To ensure that the motion of the joint is accurately reproduced and that the resulting data are reliable, the testing frame, the data acquisition system, and the effects of limitations of the testing platform need to be considered. Of the latter, the stiffness of the robot and the ability of the control system to process sequential points on the path of motion in a timely fashion for repeatable path accuracy are of particular importance. Use of the system developed will lead to a better understanding of the mechanical environment of joints and ligaments in vivo. PMID:21599101

  5. Precision ozone vapor pressure measurements

    NASA Technical Reports Server (NTRS)

    Hanson, D.; Mauersberger, K.

    1985-01-01

    The vapor pressure above liquid ozone has been measured with high accuracy over a temperature range of 85 to 95 K. At the boiling point of liquid argon (87.3 K) an ozone vapor pressure of 0.0403 Torr was obtained with an accuracy of ±0.7 percent. A least-squares fit of the data provided the Clausius-Clapeyron equation for liquid ozone; a latent heat of 82.7 cal/g was calculated. High-precision vapor pressure data are expected to aid research in atmospheric ozone measurements and in many laboratory ozone studies, such as measurements of cross sections and reaction rates.
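
The least-squares fit mentioned above takes the form ln P = A − B/T, with the latent heat following from the slope via L = B·R/M. A sketch (function names are illustrative; the synthetic data in the usage below are generated from the paper's reported numbers, not the original measurements):

```python
import math

R_CAL = 1.987  # gas constant, cal mol^-1 K^-1
M_O3 = 48.0    # molar mass of ozone, g mol^-1

def fit_clausius_clapeyron(temps_k, pressures_torr):
    """Least-squares fit of ln P = A - B/T; returns (A, B)."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(p) for p in pressures_torr]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return ybar - slope * xbar, -slope  # A, B in kelvin

def latent_heat_cal_per_g(b):
    """Latent heat of vaporization implied by the fitted slope B."""
    return b * R_CAL / M_O3
```

Fitting synthetic pressures generated over 85-95 K with B = 82.7 × 48 / 1.987 K, anchored at 0.0403 Torr at 87.3 K, recovers the 82.7 cal/g latent heat.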

  6. Mixed-Precision Spectral Deferred Correction: Preprint

    SciTech Connect

    Grout, Ray W. S.

    2015-09-02

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
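
The preprint's mixed-precision idea can be illustrated on a simpler problem than SDC (this is classic iterative refinement for a linear solve, not the authors' combustion code): the expensive inner solves run in float32, residuals are accumulated in float64, and the refined answer still reaches double accuracy.

```python
import numpy as np

def mixed_precision_solve(a, b, sweeps=10):
    """Solve A x = b using single-precision inner solves and
    double-precision residuals; analogous in spirit to running SDC
    correction sweeps in reduced precision without degrading the
    accuracy of the final solution."""
    a32 = a.astype(np.float32)
    x = np.linalg.solve(a32, b.astype(np.float32)).astype(np.float64)
    for _ in range(sweeps):
        r = b - a @ x  # residual computed in full double precision
        dx = np.linalg.solve(a32, r.astype(np.float32))
        x += dx.astype(np.float64)
    return x
```

For a well-conditioned system, each cheap float32 sweep knocks down the remaining error until the float64 residual reaches roundoff, which is the same mechanism that lets reduced-precision SDC sweeps save data motion.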

  7. Quality, precision and accuracy of the maximum No. 40 anemometer

    SciTech Connect

    Obermeir, J.; Blittersdorf, D.

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  8. Precision and accuracy of decay constants and age standards

    NASA Astrophysics Data System (ADS)

    Villa, I. M.

    2011-12-01

    40 years of round-robin experiments with age standards teach us that systematic errors must be present in at least N-1 labs if participants provide N mutually incompatible data. In EarthTime, the U-Pb community has produced and distributed synthetic solutions with full metrological traceability. Collector linearity is routinely calibrated under variable conditions (e.g. [1]). Instrumental mass fractionation is measured in-run with double spikes (e.g. 233U-236U). Parent-daughter ratios are metrologically traceable, so the full uncertainty budget of a U-Pb age should coincide with interlaboratory uncertainty. TIMS round-robin experiments indeed show a decrease of N towards the ideal value of 1. Comparing 235U-207Pb with 238U-206Pb ages (e.g. [2]) has resulted in a credible re-evaluation of the 235U decay constant, with lower uncertainty than gamma counting. U-Pb microbeam techniques reveal the link petrology-microtextures-microchemistry-isotope record but do not achieve the low uncertainty of TIMS. In the K-Ar community, N is large; interlaboratory bias is > 10 times self-assessed uncertainty. Systematic errors may have analytical and petrological reasons. Metrological traceability is not yet implemented (substantial advance may come from work in progress, e.g. [7]). One of the worst problems is collector stability and linearity. Using electron multipliers (EM) instead of Faraday buckets (FB) reduces both dynamic range and collector linearity. Mass spectrometer backgrounds are never zero; the extent as well as the predictability of their variability must be propagated into the uncertainty evaluation. The high isotope ratio of the atmospheric Ar requires a large dynamic range over which linearity must be demonstrated under all analytical conditions to correctly estimate mass fractionation. The only assessment of EM linearity in Ar analyses [3] points out many fundamental problems; the onus of proof is on every laboratory claiming low uncertainties. 
Finally, sample size reduction is often associated with reducing clean-up time to increase sample/blank ratio; this may be self-defeating, as "dry blanks" [4] do not represent either the isotopic composition or the amount of Ar released by the sample chamber when exposed to unpurified sample gas. Single grains enhance background and purification problems relative to large sample sizes measured on FB. Petrologically, many natural "standards" are not ideal (e.g. MMhb1 [5], B4M [6]), as their original distributors never conceived petrology as the decisive control on isotope retention. Comparing ever smaller aliquots of unequilibrated minerals causes ever larger age variations. Metrologically traceable synthetic isotope mixtures still lie in the future. Petrological non-ideality of natural standards does not allow a metrological uncertainty budget. Collector behavior, on the contrary, does. Its quantification will, by definition, make true intralaboratory uncertainty greater or equal to interlaboratory bias. [1] Chen J, Wasserburg GJ, 1981. Analyt Chem 53, 2060-2067 [2] Mattinson JM, 2010. Chem Geol 275, 186-198 [3] Turrin B et al, 2010. G-cubed, 11, Q0AA09 [4] Baur H, 1975. PhD thesis, ETH Zürich, No. 6596 [5] Villa IM et al, 1996. Contrib Mineral Petrol 126, 67-80 [6] Villa IM, Heri AR, 2010. AGU abstract V31A-2296 [7] Morgan LE et al, in press. G-cubed, 2011GC003719

  9. Factors affecting accuracy and precision in PET volume imaging

    SciTech Connect

    Karp, J.S.; Daube-Witherspoon, M.E.; Muehllehner, G. )

    1991-03-01

    Volume imaging positron emission tomographic (PET) scanners with no septa and a large axial acceptance angle offer several advantages over multiring PET scanners. A volume imaging scanner combines high sensitivity with fine axial sampling and spatial resolution. The fine axial sampling minimizes the partial volume effect, which affects the measured concentration of an object. Even if the size of an object is large compared to the slice spacing in a multiring scanner, significant variation in the concentration is measured as a function of the axial position of the object. With a volume imaging scanner, it is necessary to use a three-dimensional reconstruction algorithm in order to avoid variations in the axial resolution as a function of the distance from the center of the scanner. In addition, good energy resolution is needed in order to use a high energy threshold to reduce the coincident scattered radiation.

  10. Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images

    PubMed Central

    Frey, Eric C.; Humm, John L.; Ljungberg, Michael

    2012-01-01

    The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429

  11. Tomography & Geochemistry: Precision, Repeatability, Accuracy and Joint Interpretations

    NASA Astrophysics Data System (ADS)

    Foulger, G. R.; Panza, G. F.; Artemieva, I. M.; Bastow, I. D.; Cammarano, F.; Doglioni, C.; Evans, J. R.; Hamilton, W. B.; Julian, B. R.; Lustrino, M.; Thybo, H.; Yanovskaya, T. B.

    2015-12-01

Seismic tomography can reveal the spatial seismic structure of the mantle, but has little ability to constrain composition, phase or temperature. In contrast, petrology and geochemistry can give insights into mantle composition, but have severely limited spatial control on magma sources. For these reasons, results from these three disciplines are often interpreted jointly. Nevertheless, the limitations of each method are often underestimated, and underlying assumptions de-emphasized. Examples of the limitations of seismic tomography include its restricted ability to image the three-dimensional structure of the mantle in detail or to determine the strengths of anomalies with certainty. Despite this, published seismic anomaly strengths are often unjustifiably translated directly into physical parameters. Tomography yields seismological parameters such as wave speed and attenuation, not geological or thermal parameters. Much of the mantle is poorly sampled by seismic waves, and resolution- and error-assessment methods do not express the true uncertainties. These and other problems have become highlighted in recent years as a result of multiple tomography experiments performed by different research groups in areas of particular interest, e.g., Yellowstone. The repeatability of the results is often poorer than the calculated resolutions. The ability of geochemistry and petrology to identify magma sources and locations is typically overestimated. These methods have little ability to determine source depths. Models that assign geochemical signatures to specific layers in the mantle, including the transition zone, the lower mantle, and the core-mantle boundary, are based on speculative models that cannot be verified and for which viable, less-astonishing alternatives are available. Our knowledge of the size, distribution and location of protoliths, of the metasomatism of magma sources, of the nature of the partial-melting and melt-extraction process, of the mixing of disparate melts, and of the re-assimilation of crust and mantle lithosphere by rising melt is poor. Interpretations of seismic tomography, of petrologic and geochemical observations, and of all three together are ambiguous, and this needs to be emphasized more when presenting interpretations so that the viability of the models can be assessed more reliably.

  12. Global positioning system measurements for crustal deformation: Precision and accuracy

    USGS Publications Warehouse

    Prescott, W.H.; Davis, J.L.; Svarc, J.L.

    1989-01-01

    Analysis of 27 repeated observations of Global Positioning System (GPS) position-difference vectors, up to 11 kilometers in length, indicates that the standard deviation of the measurements is 4 millimeters for the north component, 6 millimeters for the east component, and 10 to 20 millimeters for the vertical component. The uncertainty grows slowly with increasing vector length. At 225 kilometers, the standard deviation of the measurement is 6, 11, and 40 millimeters for the north, east, and up components, respectively. Measurements with GPS and Geodolite, an electromagnetic distance-measuring system, over distances of 10 to 40 kilometers agree within 0.2 part per million. Measurements with GPS and very long baseline interferometry of the 225-kilometer vector agree within 0.05 part per million.
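The agreement figures above are ratios of discrepancy to baseline length; a minimal sketch of the conversion (the function name is illustrative, not from the study):

```python
def agreement_ppm(discrepancy_mm, baseline_km):
    """Express a baseline discrepancy as parts per million of baseline length.

    1 km = 1e6 mm, so (mm / (km * 1e6)) * 1e6 simplifies to mm / km.
    """
    return discrepancy_mm / baseline_km

# 0.05 ppm over the 225-km vector is an ~11 mm discrepancy:
print(agreement_ppm(11.25, 225.0))  # -> 0.05
```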

  13. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

The study compared three measures of foliar injury: (i) mean percent leaf area injured of all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variations were compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 µL L⁻¹ SO₂ or 0.3 µL L⁻¹ ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, the day-to-day variation accounted for <18% of the total. Reader bias in assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R² = 0.89-0.91) of the visual assessments against the grid assessments.
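The per-reader regression adjustment described above can be sketched as follows; the data and reader bias are hypothetical, assuming the grid assessment serves as the unbiased reference:

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = a + b * x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def debias(visual, a, b):
    """Map a reader's visual score back onto the unbiased (grid) scale."""
    return (visual - a) / b

# Hypothetical reader who over-reads injury by 10% plus a 2-point offset:
grid = [10.0, 20.0, 40.0, 60.0, 80.0]   # grid (unbiased) % injury
visual = [2.0 + 1.1 * g for g in grid]  # that reader's visual scores
a, b = fit_line(grid, visual)
print(round(debias(46.0, a, b), 1))  # -> 40.0
```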

  14. Precision lattice QCD: challenges and prospects

    NASA Astrophysics Data System (ADS)

    Hashimoto, Shoji

    2013-04-01

With petaflop-scale computational resources, lattice QCD simulation has recently reached one of its primary goals, i.e. reproducing the low-lying hadron spectrum starting from the QCD Lagrangian. Applications to various other phenomenological quantities, for which no other means of precise theoretical calculation is available, would become the next milestone. In this talk I will provide a brief overview of the field and summarize the remaining problems to be solved before achieving precision calculations.

  15. Arrival Metering Precision Study

    NASA Technical Reports Server (NTRS)

Prevot, Thomas; Mercer, Joey; Homola, Jeffrey; Hunt, Sarah; Gomez, Ashley; Bienert, Nancy; Omar, Faisal; Kraut, Joshua; Brasil, Connie; Wu, Minghong G.

    2015-01-01

This paper describes the background, method and results of the Arrival Metering Precision Study (AMPS) conducted in the Airspace Operations Laboratory at NASA Ames Research Center in May 2014. The simulation study measured delivery accuracy, flight efficiency, controller workload, and acceptability of time-based metering operations to a meter fix at the terminal area boundary for different resolution levels of metering delay times displayed to the air traffic controllers and different levels of airspeed information made available to the Time-Based Flow Management (TBFM) system computing the delay. The results show that the resolution of the delay countdown timer (DCT) on the controllers' displays has a significant impact on the delivery accuracy at the meter fix. The 10-second-rounded and 1-minute-rounded DCT resolutions resulted in more accurate delivery than the 1-minute-truncated resolution and were preferred by the controllers. Using the speeds the controllers entered into the fourth line of the data tag to update the delay computation in TBFM in high- and low-altitude sectors increased air traffic control efficiency and reduced fuel burn for arriving aircraft during time-based metering.

  16. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

LRO definitive and predictive accuracy requirements were easily met in the nominal mission orbit using the LP150Q lunar gravity model. Accuracy of the LP150Q model is poorer in the extended-mission elliptical orbit. Later lunar gravity models, in particular GSFC-GRAIL-270, improve OD accuracy in the extended mission. Implementation of a constrained plane when the orbit is within 45 degrees of the Earth-Moon line improves cross-track accuracy. Prediction accuracy is still challenged during full-Sun periods due to coarse spacecraft area modeling; implementation of a multi-plate area model with definitive attitude input can eliminate prediction violations, and the FDF is evaluating analytic and predicted attitude modeling to improve full-Sun prediction accuracy. Comparison of FDF ephemeris files to high-precision ephemeris files provides gross confirmation that overlap comparisons properly assess orbit accuracy.

  17. Precision Polarimetry for Cold Neutrons

    NASA Astrophysics Data System (ADS)

    Barron-Palos, Libertad; Bowman, J. David; Chupp, Timothy E.; Crawford, Christopher; Danagoulian, Areg; Gentile, Thomas R.; Jones, Gordon; Klein, Andreas; Penttila, Seppo I.; Salas-Bacci, Americo; Sharma, Monisha; Wilburn, W. Scott

    2007-10-01

The abBA and PANDA experiments, currently under development, aim to measure the correlation coefficients in polarized free-neutron beta decay at the FnPB at the SNS. The polarization of the neutron beam, polarized with a ^3He spin filter, has to be known with high precision in order to achieve the goal accuracy of these experiments. In the NPDGamma experiment, where a ^3He spin filter was used, it was observed that backgrounds play an important role in the precision to which the polarization can be determined. An experiment that focuses on reducing background sources, in order to establish techniques and find the upper limit of the polarization accuracy achievable with these spin filters, is currently in progress at LANSCE. A description of the measurement and results will be presented.

  18. Is the Determination of Specific IgE against Components Using ISAC 112 a Reproducible Technique?

    PubMed Central

    Martínez-Aranguren, Rubén; Lizaso, María T.; Goikoetxea, María J.; García, Blanca E.; Cabrera-Freitag, Paula; Trellez, Oswaldo; Sanz, María L.

    2014-01-01

Background The ImmunoCAP ISAC 112 is a fluoro-immunoassay that allows detection of specific IgE to 112 molecular components from 51 allergenic sources. We studied the reliability of this technique intra- and inter-assay, as well as inter-batch and inter-laboratory. Methods Twenty samples were studied: nineteen sera from polysensitized allergic patients, and the technique calibrator provided by the manufacturer (CTR02). We measured the sIgE from CTR02 and three patients' sera ten times in the same assay and in different assays. Furthermore, all samples were tested in two laboratories and with two batches of the ISAC kit. To evaluate the accuracy of ISAC 112, we compared the determinations of the CTR02 calibrator with their expected values using Student's t-test. To analyse precision, we calculated the coefficient of variation (CV) of the 15 allergens that generate the calibration curve, and to analyse repeatability and reproducibility, we calculated the intraclass correlation coefficient (ICC) for each allergen. Results The results obtained for CTR02 were similar to those expected for 7 of the 15 allergens that generate the calibration curve, whereas for 8 allergens the results showed significant differences. The mean CV obtained in the CTR02 determinations was 9.4%, and the variability of sera from patients was 22.9%. The agreement in the intra- and inter-assay analyses was very good for 94 allergens and good for one. In the inter-batch analysis, we obtained very good agreement for 82 allergens, good for 14, moderate for 5, poor for one, and bad for one allergen. In the inter-laboratory analysis, we obtained very good agreement for 73 allergens, good for 22, moderate for 6, and poor for two allergens. Conclusion The allergen microarray immunoassay ISAC 112 is a repeatable and reproducible in vitro diagnostic tool for determination of sIgE beyond one's own laboratory. PMID:24516646
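The coefficient of variation used above as the precision metric is the relative standard deviation of replicate determinations; a minimal sketch with hypothetical replicate sIgE readings:

```python
from statistics import mean, stdev

def coefficient_of_variation(replicates):
    """CV (%) of repeated determinations: 100 * sample SD / mean."""
    return 100.0 * stdev(replicates) / mean(replicates)

# Hypothetical replicate readings for one allergen across repeat assays:
runs = [9.8, 10.4, 10.0, 9.6, 10.2]
print(round(coefficient_of_variation(runs), 1))  # -> 3.2
```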

  19. A High Precision Method for Quantitative Measurements of Reactive Oxygen Species in Frozen Biopsies

    PubMed Central

    Lindgren, Mikael; Gustafsson, Håkan

    2014-01-01

Objective An electron paramagnetic resonance (EPR) technique using the spin probe cyclic hydroxylamine 1-hydroxy-3-methoxycarbonyl-2,2,5,5-tetramethylpyrrolidine (CMH) was introduced as a versatile method for high precision quantification of reactive oxygen species, including the superoxide radical, in frozen biological samples such as cell suspensions, blood or biopsies. Materials and Methods Loss of measurement precision and accuracy due to variations in sample size and shape was minimized by assembling the sample in a well-defined volume. Measurement was carried out at low temperature (150 K) using a nitrogen-flow Dewar. The signal intensity was measured from the EPR 1st-derivative amplitude and related to a sample of 3-carboxy-proxyl (CP•) with known spin concentration. Results The absolute spin concentration could be quantified with a precision and accuracy better than ±10 µM (k = 1). The spin concentration of samples stored at −80°C could be reproduced after 6 months of storage well within the same error estimate. Conclusion The absolute spin concentration in wet biological samples such as biopsies, water solutions and cell cultures could be quantified with higher precision and accuracy than normally achievable using common techniques such as flat cells, tissue cells and various capillary tubes. In addition, biological samples could be collected and stored for future incubation with the spin probe, and further stored for up to at least six months before EPR analysis, without loss of signal intensity. This opens the possibility of storing and transporting incubated biological samples with known accuracy of the spin concentration over time. PMID:24603936
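The ratio-to-reference quantification described above (signal amplitude relative to a CP• sample of known spin concentration) reduces to a simple proportion; a minimal sketch with hypothetical amplitudes, assuming identical sample geometry and acquisition settings:

```python
def spin_concentration(sample_amplitude, ref_amplitude, ref_conc_uM):
    """Spin concentration by ratio to a reference of known concentration,
    assuming identical sample geometry and acquisition settings."""
    return ref_conc_uM * sample_amplitude / ref_amplitude

# Hypothetical 1st-derivative amplitudes against a 100 uM reference:
print(spin_concentration(3.0, 2.0, 100.0))  # -> 150.0
```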

  20. Accuracy and Reliability of a New Tennis Ball Machine

    PubMed Central

    Brechbuhl, Cyril; Millet, Grégoire; Schmitt, Laurent

    2016-01-01

The aim was to evaluate the reliability of a newly-developed ball machine named 'Hightof' in the field and to assess its accuracy. The experiment was conducted using the 'Hawk-Eye' tracking technology. The accuracy and reliability of the ball machine were assessed during an incremental test, with 1 min of exercise and 30 s of recovery, in which the ball frequency increased from 10 to 30 balls·min-1: the initial frequency was 10 balls·min-1, increasing by 2 until 22 and then by 1 until 30 balls·min-1. The reference points for the impacts were 8.39 m from the net and 2.70 m from the lateral line on the right side and 2.83 m on the left side. The precision of the machine was similar on the right and left sides (0.63 ± 0.39 vs 0.63 ± 0.34 m). The distances to the reference point were 0.52 ± 0.42, 0.26 ± 0.19, 0.52 ± 0.37, and 0.28 ± 0.19 m for the Y-right, X-right, Y-left and X-left impacts, respectively. The precision was constant and did not degrade with intensity (i.e., ball frequency). The ball velocity was 86.3 ± 1.5 and 86.5 ± 1.3 km·h-1 for the right and left sides, respectively. The coefficient of variation of the velocity ranged between 1 and 2% in all stages (ball frequency ranging from 10 to 30 balls·min-1). Conclusion: both the accuracy and the reliability of this new ball machine appear satisfactory for field testing and training. Key points The reliability and accuracy of a new ball machine named 'Hightof' were assessed. The impact point was reproducible and similar on the right and left sides (±0.63 m). The precision was constant and did not degrade with intensity (i.e., ball frequency). The coefficient of variation of the ball velocity ranged between 1 and 2% in all stages (ball frequency ranging from 10 to 30 balls·min-1). PMID:27274663
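The accuracy metric above, the planar distance from each impact to its reference point, can be sketched as follows; the impact coordinates are hypothetical:

```python
from math import hypot
from statistics import mean

def impact_error(impact, target):
    """Planar distance (m) between a ball impact and its reference point."""
    return hypot(impact[0] - target[0], impact[1] - target[1])

# Right-side reference point; the impact coordinates are hypothetical:
target = (2.70, 8.39)  # (lateral-line, net) offsets in m
impacts = [(2.95, 8.80), (2.50, 8.10), (2.70, 9.00)]
errors = [impact_error(p, target) for p in impacts]
print(round(mean(errors), 2))  # -> 0.48
```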

  1. Fly wing vein patterns have spatial reproducibility of a single cell

    PubMed Central

    Abouchar, Laurent; Petkova, Mariela D.; Steinhardt, Cynthia R.; Gregor, Thomas

    2014-01-01

    Developmental processes in multicellular organisms occur in fluctuating environments and are prone to noise, yet they produce complex patterns with astonishing reproducibility. We measure the left–right and inter-individual precision of bilaterally symmetric fly wings across the natural range of genetic and environmental conditions and find that wing vein patterns are specified with identical spatial precision and are reproducible to within a single-cell width. The early fly embryo operates at a similar degree of reproducibility, suggesting that the overall spatial precision of morphogenesis in Drosophila performs at the single-cell level. Could development be operating at the physical limit of what a biological system can achieve? PMID:24942847

  2. Precise Fabrication of Electromagnetic-Levitation Coils

    NASA Technical Reports Server (NTRS)

    Ethridge, E.; Curreri, P.; Theiss, J.; Abbaschian, G.

    1985-01-01

Winding copper tubing on jig ensures reproducible performance. Sequence of steps ensures consistent fabrication of levitation-and-melting coils. New method enables technician to produce eight coils per day, 95 percent of them acceptable. Method employs precise step-by-step procedure on specially designed wrapping and winding jig.

  3. Reproducible research in vadose zone sciences

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A significant portion of present-day soil and Earth science research is computational, involving complex data analysis pipelines, advanced mathematical and statistical models, and sophisticated computer codes. Opportunities for scientific progress are greatly diminished if reproducing and building o...

  4. Thou Shalt Be Reproducible! A Technology Perspective

    PubMed Central

    Mair, Patrick

    2016-01-01

This article elaborates on reproducibility in psychology from a technological viewpoint. Modern open source computational environments that foster reproducibility throughout the whole research life cycle, and to which emerging psychology researchers should be sensitized, are shown and explained. First, data archiving platforms that make datasets publicly available are presented. Second, R is advocated as the data-analytic lingua franca in psychology for achieving reproducible statistical analysis. Third, dynamic report generation environments for writing reproducible manuscripts that integrate text, data analysis, and statistical outputs such as figures and tables in a single document are described. Supplementary materials are provided in order to get the reader started with these technologies. PMID:27471486

  5. Precision powder feeder

    DOEpatents

    Schlienger, M. Eric; Schmale, David T.; Oliver, Michael S.

    2001-07-10

    A new class of precision powder feeders is disclosed. These feeders provide a precision flow of a wide range of powdered materials, while remaining robust against jamming or damage. These feeders can be precisely controlled by feedback mechanisms.

  6. Optimal seeding of self-reproducing systems.

    PubMed

    Menezes, Amor A; Kabamba, Pierre T

    2012-01-01

    This article is motivated by the need to minimize the number of elements required to establish a self-reproducing system. One such system is a self-reproducing extraterrestrial robotic colony, which reduces the launch payload mass for space exploration compared to current mission configurations. In this work, self-reproduction is achieved by the actions of a robot on available resources. An important consideration for the establishment of any self-reproducing system is the identification of a seed, for instance, a set of resources and a set of robots that utilize them to produce all of the robots in the colony. This article outlines a novel algorithm to determine an optimal seed for self-reproducing systems, with application to a self-reproducing extraterrestrial robotic colony. Optimality is understood as the minimization of a cost function of the resources and, in this article, the robots. Since artificial self-reproduction is currently an open problem, the algorithm is illustrated with a simple robotic self-replicating system from the literature and with a more complicated self-reproducing example from nature. PMID:22035080

  7. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to be: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.
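The distinction the abstract draws between accuracy (systematic offset from a reference) and precision (scatter of repeated measurements) can be sketched as follows; the elevation values are hypothetical:

```python
from statistics import mean, stdev

def accuracy_and_precision(measured, truth):
    """Accuracy as the magnitude of the mean offset from a reference value;
    precision as the sample standard deviation of the repeats."""
    offsets = [m - truth for m in measured]
    return abs(mean(offsets)), stdev(offsets)

# Hypothetical repeated elevation measurements (cm) over a 100.0 cm reference:
elev = [106.0, 107.0, 106.5, 107.5, 106.0]
acc, prec = accuracy_and_precision(elev, 100.0)
print(round(acc, 2), round(prec, 2))  # -> 6.6 0.65
```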

  8. Reproducibility of UAV-based photogrammetric surface models

    NASA Astrophysics Data System (ADS)

    Anders, Niels; Smith, Mike; Cammeraat, Erik; Keesstra, Saskia

    2016-04-01

Soil erosion, rapid geomorphological change and vegetation degradation are major threats to the human and natural environment in many regions. Unmanned Aerial Vehicles (UAVs) and Structure-from-Motion (SfM) photogrammetry are invaluable tools for the collection of highly detailed aerial imagery and subsequent low cost production of 3D landscapes for an assessment of landscape change. Despite the widespread use of UAVs for image acquisition in monitoring applications, the reproducibility of UAV data products has not been explored in detail. This paper investigates this reproducibility by comparing the surface models and orthophotos derived from different UAV flights that vary in flight direction and altitude. The study area is located near Lorca, Murcia, SE Spain, which is a semi-arid medium-relief locale. The area comprises terraced agricultural fields that have been abandoned for about 40 years and have suffered subsequent damage through piping and gully erosion. In this work we focused upon variation in cell size, vertical and horizontal accuracy, and horizontal positioning of recognizable landscape features. The results suggest that flight altitude has a significant impact on reconstructed point density and related cell size, whilst flight direction affects the spatial distribution of vertical accuracy. The horizontal positioning of landscape features is relatively consistent between the different flights. We conclude that UAV data products are suitable for monitoring campaigns for land cover purposes or geomorphological mapping, but special care is required when used for monitoring changes in elevation.

  9. Precise laser frequency scanning using frequency-synthesized optical frequency sidebands - Application to isotope shifts and hyperfine structure of mercury

    NASA Technical Reports Server (NTRS)

    Rayman, M. D.; Aminoff, C. G.; Hall, J. L.

    1989-01-01

    Based on an efficient broadband electrooptic modulator producing RF optical sidebands locked to a stable cavity, a tunable dye laser can be scanned under computer control with frequency-synthesizer precision. Cavity drift is suppressed in software by using a strong feature in the spectrum for stabilization. Mercury isotope shifts are measured with a reproducibility of about 50 kHz. This accuracy of about 1/300 of the linewidth illustrates the power of the technique. Derived hyperfine-structure constants are compared with previous atomic-beam data.

  10. Within-day and between-day Reproducibility of Baroreflex Sensitivity in Healthy Adult Males.

    PubMed

    Reynolds, L J; De Ste Croix, M; James, D V B

    2016-06-01

Within-day and between-day reproducibility of supine and tilt baroreflex sensitivity were investigated utilising sequence and spectral indices in 46 healthy adult males, employing three repeat measures: baseline, +60 min and +24 h. Reproducibility was assessed via the 95% limits of agreement and by the technical error of the measurement. For spectral parameters, the limits of agreement indicated same-day reproducibility was marginally better than between-day reproducibility. For sequence parameters, between-day had marginally better agreement than same-day reproducibility. Tilt markedly improved reproducibility across all outcome measures. Precision expressed by the technical error of the measurement for all spectral outcomes was good in both supine and tilt baroreflex sensitivity (<6%). Precision was lower, but acceptable, for sequence baroreflex sensitivity outcomes in both positions (<11%). Baroreflex sensitivity transfer gain provided the best agreement and reproducibility during supine and tilt conditions. These findings suggest time and spectral techniques may be employed to assess within-day and between-day baroreflex sensitivity changes in healthy individuals. The inclusion of a tilt manoeuvre may improve the reproducibility of the outcome measure, which may aid in the detection of modest baroreflex sensitivity changes in studies employing limited sample sizes. PMID:26928916
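The two reproducibility statistics used above, 95% limits of agreement (Bland-Altman) and the technical error of measurement, can be sketched for paired repeat measures; the baroreflex sensitivity values are hypothetical:

```python
from math import sqrt
from statistics import mean, stdev

def limits_of_agreement(run1, run2):
    """Bland-Altman 95% limits of agreement: bias +/- 1.96 * SD of differences."""
    d = [a - b for a, b in zip(run1, run2)]
    bias, sd = mean(d), stdev(d)
    return bias - 1.96 * sd, bias + 1.96 * sd

def technical_error(run1, run2):
    """Technical error of measurement for paired repeats: sqrt(sum(d^2) / 2n)."""
    d = [a - b for a, b in zip(run1, run2)]
    return sqrt(sum(x * x for x in d) / (2 * len(d)))

# Hypothetical baseline vs +24 h baroreflex sensitivity values (ms/mmHg):
brs_a = [12.0, 15.0, 9.0, 11.0, 14.0]
brs_b = [11.0, 16.0, 9.5, 10.0, 13.0]
lo, hi = limits_of_agreement(brs_a, brs_b)
print(round(lo, 2), round(hi, 2))  # -> -1.61 2.21
print(round(technical_error(brs_a, brs_b), 3))  # -> 0.652
```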

  11. Transparency and Reproducibility of Observational Cohort Studies Using Large Healthcare Databases.

    PubMed

    Wang, S V; Verpillat, P; Rassen, J A; Patrick, A; Garry, E M; Bartels, D B

    2016-03-01

    The scientific community and decision-makers are increasingly concerned about transparency and reproducibility of epidemiologic studies using longitudinal healthcare databases. We explored the extent to which published pharmacoepidemiologic studies using commercially available databases could be reproduced by other investigators. We identified a nonsystematic sample of 38 descriptive or comparative safety/effectiveness cohort studies. Seven studies were excluded from reproduction, five because of violation of fundamental design principles, and two because of grossly inadequate reporting. In the remaining studies, >1,000 patient characteristics and measures of association were reproduced with a high degree of accuracy (median differences between original and reproduction <2% and <0.1). An essential component of transparent and reproducible research with healthcare databases is more complete reporting of study implementation. Once reproducibility is achieved, the conversation can be elevated to assess whether suboptimal design choices led to avoidable bias and whether findings are replicable in other data sources. PMID:26690726

  12. Assessing the reproducibility of discriminant function analyses

    PubMed Central

    Andrew, Rose L.; Albert, Arianne Y.K.; Renaut, Sebastien; Rennison, Diana J.; Bock, Dan G.

    2015-01-01

    Data are the foundation of empirical research, yet all too often the datasets underlying published papers are unavailable, incorrect, or poorly curated. This is a serious issue, because future researchers are then unable to validate published results or reuse data to explore new ideas and hypotheses. Even if data files are securely stored and accessible, they must also be accompanied by accurate labels and identifiers. To assess how often problems with metadata or data curation affect the reproducibility of published results, we attempted to reproduce Discriminant Function Analyses (DFAs) from the field of organismal biology. DFA is a commonly used statistical analysis that has changed little since its inception almost eight decades ago, and therefore provides an opportunity to test reproducibility among datasets of varying ages. Out of 100 papers we initially surveyed, fourteen were excluded because they did not present the common types of quantitative result from their DFA or gave insufficient details of their DFA. Of the remaining 86 datasets, there were 15 cases for which we were unable to confidently relate the dataset we received to the one used in the published analysis. The reasons ranged from incomprehensible or absent variable labels, the DFA being performed on an unspecified subset of the data, or the dataset we received being incomplete. We focused on reproducing three common summary statistics from DFAs: the percent variance explained, the percentage correctly assigned and the largest discriminant function coefficient. The reproducibility of the first two was fairly high (20 of 26, and 44 of 60 datasets, respectively), whereas our success rate with the discriminant function coefficients was lower (15 of 26 datasets). When considering all three summary statistics, we were able to completely reproduce 46 (65%) of 71 datasets. While our results show that a majority of studies are reproducible, they highlight the fact that many studies still are not the

  13. The Challenge of Reproducibility and Accuracy in Nutrition Research: Resources and Pitfalls.

    PubMed

    Sorkin, Barbara C; Kuszak, Adam J; Williamson, John S; Hopp, D Craig; Betz, Joseph M

    2016-03-01

    Inconsistent and contradictory results from nutrition studies conducted by different investigators continue to emerge, in part because of the inherent variability of natural products, as well as the unknown and therefore uncontrolled variables in study populations and experimental designs. Given these challenges inherent in nutrition research, it is critical for the progress of the field that researchers strive to minimize variability within studies and enhance comparability between studies by optimizing the characterization, control, and reporting of products, reagents, and model systems used, as well as the rigor and reporting of experimental designs, protocols, and data analysis. Here we describe some recent developments relevant to research on plant-derived products used in nutrition research, highlight some resources for optimizing the characterization and reporting of research using these products, and describe some of the pitfalls that may be avoided by adherence to these recommendations. PMID:26980822

  14. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

Human activity is increasing the concentrations of greenhouse gases (GHGs) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements used to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders containing identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without reliance on combusting naturally occurring materials, thereby improving analytical accuracy.

  15. Explorations in statistics: statistical facets of reproducibility.

    PubMed

    Curran-Everett, Douglas

    2016-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eleventh installment of Explorations in Statistics explores statistical facets of reproducibility. If we obtain an experimental result that is scientifically meaningful and statistically unusual, we would like to know that our result reflects a general biological phenomenon that another researcher could reproduce if (s)he repeated our experiment. But more often than not, we may learn this researcher cannot replicate our result. The National Institutes of Health and the Federation of American Societies for Experimental Biology have created training modules and outlined strategies to help improve the reproducibility of research. These particular approaches are necessary, but they are not sufficient. The principles of hypothesis testing and estimation are inherent to the notion of reproducibility in science. If we want to improve the reproducibility of our research, then we need to rethink how we apply fundamental concepts of statistics to our science. PMID:27231259

  16. Relevance relations for the concept of reproducibility

    PubMed Central

    Atmanspacher, H.; Bezzola Lambert, L.; Folkers, G.; Schubiger, P. A.

    2014-01-01

    The concept of reproducibility is widely considered a cornerstone of scientific methodology. However, recent problems with the reproducibility of empirical results in large-scale systems and in biomedical research have cast doubts on its universal and rigid applicability beyond the so-called basic sciences. Reproducibility is a particularly difficult issue in interdisciplinary work where the results to be reproduced typically refer to different levels of description of the system considered. In such cases, it is mandatory to distinguish between more and less relevant features, attributes or observables of the system, depending on the level at which they are described. For this reason, we propose a scheme for a general ‘relation of relevance’ between the level of complexity at which a system is considered and the granularity of its description. This relation implies relevance criteria for particular selected aspects of a system and its description, which can be operationally implemented by an interlevel relation called ‘contextual emergence’. It yields a formally sound and empirically applicable procedure to translate between descriptive levels and thus construct level-specific criteria for reproducibility in an overall consistent fashion. Relevance relations merged with contextual emergence challenge the old idea of one fundamental ontology from which everything else derives. At the same time, our proposal is specific enough to resist the backlash into a relativist patchwork of unconnected model fragments. PMID:24554574

  17. Reproducibility responsibilities in the HPC arena

    SciTech Connect

    Fahey, Mark R; McLay, Robert

    2014-01-01

    Expecting bit-for-bit reproducibility in the HPC arena is not feasible because of ever-changing hardware and software. No user's application is an island; it lives in an HPC ecosystem that changes over time. Old hardware stops working, and even old software won't run on new hardware. Further, software libraries change over time, in either their internals or their interfaces. So bit-for-bit reproducibility should not be expected. Rather, a reasonable expectation is that results are reproducible within error bounds, or that the answers are close (which is its own debate). To expect a researcher to reproduce their own results or the results of others within some error bounds, there must be enough information to recreate all the details of the experiment. This requires complete documentation of all phases of the researcher's workflow: from code to versioning to programming and runtime environments to publishing of data. This argument is the core statement of the Yale 2009 Declaration on Reproducible Research [1]. Although the HPC ecosystem is often outside the researchers' control, the application code could be built almost identically, and there is a chance for very similar results with only round-off differences. To achieve complete documentation at every step, the researcher, the computing center, and the funding agencies all have a role. In this thesis, the role of the researcher is expanded upon as compared to the Yale report, and the role of the computing centers is described.

  18. Reproducible measurements of MPI performance characteristics.

    SciTech Connect

    Gropp, W.; Lusk, E.

    1999-06-25

    In this paper we describe the difficulties inherent in making accurate, reproducible measurements of message-passing performance. We describe some of the mistakes often made in attempting such measurements and the consequences of such mistakes. We describe mpptest, a suite of performance measurement programs developed at Argonne National Laboratory, that attempts to avoid such mistakes and obtain reproducible measures of MPI performance that can be useful to both MPI implementers and MPI application writers. We include a number of illustrative examples of its use.
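    The measurement pitfalls described above apply beyond MPI. As a minimal illustrative sketch (not mpptest itself, and with a local buffer copy standing in for a message transfer), repeating an operation and keeping the minimum time guards against clock granularity and transient system noise; the sizes and repetition counts here are arbitrary choices:

```python
import time

def time_op(op, reps=20, inner=100):
    """Return the best (minimum) per-call time over several repetitions.

    Single-shot timings are dominated by clock granularity and transient
    system noise; repeating the operation and keeping the minimum gives
    a more reproducible estimate.
    """
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        for _ in range(inner):
            op()
        elapsed = (time.perf_counter() - t0) / inner
        best = min(best, elapsed)
    return best

# Stand-in for a message transfer: copying a buffer of n bytes.
for n in (1 << 10, 1 << 14, 1 << 18):
    src = bytearray(n)
    t = time_op(lambda: bytes(src))
    print(f"{n:7d} bytes: {t * 1e6:8.2f} us, {n / t / 1e6:8.1f} MB/s")
```

A real MPI benchmark would time a ping-pong exchange between ranks instead of a local copy, but the repeat-and-take-minimum discipline is the same.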

  19. The Economics of Reproducibility in Preclinical Research

    PubMed Central

    Freedman, Leonard P.; Cockburn, Iain M.; Simcoe, Timothy S.

    2015-01-01

    Low reproducibility rates within life science research undermine cumulative knowledge production and contribute to both delays and costs of therapeutic drug development. An analysis of past studies indicates that the cumulative (total) prevalence of irreproducible preclinical research exceeds 50%, resulting in approximately US$28,000,000,000 (US$28B)/year spent on preclinical research that is not reproducible—in the United States alone. We outline a framework for solutions and a plan for long-term improvements in reproducibility rates that will help to accelerate the discovery of life-saving therapies and cures. PMID:26057340
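    The headline figure follows from simple arithmetic: an irreproducibility rate applied to total annual US preclinical spending. A sketch, where the ~US$56.4B spend figure is an assumption drawn from the underlying analysis rather than stated in the abstract above:

```python
# Back-of-the-envelope version of the estimate described above.
# Assumed inputs (not stated in the abstract): annual US preclinical
# research spend, and the share of it that is not reproducible.
us_preclinical_spend = 56.4e9   # US$/year (assumption)
irreproducibility_rate = 0.50   # ">50%" per the abstract

wasted = us_preclinical_spend * irreproducibility_rate
print(f"~US${wasted / 1e9:.1f}B/year")  # → ~US$28.2B/year
```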

  20. The use of imprecise processing to improve accuracy in weather and climate prediction

    SciTech Connect

    Düben, Peter D.; McNamara, Hugh; Palmer, T.N.

    2014-08-15

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and
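    The bit-flip emulation described above can be sketched as follows. This is an illustrative model, not the study's emulator, and the choice to flip only low-order mantissa bits (here the bottom 20 bits of a float64) is an assumption standing in for restricting faults to the small scales:

```python
import random
import struct

def flip_bit(x, max_bit=20):
    """Flip one random low-order mantissa bit of a float64.

    Restricting flips to low-order bits (assumption) is a stand-in for
    confining hardware faults to the least significant part of the
    calculation, as the study does for the small dynamical scales.
    """
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    bits ^= 1 << random.randrange(max_bit)
    (y,) = struct.unpack("<d", struct.pack("<Q", bits))
    return y

def faulty(x, fault_rate=0.01, max_bit=20):
    """Return x, corrupted with probability fault_rate."""
    return flip_bit(x, max_bit) if random.random() < fault_rate else x

random.seed(0)
x = 3.141592653589793
print(x, flip_bit(x))  # the perturbation is far below the leading digits
```

Flipping only bits 0-19 of the 52-bit mantissa bounds the relative error near 2^-33, so large-scale behaviour is untouched while small-scale values are perturbed.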

  1. Accurate measurements of dynamics and reproducibility in small genetic networks

    PubMed Central

    Dubuis, Julien O; Samanta, Reba; Gregor, Thomas

    2013-01-01

    Quantification of gene expression has become a central tool for understanding genetic networks. In many systems, the only viable way to measure protein levels is by immunofluorescence, which is notorious for its limited accuracy. Using the early Drosophila embryo as an example, we show that careful identification and control of experimental error allows for highly accurate gene expression measurements. We generated antibodies in different host species, allowing for simultaneous staining of four Drosophila gap genes in individual embryos. Careful error analysis of hundreds of expression profiles reveals that less than ∼20% of the observed embryo-to-embryo fluctuations stem from experimental error. These measurements make it possible to extract not only very accurate mean gene expression profiles but also their naturally occurring fluctuations of biological origin and corresponding cross-correlations. We use this analysis to extract gap gene profile dynamics with ∼1 min accuracy. The combination of these new measurements and analysis techniques reveals a twofold increase in profile reproducibility owing to a collective network dynamics that relays positional accuracy from the maternal gradients to the pair-rule genes. PMID:23340845

  2. Robust Heterogeneous Anisotropic Elastic Network Model Precisely Reproduces the Experimental B-factors of Biomolecules.

    PubMed

    Xia, Fei; Tong, Dudu; Lu, Lanyuan

    2013-08-13

    A computational method called progressive fluctuation matching (PFM) is developed for constructing robust heterogeneous anisotropic network models (HANMs) for biomolecular systems. An HANM derived through the PFM approach consists of harmonic springs with realistic positive force constants, and yields calculated B-factors that are essentially identical to the experimental ones. For the four tested protein systems (crambin, trypsin inhibitor, HIV-1 protease, and lysozyme), the root-mean-square deviations between the experimental and the computed B-factors are only 0.060, 0.095, 0.247, and 0.049 Å(2), respectively, and the correlation coefficients are 0.99 for all. By comparing the HANM/ANM normal modes to their counterparts derived from both an atomistic force field and an NMR structure ensemble, it is found that HANM may provide more accurate results on protein dynamics. PMID:26584122
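    The two fit statistics quoted above (root-mean-square deviation and correlation coefficient between computed and experimental B-factors) are standard quantities; a minimal sketch using hypothetical B-factor values, not data from the study:

```python
import math

def rmsd(a, b):
    """Root-mean-square deviation between two equal-length series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a)
                           * sum((y - mb) ** 2 for y in b))

# Hypothetical experimental vs. computed B-factors (A^2), for illustration.
b_exp = [10.2, 12.5, 9.8, 15.1, 11.0]
b_cal = [10.3, 12.4, 9.9, 15.0, 11.1]
print(rmsd(b_exp, b_cal), pearson(b_exp, b_cal))
```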

  3. Construction concepts for precision segmented reflectors

    NASA Technical Reports Server (NTRS)

    Mikulas, Martin M., Jr.; Withnell, Peter R.

    1993-01-01

    Three construction concepts for deployable precision segmented reflectors are presented. The designs produce reflectors with very high surface accuracies and diameters three to five times the width of the launch vehicle shroud. Of primary importance is the reliability of both the deployment process and the reflector operation. This paper is conceptual in nature, and uses these criteria to present beneficial design concepts for deployable precision segmented reflectors.

  4. High-precision arithmetic in mathematical physics

    DOE PAGESBeta

    Bailey, David H.; Borwein, Jonathan M.

    2015-05-12

    For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are required. This article discusses the challenge of high-precision computation in the context of mathematical physics, and highlights the facilities required to support future computation, in light of emerging developments in computer architecture.
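    A small illustration of the kind of failure that motivates higher precision: a small addend is lost entirely in 64-bit arithmetic, and recovered here with the standard library's decimal module, used as a stand-in for the high-precision packages such an article would survey:

```python
from decimal import Decimal, getcontext

# Catastrophic loss of a small term: in 64-bit floats, adding 1.0 to
# 1e16 rounds back to 1e16, so the sum below collapses to zero.
a, b, c = 1e16, 1.0, -1e16
print(a + b + c)  # → 0.0  (the 1.0 vanished)

# The same sum at 50 significant digits keeps the small term.
getcontext().prec = 50
print(Decimal(10) ** 16 + Decimal(1) - Decimal(10) ** 16)  # → 1
```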

  5. On-wafer time-dependent high reproducibility nano-force tensile testing

    NASA Astrophysics Data System (ADS)

    Bergers, L. I. J. C.; Hoefnagels, J. P. M.; Geers, M. G. D.

    2014-12-01

    Time-dependent mechanical investigations of on-wafer specimens are of interest for improving the reliability of thin metal film microdevices. This paper presents a novel methodology, addressing key challenges in creep and anelasticity investigations through on-wafer tensile tests, achieving highly reproducible force and specimen deformation measurements and loading states. The methodology consists of a novel approach for precise loading using a pin-in-hole gripper and a high-precision specimen alignment system based on three-dimensional image tracking and optical profilometry resulting in angular alignment of <0.1 mrad and near-perfect co-linearity. A compact test system enables in situ tensile tests of on-wafer specimens under light and electron microscopy. Precision force measurement over a range of 0.07 µN to 250 mN is realized based on a simple drift-compensated elastically-hinged load cell with high-precision deflection measurement. The specimen deformation measurement, compensated for drift through image tracking, yields displacement reproducibility of <6 nm. Proof of principle tensile experiments are performed on 5 µm-thick aluminum-alloy thin film specimens, demonstrating reproducible Young’s modulus measurement of 72.6 ± 3.7 GPa. Room temperature creep experiments show excellent stability of the force measurement and underline the methodology’s high reproducibility and suitability for time-dependent nano-force tensile testing of on-wafer specimens.

  6. Reproducibility, Controllability, and Optimization of Lenr Experiments

    NASA Astrophysics Data System (ADS)

    Nagel, David J.

    2006-02-01

    Low-energy nuclear reaction (LENR) measurements are significantly and increasingly reproducible. Practical control of the production of energy or materials by LENR has yet to be demonstrated. Minimization of costly inputs and maximization of desired outputs of LENR remain for future developments.

  7. Natural Disasters: Earth Science Readings. Reproducibles.

    ERIC Educational Resources Information Center

    Lobb, Nancy

    Natural Disasters is a reproducible teacher book that explains what scientists believe to be the causes of a variety of natural disasters and suggests steps that teachers and students can take to be better prepared in the event of a natural disaster. It contains both student and teacher sections. Teacher sections include vocabulary, an answer key,…

  8. Reproducibility of ambulatory blood pressure load.

    PubMed

    Zachariah, P K; Sheps, S G; Bailey, K R; Wiltgen, C M; Moore, A G

    1990-12-01

    Twenty-two hypertensive patients were monitored during two separate drug-free occasions with a Del Mar Avionics ambulatory device. Blood pressure loads (percentage of systolic and diastolic readings more than 140 and 90 mmHg, respectively) and mean BP were measured both to determine their reproducibility and to examine how they correlate with each other. The systolic and diastolic mean awake BPs for day 1 and day 2 were 140/93 mmHg and 140/91 mmHg, respectively, and BP loads were 45%/55% and 43%/54%. Moreover, mean BP loads correlated highly (r = 0.93) with mean BP values taken on the same day. Both ambulatory mean SBP and BP load were highly reproducible (r = 0.87 and 0.80, respectively, during the awake hours), and mean DBP and load were fairly reproducible (r = 0.59 and 0.39, respectively, during the awake hours). Clinically, however, both were consistent from day 1 to day 2. Mean and individual standard deviations also were reproducible for both systolic and diastolic pressures and loads. PMID:2096203
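    Blood pressure load as defined above (the percentage of readings exceeding 140/90 mmHg) reduces to a simple calculation; a sketch with made-up readings, not data from the study:

```python
def bp_load(readings, sys_limit=140.0, dia_limit=90.0):
    """Percentage of systolic/diastolic readings above threshold.

    `readings` is a list of (systolic, diastolic) pairs in mmHg,
    as produced by an ambulatory monitor.
    """
    n = len(readings)
    sys_load = 100.0 * sum(s > sys_limit for s, _ in readings) / n
    dia_load = 100.0 * sum(d > dia_limit for _, d in readings) / n
    return sys_load, dia_load

# Hypothetical awake-period readings, for illustration only.
readings = [(150, 95), (138, 88), (145, 92), (132, 85), (160, 100)]
print(bp_load(readings))  # → (60.0, 60.0)
```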

  9. ITK: enabling reproducible research and open science

    PubMed Central

    McCormick, Matthew; Liu, Xiaoxiao; Jomier, Julien; Marion, Charles; Ibanez, Luis

    2014-01-01

    Reproducibility verification is essential to the practice of the scientific method. Researchers report their findings, which are strengthened as other independent groups in the scientific community share similar outcomes. In the many scientific fields where software has become a fundamental tool for capturing and analyzing data, this requirement of reproducibility implies that reliable and comprehensive software platforms and tools should be made available to the scientific community. These tools will empower researchers and the public to verify, through practice, the reproducibility of observations that are reported in the scientific literature. Medical image analysis is one of the fields in which the use of computational resources, both software and hardware, is an essential platform for performing experimental work. In this arena, the introduction of the Insight Toolkit (ITK) in 1999 has transformed the field and facilitated its progress by accelerating the rate at which algorithmic implementations are developed, tested, disseminated and improved. By building on the efficiency and quality of open source methodologies, ITK has provided the medical image community with an effective platform on which to build a daily workflow that incorporates the true scientific practices of reproducibility verification. This article describes the multiple tools, methodologies, and practices that the ITK community has adopted, refined, and followed during the past decade, in order to become one of the research communities with the most modern reproducibility verification infrastructure. For example, 207 contributors have created over 2400 unit tests that provide over 84% code line test coverage. The Insight Journal, an open publication journal associated with the toolkit, has seen over 360,000 publication downloads. The median normalized closeness centrality, a measure of knowledge flow, resulting from the distributed peer code review system was high, at 0.46. PMID:24600387

  10. ROCS: A reproducibility index and confidence score for interaction proteomics

    PubMed Central

    2013-01-01

    Background Affinity-Purification Mass-Spectrometry (AP-MS) provides a powerful means of identifying protein complexes and interactions. Several important challenges exist in interpreting the results of AP-MS experiments. First, the reproducibility of AP-MS experimental replicates can be low, due both to technical variability and the dynamic nature of protein interactions in the cell. Second, the identification of true protein-protein interactions in AP-MS experiments is subject to inaccuracy due to high false negative and false positive rates. Several experimental approaches can be used to mitigate these drawbacks, including the use of replicated and control experiments and relative quantification to sensitively distinguish true interacting proteins from false ones. Results To address the issues of reproducibility and accuracy of protein-protein interactions, we introduce a two-step method, called ROCS, which makes use of Indicator Proteins to select reproducible AP-MS experiments, and of Confidence Scores to select specific protein-protein interactions. The Indicator Proteins account for measures of protein identification as well as protein reproducibility, effectively allowing removal of outlier experiments that contribute noise and affect downstream inferences. The filtered set of experiments is then used in the Protein-Protein Interaction (PPI) scoring step. Prey protein scoring is done by computing a Confidence Score, which accounts for the probability of occurrence of prey proteins in the bait experiments relative to the control experiment, where the significance cutoff parameter is estimated by simultaneously controlling false positives and false negatives against metrics of false discovery rate and biological coherence respectively. In summary, the ROCS method relies on automatic, objective criteria for parameter estimation and error-controlled procedures. We illustrate the performance of our method by applying it to five previously published AP

  11. Inter-examiner reproducibility of tests for lumbar motor control

    PubMed Central

    2011-01-01

    Background Many studies show a relation between reduced lumbar motor control (LMC) and low back pain (LBP). However, test circumstances vary and during test performance, subjects may change position. In other words, the reliability - i.e. reproducibility and validity - of tests for LMC should be based on quantitative data. This has not been considered before. The aim was to analyse the reproducibility of five different quantitative tests for LMC commonly used in daily clinical practice. Methods The five tests for LMC were: repositioning (RPS), sitting forward lean (SFL), sitting knee extension (SKE), and bent knee fall out (BKFO), all measured in cm, and leg lowering (LL), measured in mm Hg. A total of 40 subjects (14 males, 26 females), 25 with and 15 without LBP, with a mean age of 46.5 years (SD 14.8), were examined independently and in random order by two examiners on the same day. LBP subjects were recruited from three physiotherapy clinics with a connection to the clinic's gym or back-school. Non-LBP subjects were recruited from the clinic's staff acquaintances, and from patients without LBP. Results The means and standard deviations for each of the tests were 0.36 (0.27) cm for RPS, 1.01 (0.62) cm for SFL, 0.40 (0.29) cm for SKE, 1.07 (0.52) cm for BKFO, and 32.9 (7.1) mm Hg for LL. All five tests for LMC showed high reproducibility, with the following ICCs: 0.90 for RPS, 0.96 for SFL, 0.96 for SKE, 0.94 for BKFO, and 0.98 for LL. Bland and Altman plots showed that most of the differences between examiners A and B were less than 0.20 cm. Conclusion These five tests for LMC displayed excellent reproducibility. However, the diagnostic accuracy of these tests needs to be addressed in larger cohorts of subjects, establishing values for the normal population. Also cut-points between subjects with and without LBP must be determined, taking into account age, level of activity, degree of impairment and participation in sports.
Whether reproducibility of these tests is as good
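    The study above does not state which ICC form it used; a common choice for two examiners measuring the same subjects is the two-way random-effects, single-measure ICC(2,1) of Shrout and Fleiss, sketched here from its ANOVA mean squares:

```python
def icc_2_1(data):
    """Two-way random-effects single-measure ICC(2,1) (Shrout & Fleiss).

    `data` is a list of rows, one per subject, each row holding one
    measurement per rater. This is one standard ICC variant; the exact
    form used in the study above is an assumption.
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)             # between-subjects mean square
    msc = ss_cols / (k - 1)             # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two examiners in perfect agreement on three subjects → ICC = 1.0
print(icc_2_1([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))  # → 1.0
```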

  12. The Magsat precision vector magnetometer

    NASA Technical Reports Server (NTRS)

    Acuna, M. H.

    1980-01-01

    This paper examines the Magsat precision vector magnetometer, which is designed to measure projections of the ambient field in three orthogonal directions. The system contains a highly stable and linear triaxial fluxgate magnetometer with a dynamic range of ±2000 nT (1 nT = 10⁻⁹ weber per square meter). The magnetometer electronics, analog-to-digital converter, and digitally controlled current sources are implemented with redundant designs to avoid a loss of data in case of failures. Measurements are carried out with an accuracy of ±1 part in 64,000 in magnitude and 5 arcsec in orientation (1 arcsec = 0.00028 deg).

  13. Reproducibility study for volume estimation in MRI of the brain using the Eigenimage algorithm

    NASA Astrophysics Data System (ADS)

    Windham, Joe P.; Peck, Donald J.; Soltanian-Zadeh, Hamid

    1995-05-01

    Accurate and reproducible volume calculations are essential for diagnosis and treatment evaluation in many medical situations. Current techniques employ planimetric methods that are very time consuming if reliable results are to be obtained, and the reproducibility and accuracy of these methods depend on the user and the complexity of the volume being measured. We have reported on an algorithm for volume calculation that uses the Eigenimage filter to segment a desired feature from surrounding, interfering features. The pixel intensities of the resulting image preserve information about partial volume averaging effects in each voxel, thus providing an accurate volume calculation. Also, the amount of time required is significantly reduced compared to planimetric methods, and the reproducibility is less user dependent and is independent of the volume shape. In simulations and phantom studies the errors in accuracy and reproducibility of this method were less than 2%. The purpose of this study was to determine the reproducibility of the method for volume calculations of the human brain. Ten volunteers were imaged and the volumes of white matter, gray matter, and CSF were estimated. The time required to calculate the volumes of all three tissues was approximately one minute per slice. The inter- and intra-observer reproducibility errors were less than 5% on average for all volumes calculated. These results were found to depend on the proper selection of the ROIs used to define the tissue signature vectors and on the non-uniformity of the MRI system.

  14. Reproducing kernel Hilbert space based single infrared image super resolution

    NASA Astrophysics Data System (ADS)

    Chen, Liangliang; Deng, Liangjian; Shen, Wei; Xi, Ning; Zhou, Zhanxin; Song, Bo; Yang, Yongliang; Cheng, Yu; Dong, Lixin

    2016-07-01

    The spatial resolution of infrared (IR) images is limited by lens optical diffraction, sensor array pitch size, and pixel dimension. In this work, a robust model is proposed to reconstruct a high resolution infrared image from a single low resolution sample; image features are discussed and classified as reflective, cooled emissive, and uncooled emissive based on the infrared irradiation source. A spline-based reproducing kernel Hilbert space and an approximate Heaviside function are deployed to model the smooth part and the edge component of the image, respectively. By adjusting the parameters of the Heaviside function, the proposed model can enhance distinct parts of an image. The experimental results show that the model is applicable to both reflective and emissive low resolution infrared images and improves thermal contrast. The overall outcome is a high resolution IR image, which gives the IR camera better measurement accuracy and reveals more detail at long distances.

  15. A Physical Activity Questionnaire: Reproducibility and Validity

    PubMed Central

    Barbosa, Nicolas; Sanchez, Carlos E.; Vera, Jose A.; Perez, Wilson; Thalabard, Jean-Christophe; Rieu, Michel

    2007-01-01

    This study evaluates the Quantification de L’Activite Physique en Altitude chez les Enfants (QAPACE) supervised self-administered questionnaire reproducibility and validity on the estimation of the mean daily energy expenditure (DEE) in Bogotá’s schoolchildren. Comprehension was assessed on 324 students, whereas reproducibility was studied on a different random sample of 162 who were exposed to it twice. Reproducibility was assessed using both the Bland-Altman plot and the intra-class correlation coefficient (ICC). Validity was studied in a randomly selected sample of 18 girls and 18 boys, who completed the test-retest study. The DEE derived from the questionnaire was compared with the laboratory measurement results of the peak oxygen uptake (Peak VO2) from ergo-spirometry and the Leger Test. The reproducibility ICC was 0.96 (95% C.I. 0.95-0.97); by age categories: 8-10, 0.94 (0.89-0.97); 11-13, 0.98 (0.96-0.99); 14-16, 0.95 (0.91-0.98). The ICC between mean DEE as estimated by the questionnaire and the direct and indirect Peak VO2 was 0.76 (0.66) (p<0.01); by age categories 8-10, 11-13, and 14-16 it was 0.89 (0.87), 0.76 (0.78) and 0.88 (0.80) respectively. The QAPACE questionnaire is reproducible and valid for estimating PA and showed a high correlation with Peak VO2 uptake. Key points: The presence of a supervisor and the limited size of the group, with the possibility of answering their questions, could explain the high reproducibility of this questionnaire. No study in the literature had directly addressed the issue of estimating a yearly average PA including school and vacation periods. A two-step procedure, in the population of schoolchildren of Bogotá, gives confidence in the use of the QAPACE questionnaire in a large epidemiological survey of related populations. PMID:24149485

  16. Precise Countersinking Tool

    NASA Technical Reports Server (NTRS)

    Jenkins, Eric S.; Smith, William N.

    1992-01-01

    Tool countersinks holes precisely with only portable drill; does not require costly machine tool. Replaceable pilot stub aligns axis of tool with centerline of hole. Ensures precise cut even with imprecise drill. Designed for relatively low cutting speeds.

  17. Reproducibility of electrochemical noise data from coated metal systems

    SciTech Connect

    Bierwagen, G.P.; Mills, D.J.; Tallman, D.E.; Skerry, B.S.

    1996-12-31

    The use of electrochemical noise (ECN) as a method to characterize the corrosion-protection properties of organic coatings on metal substrates was pioneered by Skerry and Eden, and has since been used by others as a probe for coating/metal corrosion studies. However, no statistical examination of the reproducibility of the data from such measurements has been published. Here the authors present a systematic analysis of important experimental variables in such systems, examining the method for accuracy and reproducibility with respect to sample preparation, sample immersion, and metal substrate preparation. They took several marine coating systems typical of US Navy use, prepared duplicate samples of coating/metal systems, and examined them under the same immersion exposure. The variables considered for reproducibility are paint application (in three-coat systems), metal panel preparation (grit-blasted steel), and immersion conditions. The authors present ECN data, as a function of immersion time, for the noise voltage standard deviation σ_V, the noise current standard deviation σ_I, and the noise resistance R_n given by σ_V/σ_I. The variation among supposedly identical sample pairs monitored under identical immersion conditions is presented. The statistics of the time records of the data are considered, and the variations with respect to specific coating classes are also considered within the limits of the data. Based on these data, comments concerning ECN on coated metal systems as a predictive test method are presented, along with special considerations that must be made to properly use the method for coating ranking and lifetime prediction.
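    The noise resistance defined above, R_n = σ_V/σ_I, is just a ratio of the standard deviations of the two time records; a sketch with synthetic Gaussian records, whose amplitudes are illustrative assumptions rather than measured values:

```python
import math
import random

def noise_resistance(v, i):
    """Electrochemical noise resistance R_n = sigma_V / sigma_I:
    the ratio of the (sample) standard deviations of the voltage-
    and current-noise time records."""
    def std(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    return std(v) / std(i)

# Synthetic time records for illustration (units: V and A).
random.seed(1)
v = [random.gauss(0.0, 1e-3) for _ in range(1000)]   # sigma_V ~ 1 mV
i = [random.gauss(0.0, 1e-9) for _ in range(1000)]   # sigma_I ~ 1 nA
print(f"R_n ~ {noise_resistance(v, i):.3g} ohm")     # around 1e6 ohm
```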

  18. "Precision" drug development?

    PubMed

    Woodcock, J

    2016-02-01

    The concept of precision medicine has entered broad public consciousness, spurred by a string of targeted drug approvals, highlighted by the availability of personal gene sequences, and accompanied by some remarkable claims about the future of medicine. It is likely that precision medicines will require precision drug development programs. What might such programs look like? PMID:26331240

  19. Precision agricultural systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture is a new farming practice that has been developing since late 1980s. It has been variously referred to as precision farming, prescription farming, site-specific crop management, to name but a few. There are numerous definitions for precision agriculture, but the central concept...

  20. A meshfree unification: reproducing kernel peridynamics

    NASA Astrophysics Data System (ADS)

    Bessa, M. A.; Foster, J. T.; Belytschko, T.; Liu, Wing Kam

    2014-06-01

    This paper is the first investigation establishing the link between the meshfree state-based peridynamics method and other meshfree methods, in particular with the moving least squares reproducing kernel particle method (RKPM). It is concluded that the discretization of state-based peridynamics leads directly to an approximation of the derivatives that can be obtained from RKPM. However, state-based peridynamics obtains the same result at a significantly lower computational cost which motivates its use in large-scale computations. In light of the findings of this study, an update to the method is proposed such that the limitations regarding application of boundary conditions and the use of non-uniform grids are corrected by using the reproducing kernel approximation.

  1. The Road to Reproducibility in Animal Research.

    PubMed

    Jilka, Robert L

    2016-07-01

    Reproducibility of research findings is the hallmark of scientific advance. However, the recently noted lack of reproducibility and transparency of published research using animal models of human biology and disease has alarmed funders, scientists, and the public. Improved reporting of methodology and better use of statistical tools are needed to enhance the quality and utility of published research. Reporting guidelines like Animal Research: Reporting In Vivo Experiments (ARRIVE) have been devised to achieve these goals, but most biomedical research journals, including the JBMR, have not been able to obtain high compliance. Cooperative efforts among authors, reviewers, and editors, empowered by increased awareness of their responsibilities and enabled by user-friendly guidelines, are needed to solve this problem. © 2016 American Society for Bone and Mineral Research. PMID:27255286

  2. An International Ki67 Reproducibility Study

    PubMed Central

    2013-01-01

    Background In breast cancer, immunohistochemical assessment of proliferation using the marker Ki67 has potential use in both research and clinical management. However, lack of consistency across laboratories has limited Ki67's value. A working group was assembled to devise a strategy to harmonize Ki67 analysis and increase scoring concordance. Toward that goal, we conducted a Ki67 reproducibility study. Methods Eight laboratories received 100 breast cancer cases arranged into 1-mm core tissue microarrays; one set was stained by the participating laboratory and one set by the central laboratory, both using antibody MIB-1. Each laboratory scored Ki67 as the percentage of positively stained invasive tumor cells using its own method. Six laboratories repeated scoring of 50 locally stained cases on 3 different days. Sources of variation were analyzed using random effects models with log2-transformed measurements. Reproducibility was quantified by the intraclass correlation coefficient (ICC), and approximate two-sided 95% confidence intervals (CIs) for the true intraclass correlation coefficients in these experiments were provided. Results Intralaboratory reproducibility was high (ICC = 0.94; 95% CI = 0.93 to 0.97). Interlaboratory reproducibility was only moderate (central staining: ICC = 0.71, 95% CI = 0.47 to 0.78; local staining: ICC = 0.59, 95% CI = 0.37 to 0.68). The geometric mean of Ki67 values for each laboratory across the 100 cases ranged from 7.1% to 23.9% with central staining and from 6.1% to 30.1% with local staining. Factors contributing to interlaboratory discordance included tumor region selection, counting method, and subjective assessment of staining positivity. Formal counting methods gave more consistent results than visual estimation. Conclusions Substantial variability in Ki67 scoring was observed among some of the world's most experienced laboratories. Ki67 values and cutoffs for clinical decision-making cannot be transferred between laboratories without
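    The intraclass correlation coefficient used here to quantify reproducibility can be illustrated with a balanced one-way random-effects estimator. This is a generic ICC(1) sketch with illustrative names, not the random-effects model on log2-transformed measurements that the study actually fitted:

```python
import statistics

def icc_oneway(groups):
    """One-way random-effects ICC(1) for a balanced design.

    groups: list of cases, each a list of k replicate measurements."""
    k = len(groups[0])                      # replicates per case
    n = len(groups)                         # number of cases
    grand = statistics.mean(v for g in groups for v in g)
    means = [statistics.mean(g) for g in groups]
    # between-case and within-case mean squares
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

    A value of 1 indicates perfect agreement across replicates; values near 0 indicate that between-case differences are swamped by replicate scatter.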

  3. Reproducibility of liquid oxygen impact test results

    NASA Technical Reports Server (NTRS)

    Gayle, J. B.

    1975-01-01

    Results for 12,000 impacts on a wide range of materials were studied to determine the reproducibility of the liquid oxygen impact test method. Standard deviations representing the overall variability of results were in close agreement with the expected values for a binomial process. This indicates that the major source of variability is the go/no-go nature of the test method and that variations due to sampling and testing operations were not significant.
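    The comparison against a binomial process can be reproduced directly: for n independent go/no-go trials with reaction probability p, the expected standard deviation of the observed reaction frequency is sqrt(p(1-p)/n). A minimal sketch (names are illustrative):

```python
import math

def binomial_std(p, n):
    """Expected standard deviation of the observed reaction frequency
    in n independent go/no-go trials with reaction probability p."""
    return math.sqrt(p * (1.0 - p) / n)
```

    If the observed scatter across repeated test series matches this value, the go/no-go sampling itself, rather than the test operations, dominates the variability.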

  4. Tools and techniques for computational reproducibility.

    PubMed

    Piccolo, Stephen R; Frampton, Michael B

    2016-01-01

    When reporting research findings, scientists document the steps they followed so that others can verify and build upon the research. When those steps have been described in sufficient detail that others can retrace the steps and obtain similar results, the research is said to be reproducible. Computers play a vital role in many research disciplines and present both opportunities and challenges for reproducibility. Computers can be programmed to execute analysis tasks, and those programs can be repeated and shared with others. The deterministic nature of most computer programs means that the same analysis tasks, applied to the same data, will often produce the same outputs. However, in practice, computational findings often cannot be reproduced because of complexities in how software is packaged, installed, and executed, and because of limitations associated with how scientists document analysis steps. Many tools and techniques are available to help overcome these challenges; here we describe seven such strategies. With a broad scientific audience in mind, we describe the strengths and limitations of each approach, as well as the circumstances under which each might be applied. No single strategy is sufficient for every scenario; thus we emphasize that it is often useful to combine approaches. PMID:27401684

  5. A Framework for Reproducible Latent Fingerprint Enhancements.

    PubMed

    Carasso, Alfred S

    2014-01-01

    Photoshop processing of latent fingerprints is the preferred methodology among law enforcement forensic experts, but that approach is not fully reproducible and may lead to questionable enhancements. Alternative, independent, fully reproducible enhancements, using IDL Histogram Equalization and IDL Adaptive Histogram Equalization, can produce better-defined ridge structures, along with considerable background information. Applying a systematic slow-motion smoothing procedure to such IDL enhancements, based on the rapid FFT solution of a Lévy stable fractional diffusion equation, can attenuate background detail while preserving ridge information. The resulting smoothed latent print enhancements are comparable to, but distinct from, forensic Photoshop images suitable for input into automated fingerprint identification systems (AFIS). In addition, this progressive smoothing procedure can be reexamined by displaying the suite of progressively smoother IDL images. That suite can be stored, providing an audit trail that allows monitoring for possible loss of useful information in transit to the user-selected optimal image. Such independent and fully reproducible enhancements provide a valuable frame of reference that may be helpful in informing, complementing, and possibly validating the forensic Photoshop methodology. PMID:26601028

  6. A Framework for Reproducible Latent Fingerprint Enhancements

    PubMed Central

    Carasso, Alfred S.

    2014-01-01

    Photoshop processing of latent fingerprints is the preferred methodology among law enforcement forensic experts, but that approach is not fully reproducible and may lead to questionable enhancements. Alternative, independent, fully reproducible enhancements, using IDL Histogram Equalization and IDL Adaptive Histogram Equalization, can produce better-defined ridge structures, along with considerable background information. Applying a systematic slow-motion smoothing procedure to such IDL enhancements, based on the rapid FFT solution of a Lévy stable fractional diffusion equation, can attenuate background detail while preserving ridge information. The resulting smoothed latent print enhancements are comparable to, but distinct from, forensic Photoshop images suitable for input into automated fingerprint identification systems (AFIS). In addition, this progressive smoothing procedure can be reexamined by displaying the suite of progressively smoother IDL images. That suite can be stored, providing an audit trail that allows monitoring for possible loss of useful information in transit to the user-selected optimal image. Such independent and fully reproducible enhancements provide a valuable frame of reference that may be helpful in informing, complementing, and possibly validating the forensic Photoshop methodology. PMID:26601028

  7. X-ray extended-range technique for precision measurement of the X-ray mass attenuation coefficient and Im(f) for copper using synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Chantler, C. T.; Tran, C. Q.; Paterson, D.; Cookson, D.; Barnea, Z.

    2001-08-01

    We reconsider the long-standing problem of accurate measurement of atomic form factors for fundamental and applied problems. We discuss the X-ray extended-range technique for accurate measurement of the mass attenuation coefficient and the imaginary component of the atomic form factor. Novelties of this approach include the use of a synchrotron with detector normalisation, the direct calibration of dominant systematics using multiple thicknesses, and measurement over wide energy ranges with a resulting improvement of accuracies by an order of magnitude. This new technique achieves accuracies of 0.27-0.5% and reproducibility of 0.02% for attenuation of copper from 8.84 to 20 keV, compared to accuracies of 10% using atomic vapours. This precision challenges available theoretical calculations. Discrepancies of 10% between current theory and experiments can now be addressed.
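    A mass attenuation coefficient is recovered from a transmission measurement through the Beer-Lambert law, I = I0 exp(-(mu/rho) rho t). A minimal sketch of that inversion (function and variable names are illustrative; the paper's actual analysis involves detector normalisation and systematic corrections far beyond this):

```python
import math

def mass_attenuation(i0, i, density, thickness):
    """Mass attenuation coefficient mu/rho from the Beer-Lambert law
    I = I0 * exp(-(mu/rho) * density * thickness)."""
    return math.log(i0 / i) / (density * thickness)
```

    Measuring multiple sample thicknesses at each energy, as the authors do, lets the dominant systematics (e.g. the effective I0) be calibrated out rather than assumed.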

  8. Precision CW laser automatic tracking system investigated

    NASA Technical Reports Server (NTRS)

    Lang, K. T.; Lucy, R. F.; Mcgann, E. J.; Peters, C. J.

    1966-01-01

    A precision laser tracker capable of tracking a low-acceleration target to an accuracy of about 20 microradians rms is being constructed and tested. This laser tracker has the advantage of discriminating against other optical sources and the capability of simultaneously measuring range.

  9. Using satellite data to increase accuracy of PMF calculations

    SciTech Connect

    Mettel, M.C.

    1992-03-01

    The accuracy of a flood severity estimate depends on the data used. The more detailed and precise the data, the more accurate the estimate. Earth observation satellites gather detailed data for determining the probable maximum flood at hydropower projects.

  10. 10 CFR 95.43 - Authority to reproduce.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... detects the unauthorized reproduction of classified documents is encouraged. (b) Unless restricted by the CSA, Secret and Confidential documents may be reproduced. Reproduced copies of classified...

  11. 10 CFR 95.43 - Authority to reproduce.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... detects the unauthorized reproduction of classified documents is encouraged. (b) Unless restricted by the CSA, Secret and Confidential documents may be reproduced. Reproduced copies of classified...

  12. 10 CFR 95.43 - Authority to reproduce.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... detects the unauthorized reproduction of classified documents is encouraged. (b) Unless restricted by the CSA, Secret and Confidential documents may be reproduced. Reproduced copies of classified...

  13. 10 CFR 95.43 - Authority to reproduce.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... detects the unauthorized reproduction of classified documents is encouraged. (b) Unless restricted by the CSA, Secret and Confidential documents may be reproduced. Reproduced copies of classified...

  14. The Seasat Precision Orbit Determination Experiment

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Born, G. H.

    1980-01-01

    The objectives and conclusions reached during the Seasat Precision Orbit Determination Experiment are discussed. It is noted that the activities of the experiment team included extensive software calibration and validation and an intense effort to validate and improve the dynamic models which describe the satellite's motion. Significant improvement in the gravitational model was obtained during the experiment, and it is pointed out that the current accuracy of the Seasat altitude ephemeris is 1.5 m rms. An altitude ephemeris for the Seasat spacecraft with an accuracy of 0.5 m rms is seen as possible with further improvements in the geopotential, atmospheric drag, and solar radiation pressure models. It is concluded that since altimetry missions with a 2-cm precision altimeter are contemplated, the precision orbit determination effort initiated under the Seasat Project must be continued and expanded.

  15. Precision performance lamp technology

    NASA Astrophysics Data System (ADS)

    Bell, Dean A.; Kiesa, James E.; Dean, Raymond A.

    1997-09-01

    A principal function of a lamp is to produce light output with a designated spectrum, intensity, and/or geometric radiation pattern. The function of a precision performance lamp is to go beyond these parameters to precise repeatability of performance. All lamps are not equal: incandescent lamps range from the vacuum incandescent indicator lamp to the precision lamp of a blood analyzer. In the past, a precision lamp was described in terms of wattage, light center length (LCL), filament position, and/or spot alignment. This paper presents a new view of precision lamps through the discussion of a new segment of lamp design, which we term precision performance lamps. The definition of a precision performance lamp includes (must include) the factors of a precision lamp; what makes a precision lamp a precision performance lamp is the manner in which the design factors of amperage, mscp (mean spherical candlepower), efficacy (lumens/watt), and life are considered not individually but collectively. There is a statistical bias in a precision performance lamp for each of these factors, taken individually and as a whole. When properly considered, the results can be dramatic for the system design engineer, the system production manager, and the system end user. It can be shown that for the lamp user, the use of precision performance lamps can translate to: (1) ease of system design, (2) simplification of electronics, (3) superior signal-to-noise ratios, (4) higher manufacturing yields, (5) lower system costs, and (6) better product performance. These factors are described along with their interdependent relationships, and it is shown statistically how the benefits listed above are achievable. Examples are provided to illustrate how proper attention to precision performance lamp characteristics aids system product design and manufacturing, enabling more market-acceptable products.

  16. Precision optical metrology without lasers

    NASA Astrophysics Data System (ADS)

    Bergmann, Ralf B.; Burke, Jan; Falldorf, Claas

    2015-07-01

    Optical metrology is a key technique when it comes to precise and fast measurement with a resolution down to the micrometer or even nanometer regime. The choice of a particular optical metrology technique and the quality of results depend on sample parameters such as size, geometry and surface roughness as well as user requirements such as resolution, measurement time and robustness. Interferometry-based techniques are well known for their low measurement uncertainty in the nm range, but usually require careful isolation against vibration and a laser source that often needs shielding for reasons of eye safety. In this paper, we concentrate on high-precision optical metrology without lasers, using the gradient-based measurement technique of deflectometry and the finite-difference-based technique of shear interferometry. Careful calibration of deflectometry systems allows one to investigate virtually all kinds of reflecting surfaces, including aspheres and free-form surfaces, with measurement uncertainties below the µm level. Computational Shear Interferometry (CoSI) combines interferometric accuracy with the possibility of using inexpensive, eye-safe, low-brilliance light sources such as fiber-coupled LEDs or even liquid crystal displays. We use CoSI, for example, for quantitative phase-contrast imaging in microscopy. We highlight the advantages of both methods, discuss their transfer functions and present results on the precision of both techniques.

  17. Towards reproducible, scalable lateral molecular electronic devices

    NASA Astrophysics Data System (ADS)

    Durkan, Colm; Zhang, Qian

    2014-08-01

    An approach to reproducibly fabricate molecular electronic devices is presented. Lateral nanometer-scale gaps with high yield are formed in Au/Pd nanowires by a combination of electromigration and Joule-heating-induced thermomechanical stress. The resulting nanogap devices are used to measure the electrical properties of small numbers of two different molecular species with different end-groups, namely 1,4-butane dithiol and 1,5-diamino-2-methylpentane. Fluctuations in the current reveal that in the case of the dithiol molecule devices, individual molecules conduct intermittently, with the fluctuations becoming more pronounced at larger biases.

  18. Queer nuclear families? Reproducing and transgressing heteronormativity.

    PubMed

    Folgerø, Tor

    2008-01-01

    During the past decade the public debate on gay and lesbian adoptive rights has been extensive in the Norwegian media. The debate illustrates how women and men planning to raise children in homosexual family constellations challenge prevailing cultural norms and existing concepts of kinship and family. The article discusses how lesbian mothers and gay fathers understand and redefine their own family practices. An essential point in this article is the fundamental ambiguity in these families' accounts of themselves: how they simultaneously transgress and reproduce heteronormative assumptions about childhood, fatherhood, motherhood, family and kinship. PMID:18771116

  19. Open and reproducible global land use classification

    NASA Astrophysics Data System (ADS)

    Nüst, Daniel; Václavík, Tomáš; Pross, Benjamin

    2015-04-01

    Researchers led by the Helmholtz Centre for Environmental Research (UFZ) developed a new world map of land use systems based on over 30 diverse indicators (http://geoportal.glues.geo.tu-dresden.de/stories/landsystemarchetypes.html) of land use intensity, climate and environmental and socioeconomic factors. They identified twelve land system archetypes (LSA) using a data-driven classification algorithm (self-organizing maps) to assess global impacts of land use on the environment, and found unexpected similarities across global regions. We present how the algorithm behind this analysis can be published as an executable web process using 52°North WPS4R (https://wiki.52north.org/bin/view/Geostatistics/WPS4R) within the GLUES project (http://modul-a.nachhaltiges-landmanagement.de/en/scientific-coordination-glues/). WPS4R is an open source collaboration platform for researchers, analysts and software developers to publish R scripts (http://www.r-project.org/) as a geo-enabled OGC Web Processing Service (WPS) process. The interoperable interface to call the geoprocess allows both reproducibility of the analysis and integration of user data without knowledge about web services or classification algorithms. The open platform allows everybody to replicate the analysis in their own environments. The LSA WPS process has several input parameters, which can be changed via a simple web interface. The input parameters are used to configure both the WPS environment and the LSA algorithm itself. The encapsulation as a web process allows integration of non-public datasets, while at the same time the publication requires a well-defined documentation of the analysis. We demonstrate this platform specifically to domain scientists and show how reproducibility and open source publication of analyses can be enhanced. We also discuss future extensions of the reproducible land use classification, such as the possibility for users to enter their own areas of interest to the system and

  20. Towards reproducible, scalable lateral molecular electronic devices

    SciTech Connect

    Durkan, Colm Zhang, Qian

    2014-08-25

    An approach to reproducibly fabricate molecular electronic devices is presented. Lateral nanometer-scale gaps with high yield are formed in Au/Pd nanowires by a combination of electromigration and Joule-heating-induced thermomechanical stress. The resulting nanogap devices are used to measure the electrical properties of small numbers of two different molecular species with different end-groups, namely 1,4-butane dithiol and 1,5-diamino-2-methylpentane. Fluctuations in the current reveal that in the case of the dithiol molecule devices, individual molecules conduct intermittently, with the fluctuations becoming more pronounced at larger biases.

  1. Advanced irrigation engineering: Precision and Precise

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Irrigation advances in precision irrigation (PI) or site-specific irrigation (SSI) have been considerable in research; however commercialization lags. A primary necessity for it is variability in soil texture that affects soil water holding capacity and crop yield. Basically, SSI/PI uses variable ra...

  2. Advanced irrigation engineering: Precision and Precise

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Irrigation advances in precision irrigation (PI) or site specific irrigation (SSI) have been considerable in research; however commercialization lags. A primary necessity for PI/SSI is variability in soil texture that affects soil water holding capacity and crop yield. Basically, SSI/PI uses variabl...

  3. Precision aerial application for site-specific rice crop management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture includes different technologies that allow agricultural professionals to use information management tools to optimize agricultural production. The new technologies allow aerial applicators to improve application accuracy and efficiency, which saves time and money for...

  4. Reproducibility Data on SUMMiT

    SciTech Connect

    Irwin, Lloyd; Jakubczak, Jay; Limary, Siv; McBrayer, John; Montague, Stephen; Smith, James; Sniegowski, Jeffry; Stewart, Harold; de Boer, Maarten

    1999-07-16

    SUMMiT (Sandia Ultra-planar Multi-level MEMS Technology) at the Sandia National Laboratories' MDL (Microelectronics Development Laboratory) is a standardized MEMS (Microelectromechanical Systems) technology that allows designers to fabricate concept prototypes. This technology provides four polysilicon layers plus three sacrificial oxide layers (with the third oxide layer being planarized) to enable fabrication of complex mechanical systems-on-a-chip. Quantified reproducibility of the SUMMiT process is important for process engineers as well as designers. Summary statistics for critical MEMS technology parameters such as film thickness, line width, and sheet resistance will be reported for the SUMMiT process. Additionally, data from Van der Pauw test structures will be presented. Data on film thickness, film uniformity and critical dimensions of etched line widths are collected from both process and monitor wafers during manufacturing using film thickness metrology tools and SEM tools. A standardized diagnostic module is included in each SUMMiT run to obtain post-processing parametric data to monitor run-to-run reproducibility, such as Van der Pauw structures for measuring sheet resistance. This characterization of the SUMMiT process enables design for manufacturability in the SUMMiT technology.

  5. Response to Comment on "Estimating the reproducibility of psychological science".

    PubMed

    Anderson, Christopher J; Bahník, Štěpán; Barnett-Cowan, Michael; Bosco, Frank A; Chandler, Jesse; Chartier, Christopher R; Cheung, Felix; Christopherson, Cody D; Cordes, Andreas; Cremata, Edward J; Della Penna, Nicolas; Estel, Vivien; Fedor, Anna; Fitneva, Stanka A; Frank, Michael C; Grange, James A; Hartshorne, Joshua K; Hasselman, Fred; Henninger, Felix; van der Hulst, Marije; Jonas, Kai J; Lai, Calvin K; Levitan, Carmel A; Miller, Jeremy K; Moore, Katherine S; Meixner, Johannes M; Munafò, Marcus R; Neijenhuijs, Koen I; Nilsonne, Gustav; Nosek, Brian A; Plessow, Franziska; Prenoveau, Jason M; Ricker, Ashley A; Schmidt, Kathleen; Spies, Jeffrey R; Stieger, Stefan; Strohminger, Nina; Sullivan, Gavin B; van Aert, Robbie C M; van Assen, Marcel A L M; Vanpaemel, Wolf; Vianello, Michelangelo; Voracek, Martin; Zuni, Kellylynn

    2016-03-01

    Gilbert et al. conclude that evidence from the Open Science Collaboration's Reproducibility Project: Psychology indicates high reproducibility, given the study methodology. Their very optimistic assessment is limited by statistical misconceptions and by causal inferences from selectively interpreted, correlational data. Using the Reproducibility Project: Psychology data, both optimistic and pessimistic conclusions about reproducibility are possible, and neither are yet warranted. PMID:26941312

  6. System and method for high precision isotope ratio destructive analysis

    DOEpatents

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  7. Optical Fabrication By Precision Electroform

    NASA Astrophysics Data System (ADS)

    George, Ronald W.; Michaud, Lawrence L.

    1987-01-01

    The basic electroforming process exactly reproduces finely finished surface details from a master mold or mandrel. The process promises high potential for fabricating imaging-quality optical components. This requires, however, that the electrodeposition be nearly stress free to attain accuracy within fractions of a wavelength (1.06 µm) of light. Prior to this work, this level of accuracy had never been accomplished. This paper presents the advances made to the electroforming method and process that enable routine production of imaging-quality nickel metal mirrors. Work to date includes the electroforming of self-aligning two-mirror telescopes, the development of a large electroforming workstation to produce several mirrors simultaneously, and the development of a process for electroforming secondary mandrels. A generic process overview is presented along with opto-mechanical testing and results. Also included is a description of the general computer-controlled closed-loop process (Martin Marietta U.S. Patents #4,647,365 and #4,648,944). The work described was performed at Martin Marietta Corporation (Orlando), with the majority conducted under contract DAAHO1-85-C-1072 for the U.S. Army Missile Command, Redstone Arsenal, August 1985 through August 1987.

  8. Reproducible and deterministic production of aspheres

    NASA Astrophysics Data System (ADS)

    Leitz, Ernst Michael; Stroh, Carsten; Schwalb, Fabian

    2015-10-01

    Aspheric lenses are ground in a single-point cutting mode. Subsequently, different iterative polishing methods are applied, followed by aberration measurements on external metrology instruments. For economical production, metrology and correction steps need to be reduced; more deterministic grinding and polishing is mandatory. Single-point grinding is a path-controlled process. The quality of a ground asphere is mainly influenced by the accuracy of the machine, so machine improvements must focus on path accuracy and thermal expansion. Optimized design, materials and thermal management reduce thermal expansion. The path accuracy can be improved using ISO 230-2 standardized measurements: repeated interferometric measurements over the total travel of all CNC axes in both directions are recorded, and position deviations evaluated in correction tables improve the path accuracy and thus that of the ground surface. Aspheric polishing using a sub-aperture flexible polishing tool is a dwell-time-controlled process. For plano and spherical polishing, the amount of material removed during polishing is proportional to pressure, relative velocity and time (Preston). For the use of flexible tools on aspheres or freeform surfaces, additional non-linear components are necessary. Satisloh ADAPT calculates a predicted removal function from lens geometry, tool geometry and process parameters with FEM. Additionally, the tool's local removal characteristic is determined in a simple test: by oscillating the tool on a plano or spherical sample of the same lens material, a trench is created, and its 3-D profile is measured to calibrate the removal simulation. Remaining aberrations of the desired lens shape can thus be predicted, reducing iteration and metrology steps.
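    Dwell-time-controlled polishing of the kind described predicts removal as the convolution of a tool influence function with a dwell-time map. A minimal 1-D sketch of that idea (illustrative only; Satisloh ADAPT's FEM-based prediction is far more involved):

```python
def dwell_removal(dwell, influence):
    """Predicted removal profile: discrete convolution of a dwell-time
    map with a tool influence (removal-per-unit-time) function."""
    out = [0.0] * (len(dwell) + len(influence) - 1)
    for i, t in enumerate(dwell):
        for j, r in enumerate(influence):
            out[i + j] += t * r
    return out
```

    Correction polishing then amounts to solving the inverse problem: choosing the dwell-time map so that the predicted removal matches the measured aberration map.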

  9. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  10. Metrology study of high precision mm parts made by the deep x-ray lithography (LIGA) technique

    NASA Astrophysics Data System (ADS)

    Mäder, Olaf; Meyer, Pascal; Saile, Volker; Schulz, Joachim

    2009-02-01

    Microcomponents are increasingly applied in industrial products, e.g. smallest gears, springs or the watch industry. Apart from their small dimensions, such components are characterized by a high contour accuracy. Industry requires the tolerances to be in the µm range. Measurement of lateral dimensions in the mm range with submicrometer accuracy and precision, however, results in high requirements on measurement technology. The relevance of this problem is illustrated by the fact that the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) has launched the Collaborative Research Center 1159 on 'New Strategies of Measurement and Inspection for the Production of Microsystems and Nanostructures'. The Institut für Mikrostrukturtechnik, Karlsruhe (Institute of Microstructure Technology, Karlsruhe), produces microstructures by means of the LIGA technique (German acronym for lithography, electrodeposition, molding). Presently, a coordinate measurement machine equipped with an optical fiber probe to measure these microstructures is being tested. This paper will particularly focus on the precision and accuracy of the machine. The rules of measurement system analysis will be applied for this purpose. Following the elimination of the systematic error, reproducibility of deep-etch x-ray lithography will be highlighted using the LIGA production of gold gears as an example.

  11. Improving the precision matrix for precision cosmology

    NASA Astrophysics Data System (ADS)

    Paz, Dante J.; Sánchez, Ariel G.

    2015-12-01

    The estimation of cosmological constraints from observations of the large-scale structure of the Universe, such as the power spectrum or the correlation function, requires the knowledge of the inverse of the associated covariance matrix, namely the precision matrix, Ψ. In most analyses, Ψ is estimated from a limited set of mock catalogues. Depending on how many mocks are used, this estimation has an associated error which must be propagated into the final cosmological constraints. For future surveys such as Euclid and the Dark Energy Spectroscopic Instrument, the control of this additional uncertainty requires a prohibitively large number of mock catalogues. In this work, we test a novel technique for the estimation of the precision matrix, the covariance tapering method, in the context of baryon acoustic oscillation measurements. Even though this technique was originally devised as a way to speed up maximum likelihood estimations, our results show that it also reduces the impact of noisy precision matrix estimates on the derived confidence intervals, without introducing biases on the target parameters. The application of this technique can help future surveys to reach their true constraining power using a significantly smaller number of mock catalogues.
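    Covariance tapering damps the long-range entries of a noisy covariance estimate before it is inverted to form the precision matrix. A minimal sketch with a hypothetical linear taper (published applications use smooth, compactly supported taper functions such as Wendland's, not this simple ramp):

```python
def taper_covariance(cov, taper_len):
    """Elementwise product of a covariance matrix with a taper that
    decays linearly to zero for separations |i - j| >= taper_len."""
    n = len(cov)
    return [[cov[i][j] * max(0.0, 1.0 - abs(i - j) / taper_len)
             for j in range(n)] for i in range(n)]
```

    Because distant bins contribute mostly noise to the sample covariance, suppressing them before inversion reduces the scatter of the estimated precision matrix for a fixed number of mocks.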

  12. Precision Optics Curriculum.

    ERIC Educational Resources Information Center

    Reid, Robert L.; And Others

    This guide outlines the competency-based, two-year precision optics curriculum that the American Precision Optics Manufacturers Association has proposed to fill the void that it suggests will soon exist as many of the master opticians currently employed retire. The model, which closely resembles the old European apprenticeship model, calls for 300…

  13. PSYCHOLOGY. Estimating the reproducibility of psychological science.

    PubMed

    2015-08-28

    Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams. PMID:26315443
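    The criterion "47% of original effect sizes were in the 95% confidence interval of the replication effect size" can be made concrete with a small sketch. The normal approximation and the effect-size numbers below are illustrative assumptions, not values from the study.

    ```python
    def original_in_replication_ci(original_effect, replication_effect, replication_se):
        """True if the original effect size lies inside the replication's
        95% confidence interval (normal approximation)."""
        half_width = 1.96 * replication_se
        return (replication_effect - half_width
                <= original_effect
                <= replication_effect + half_width)

    # Hypothetical values: original d = 0.60, replication d = 0.30 with SE 0.12,
    # mimicking the reported halving of effect magnitudes.
    print(original_in_replication_ci(0.60, 0.30, 0.12))  # False
    print(original_in_replication_ci(0.40, 0.30, 0.12))  # True
    ```
    
    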

  14. Is Grannum grading of the placenta reproducible?

    NASA Astrophysics Data System (ADS)

    Moran, Mary; Ryan, John; Brennan, Patrick C.; Higgins, Mary; McAuliffe, Fionnuala M.

    2009-02-01

    Current ultrasound assessment of placental calcification relies on Grannum grading. The aim of this study was to assess whether this method is reproducible by measuring inter- and intra-observer variation in grading placental images under strictly controlled viewing conditions. Thirty placental images were acquired and digitally saved. Five experienced sonographers independently graded the images on two separate occasions. In order to eliminate any technological factors which could affect data reliability and consistency, all observers reviewed the images at the same time. To optimise viewing conditions, ambient lighting was maintained between 25 and 40 lux, with monitors calibrated to the GSDF standard to ensure consistent brightness and contrast. Kappa (κ) analysis of the grades assigned was used to measure inter- and intra-observer reliability. Intra-observer agreement had a moderate mean κ-value of 0.55, with individual comparisons ranging from 0.30 to 0.86. Two images saved from the same patient, during the same scan, were each graded as I, II and III by the same observer. A mean κ-value of 0.30 (range 0.13 to 0.55) indicated fair inter-observer agreement over the two occasions, and only one image was graded consistently the same by all five observers. The study findings confirmed the lack of reproducibility associated with Grannum grading of the placenta despite optimal viewing conditions and highlighted the need for new methods of assessing placental health in order to improve neonatal outcomes. Alternative methods for quantifying placental calcification, such as a software-based technique and 3D ultrasound assessment, need to be explored.
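    Observer agreement of the kind reported here is measured with Cohen's kappa, which discounts the agreement expected by chance. A minimal implementation follows; the grade sequences are hypothetical, not the study's data.

    ```python
    from collections import Counter

    def cohen_kappa(rater_a, rater_b):
        """Cohen's kappa for two sets of categorical ratings of the same items:
        (observed agreement - chance agreement) / (1 - chance agreement)."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a = Counter(rater_a)
        freq_b = Counter(rater_b)
        expected = sum(freq_a[g] * freq_b[g] for g in freq_a) / (n * n)
        return (observed - expected) / (1.0 - expected)

    # Hypothetical Grannum grades (0-3) for ten images on two occasions.
    first  = [0, 1, 1, 2, 3, 2, 1, 0, 2, 3]
    second = [0, 1, 2, 2, 3, 1, 1, 0, 2, 3]
    print(round(cohen_kappa(first, second), 2))  # 0.73
    ```
    
    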

  15. A 3-D Multilateration: A Precision Geodetic Measurement System

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Fliegel, H. F.; Jaffe, R. M.; Muller, P. M.; Ong, K. M.; Vonroos, O. H.

    1972-01-01

    A system was designed with the capability of determining 1-cm accuracy station positions in three dimensions using pulsed laser earth satellite tracking stations coupled with strictly geometric data reduction. With this high accuracy, several crucial geodetic applications become possible, including earthquake hazard assessment, precision surveying, plate tectonics, and orbit determination.

  16. Precision Spectroscopy of Atomic Hydrogen

    NASA Astrophysics Data System (ADS)

    Beyer, A.; Parthey, Ch G.; Kolachevsky, N.; Alnis, J.; Khabarova, K.; Pohl, R.; Peters, E.; Yost, D. C.; Matveev, A.; Predehl, K.; Droste, S.; Wilken, T.; Holzwarth, R.; Hänsch, T. W.; Abgrall, M.; Rovera, D.; Salomon, Ch; Laurent, Ph; Udem, Th

    2013-12-01

    Precise determinations of transition frequencies of simple atomic systems are required for a number of fundamental applications such as tests of quantum electrodynamics (QED), the determination of fundamental constants and nuclear charge radii. The sharpest transition in atomic hydrogen occurs between the metastable 2S state and the 1S ground state. Its transition frequency has now been measured with almost 15 digits of accuracy using an optical frequency comb and a cesium atomic clock as a reference [1]. A recent measurement of the 2S - 2P3/2 transition frequency in muonic hydrogen is in significant contradiction to the hydrogen data if QED calculations are assumed to be correct [2, 3]. We hope to contribute to this so-called "proton size puzzle" by providing additional experimental input from hydrogen spectroscopy.

  17. System for precise position registration

    DOEpatents

    Sundelin, Ronald M.; Wang, Tong

    2005-11-22

    An apparatus enabling the accurate recovery of a precise position, such as for reacquisition of a microscopic spot or feature 0.1 mm or less in size, on broad-area surfaces after non-in situ processing. The apparatus includes a sample and sample holder. The sample holder includes a base and three support posts. Two of the support posts interact with a cylindrical hole and a U-groove in the sample to establish the location of one point on the sample and a line through the sample. Simultaneous contact of the third support post with the surface of the sample defines a plane through the sample. All points of the sample are therefore uniquely defined by the sample and sample holder. The position registration system of the current invention provides accuracy, as measured in x, y repeatability, of at least 140 µm.

  18. Cosputtered composition-spread reproducibility established by high-throughput x-ray fluorescence

    SciTech Connect

    Gregoire, John M.; Dale, Darren; Kazimirov, Alexander; DiSalvo, Francis J.; Dover, R. Bruce van

    2010-09-15

    We describe the characterization of sputtered yttria-zirconia composition spread thin films by x-ray fluorescence (XRF). We also discuss our automated analysis of the XRF data, which was collected in a high throughput experiment at the Cornell High Energy Synchrotron Source. The results indicate that both the composition reproducibility of the library deposition and the composition measurements have a precision of better than 1 atomic percent.

  19. Accuracy of analyses of microelectronics nanostructures in atom probe tomography

    NASA Astrophysics Data System (ADS)

    Vurpillot, F.; Rolland, N.; Estivill, R.; Duguay, S.; Blavette, D.

    2016-07-01

    The routine use of atom probe tomography (APT) as a nano-analysis microscope in the semiconductor industry requires the precise evaluation of the metrological parameters of this instrument (spatial accuracy, spatial precision, composition accuracy or composition precision). The spatial accuracy of this microscope is evaluated in this paper in the analysis of planar structures such as high-k metal gate stacks. It is shown both experimentally and theoretically that the in-depth accuracy of reconstructed APT images is perturbed when analyzing this structure, which is composed of an oxide layer of high electrical permittivity (a high-k dielectric) separating the metal gate from the semiconductor channel of a field-effect transistor. Large differences in the evaporation field between these layers (resulting from large differences in material properties) are the main sources of image distortion. An analytic model is used to interpret the inaccuracy in the depth reconstruction of these devices in APT.

  20. Precise determination of the open ocean 234U/238U composition

    NASA Astrophysics Data System (ADS)

    Andersen, M. B.; Stirling, C. H.; Zimmermann, B.; Halliday, A. N.

    2010-12-01

    Uranium has a long residence time in the open oceans, and therefore, its salinity-normalized U concentration and 234U/238U activity ratio (expressed herein as δ234U, the ‰ deviation from secular equilibrium) are assumed to be uniform. The marine 234U/238U activity ratio is currently in radioactive disequilibrium and shows a ~15% excess of 234U with respect to the secular equilibrium value due to continuous input from riverine sources. Knowledge of the marine δ234U, and how it has evolved through the Quaternary, is important for validating age accuracy in the U series dating of marine carbonates, which is increasingly relied upon for providing a chronological basis in paleoclimate research. However, accurate and precise measurements of δ234U are technically difficult. Thus, existing compilations of the open ocean δ234U value vary by up to ~10‰, and the assumed uniformity in the oceanic δ234U remains to be confirmed. Using MC-ICPMS techniques and a suite of multiple Faraday cups instead of the typical configurations based on a combined Faraday cup-multiplier array, a long-term reproducibility of better than ±0.3‰ (2σ) is achieved for δ234U measurements. Applying these very high precision techniques to open ocean seawater samples, an average δ234U of 146.8 ± 0.1‰ (2σm, n = 19) is obtained. These high-precision seawater measurements yield an external reproducibility of better than ±0.4‰ (2σ) and show that the open oceans have a uniform δ234U on the sub-‰ level. These new data constrain the vertical mixing time of the open oceans to less than 1000 years.
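    The δ234U notation used above is a simple transformation of the activity ratio; a one-line helper makes the arithmetic explicit. The 146.8‰ figure is the paper's reported open-ocean mean, corresponding to an activity ratio of about 1.1468.

    ```python
    def delta_234U(activity_ratio):
        """Per-mil deviation of the 234U/238U activity ratio from secular
        equilibrium (the ratio equals 1 exactly at equilibrium)."""
        return (activity_ratio - 1.0) * 1000.0

    print(round(delta_234U(1.1468), 1))  # 146.8
    print(delta_234U(1.0))               # 0.0 at secular equilibrium
    ```
    
    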

  1. Development of a facility for high-precision irradiation of cells with carbon ions

    SciTech Connect

    Goethem, Marc-Jan van; Niemantsverdriet, Maarten; Brandenburg, Sytze; Langendijk, Johannes A.; Coppes, Robert P.; Luijk, Peter van

    2011-01-15

    the irradiation of cell samples with the specified accuracy. Measurements of the transverse and longitudinal dose distribution showed that the dose variation over the sample volume was ±0.8% and ±0.7% in the lateral and longitudinal directions, respectively. The track-averaged LET of 132 ± 10 keV/µm and dose-averaged LET of 189 ± 15 keV/µm at the position of the sample were obtained from a GEANT4 simulation, which was validated experimentally. Three separately measured cell-survival curves yielded nearly identical results. Conclusions: With the new facility, high-precision carbon-ion irradiations of biological samples can be performed with highly reproducible results.

  2. Empirical Bayes for Group (DCM) Studies: A Reproducibility Study

    PubMed Central

    Litvak, Vladimir; Garrido, Marta; Zeidman, Peter; Friston, Karl

    2015-01-01

    This technical note addresses some key reproducibility issues in the dynamic causal modelling of group studies of event related potentials. Specifically, we address the reproducibility of Bayesian model comparison (and inferences about model parameters) from three important perspectives namely: (i) reproducibility with independent data (obtained by averaging over odd and even trials); (ii) reproducibility over formally distinct models (namely, classic ERP and canonical microcircuit or CMC models); and (iii) reproducibility over inversion schemes (inversion of the grand average and estimation of group effects using empirical Bayes). Our hope was to illustrate the degree of reproducibility one can expect from DCM when analysing different data, under different models with different analyses. PMID:26733846

  4. Interoceptive accuracy and panic.

    PubMed

    Zoellner, L A; Craske, M G

    1999-12-01

    Psychophysiological models of panic hypothesize that panickers focus attention on and become anxious about the physical sensations associated with panic. Attention on internal somatic cues has been labeled interoception. The present study examined the role of physiological arousal and subjective anxiety on interoceptive accuracy. Infrequent panickers and nonanxious participants participated in an initial baseline to examine overall interoceptive accuracy. Next, participants ingested caffeine, about which they received either safety or no safety information. Using a mental heartbeat tracking paradigm, participants' counts of their heartbeats during specific time intervals were coded based on polygraph measures. Infrequent panickers were more accurate in the perception of their heartbeats than nonanxious participants. Changes in physiological arousal were not associated with increased accuracy on the heartbeat perception task. However, higher levels of self-reported anxiety were associated with superior performance. PMID:10596462

  5. Test of CCD Precision Limits for Differential Photometry

    NASA Technical Reports Server (NTRS)

    Robinson, L. B.; Wei, M. Z.; Borucki, W. J.; Dunham, E. W.; Ford, C. H.; Granados, A. F.

    1995-01-01

    Results of tests to demonstrate the very high differential-photometric stability of CCD light sensors are presented. The measurements reported here demonstrate that in a controlled laboratory environment, a front-illuminated CCD can provide differential-photometric measurements with reproducible precision approaching one part in 10^5. Practical limitations to the precision of differential-photometric measurements with CCDs and implications for spaceborne applications are discussed.

  7. The reproducible radio outbursts of SS Cygni

    NASA Astrophysics Data System (ADS)

    Russell, T. D.; Miller-Jones, J. C. A.; Sivakoff, G. R.; Altamirano, D.; O'Brien, T. J.; Page, K. L.; Templeton, M. R.; Körding, E. G.; Knigge, C.; Rupen, M. P.; Fender, R. P.; Heinz, S.; Maitra, D.; Markoff, S.; Migliari, S.; Remillard, R. A.; Russell, D. M.; Sarazin, C. L.; Waagen, E. O.

    2016-08-01

    We present the results of our intensive radio observing campaign of the dwarf nova SS Cyg during its 2010 April outburst. We argue that the observed radio emission was produced by synchrotron emission from a transient radio jet. Comparing the radio light curves from previous and subsequent outbursts of this system (including high-resolution observations from outbursts in 2011 and 2012) shows that the typical long and short outbursts of this system exhibit reproducible radio outbursts that do not vary significantly between outbursts, which is consistent with the similarity of the observed optical, ultraviolet and X-ray light curves. Contemporaneous optical and X-ray observations show that the radio emission appears to have been triggered at the same time as the initial X-ray flare, which occurs as disk material first reaches the boundary layer. This raises the possibility that the boundary region may be involved in jet production in accreting white dwarf systems. Our high spatial resolution monitoring shows that the compact jet remained active throughout the outburst with no radio quenching.

  8. REPRODUCIBLE AND SHAREABLE QUANTIFICATIONS OF PATHOGENICITY

    PubMed Central

    Manrai, Arjun K; Wang, Brice L; Patel, Chirag J; Kohane, Isaac S

    2016-01-01

    There are now hundreds of thousands of pathogenicity assertions that relate genetic variation to disease, but most of this clinically utilized variation has no accepted quantitative disease risk estimate. Recent disease-specific studies have used control sequence data to reclassify large amounts of prior pathogenic variation, but there is a critical need to scale up both the pace and feasibility of such pathogenicity reassessments across human disease. In this manuscript we develop a shareable computational framework to quantify pathogenicity assertions. We release a reproducible “digital notebook” that integrates executable code, text annotations, and mathematical expressions in a freely accessible statistical environment. We extend previous disease-specific pathogenicity assessments to over 6,000 diseases and 160,000 assertions in the ClinVar database. Investigators can use this platform to prioritize variants for reassessment and tailor genetic model parameters (such as prevalence and heterogeneity) to expose the uncertainty underlying pathogenicity-based risk assessments. Finally, we release a website that links users to pathogenic variation for a queried disease, supporting literature, and implied disease risk calculations subject to user-defined and disease-specific genetic risk models in order to facilitate variant reassessments. PMID:26776189

  10. Precision Environmental Radiation Monitoring System

    SciTech Connect

    Vladimir Popov, Pavel Degtiarenko

    2010-07-01

    A new precision low-level environmental radiation monitoring system has been developed and tested at Jefferson Lab. The system provides environmental radiation measurements with an accuracy and stability of the order of 1 nGy/h in an hour, roughly corresponding to 1% of the natural cosmic background at sea level. An advanced electronic front-end has been designed and produced for use with industry-standard high-pressure ionization chamber detector hardware. A new highly sensitive readout circuit was designed to measure charge from the virtually suspended ion-collecting electrode of the ionization chamber. The new signal processing technique and dedicated data acquisition were tested together with the new readout. The system collects data on a remote Linux-operated workstation connected to the detectors over a standard telephone cable line. The data acquisition algorithm is built around a continuously running 24-bit resolution, 192 kHz sampling analog-to-digital converter. The major features of the design include extremely low leakage current in the input circuit, true charge-integrating operation, and relatively fast response to intermediate radiation changes. These features allow the device to operate as an environmental radiation monitor at the perimeters of radiation-generating installations in densely populated areas, as well as in other monitoring and security applications requiring high precision and long-term stability. Initial system evaluation results are presented.

  11. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning with 24 h of data is required for high-precision results; however, real-life situations do not always allow collecting 24 h of data. Thus repeated GPS surveys with 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers therefore pay attention to the accuracy of GPS positions and of the velocities estimated from short observation sessions. Recently some research groups have turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now mostly regional studies have been reported. In this study, we adopt a global approach and study various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of NASA/JPL's GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data; here, data from each month of the year, covering two years in succession, are used in the analysis. Our major conclusion is that a reformulation of GPS positioning accuracy is necessary when taking the seasonal effects into account, and the typical one-term accuracy formulation is expanded to a two-term one.
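    The expansion from a one-term to a two-term accuracy formulation can be sketched with a least-squares fit. The functional forms σ(T) = a/√T and σ(T) = a/√T + b, and the RMS values below, are assumptions for illustration, not the paper's data or final formula; the constant term b represents an error floor (e.g. a seasonal or annual-signal contribution) that does not average down with session length T.

    ```python
    import numpy as np

    # Hypothetical RMS position errors (mm) versus session length (hours),
    # loosely mimicking the behaviour of short PPP sessions.
    hours = np.array([2, 4, 6, 8, 10, 12, 24], dtype=float)
    rms = np.array([14.0, 9.5, 7.8, 6.9, 6.2, 5.8, 4.9])

    # One-term model: sigma(T) = a / sqrt(T).
    A1 = (1.0 / np.sqrt(hours))[:, None]
    a1, *_ = np.linalg.lstsq(A1, rms, rcond=None)

    # Two-term model: sigma(T) = a / sqrt(T) + b.
    A2 = np.column_stack([1.0 / np.sqrt(hours), np.ones_like(hours)])
    (a2, b2), *_ = np.linalg.lstsq(A2, rms, rcond=None)

    print(f"one-term: a = {a1[0]:.2f}")
    print(f"two-term: a = {a2:.2f}, floor b = {b2:.2f} mm")
    ```

    With data of this shape the two-term fit always achieves a residual at least as small as the one-term fit, and a positive b signals an accuracy floor that the one-term formula cannot represent.
    
    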

  12. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    A precision liquid level sensor utilizes a balanced bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line unbalance the bridge and create a voltage which is directly measurable across the bridge.

  13. Precision displacement reference system

    DOEpatents

    Bieg, Lothar F.; Dubois, Robert R.; Strother, Jerry D.

    2000-02-22

    A precision displacement reference system is described which enables real-time accountability of the displacement feedback systems applied to precision machine tools, positioning mechanisms, motion devices, and related operations. As independent measurements of tool location are taken by a displacement feedback system, a rotating reference disk compares feedback counts with the performed motion. These measurements are used to characterize and analyze real-time mechanical and control performance during operation.

  14. Consideration of shear modulus in biomechanical analysis of peri-implant jaw bone: accuracy verification using image-based multi-scale simulation.

    PubMed

    Matsunaga, Satoru; Naito, Hiroyoshi; Tamatsu, Yuichi; Takano, Naoki; Abe, Shinichi; Ide, Yoshinobu

    2013-01-01

    The aim of this study was to clarify the influence of shear modulus on the analytical accuracy in peri-implant jaw bone simulation. A 3D finite element (FE) model was prepared based on micro-CT data obtained from images of a jawbone containing implants. A precise model that closely reproduced the trabecular architecture, and equivalent models that assigned shear modulus values taking the trabecular architecture into account, were prepared. Displacement norms during loading were calculated, and the displacement error was evaluated. The model that assigned shear modulus values taking the trabecular architecture into account showed an analytical error of around 10-20% in the cancellous bone region, while in the model that used an incorrect shear modulus, the analytical error exceeded 40% in certain regions. The shear modulus should be evaluated precisely in addition to Young's modulus when considering the mechanics of peri-implant trabecular bone structure. PMID:23719004

  15. High-precision hydraulic Stewart platform

    NASA Astrophysics Data System (ADS)

    van Silfhout, Roelof G.

    1999-08-01

    We present a novel design for a Stewart platform (or hexapod), an apparatus which performs positioning tasks with high accuracy. The platform, which is supported by six hydraulic telescopic struts, provides six degrees of freedom with 1 μm resolution. Rotations about user defined pivot points can be specified for any axis of rotation with microradian accuracy. Motion of the platform is performed by changing the strut lengths. Servo systems set and maintain the length of the struts to high precision using proportional hydraulic valves and incremental encoders. The combination of hydraulic actuators and a design which is optimized in terms of mechanical stiffness enables the platform to manipulate loads of up to 20 kN. Sophisticated software allows direct six-axis positioning including true path control. Our platform is an ideal support structure for a large variety of scientific instruments that require a stable alignment base with high-precision motion.
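    The statement that motion is performed by changing the strut lengths corresponds to the standard inverse kinematics of a Stewart platform: for a desired pose, each strut length is simply the distance between its base and platform attachment points. A sketch follows with hypothetical geometry; the ring radii, anchor angles, and pose are assumptions, not the actual design.

    ```python
    import numpy as np

    def strut_lengths(base_pts, plat_pts, translation, rpy):
        """Inverse kinematics of a Stewart platform: strut lengths for a
        desired platform pose (translation vector + roll/pitch/yaw, radians)."""
        r, p, y = rpy
        Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
        Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
        Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
        R = Rz @ Ry @ Rx
        moved = (R @ plat_pts.T).T + translation  # platform anchors in world frame
        return np.linalg.norm(moved - base_pts, axis=1)

    # Hypothetical hexapod: anchors on two rings, platform ring rotated 30 deg.
    ang_b = np.deg2rad([0, 60, 120, 180, 240, 300])
    ang_p = np.deg2rad([30, 90, 150, 210, 270, 330])
    base = np.column_stack([0.5 * np.cos(ang_b), 0.5 * np.sin(ang_b), np.zeros(6)])
    plat = np.column_stack([0.3 * np.cos(ang_p), 0.3 * np.sin(ang_p), np.zeros(6)])

    # Neutral pose 0.4 m above the base: all six struts come out equal.
    lengths = strut_lengths(base, plat, np.array([0.0, 0.0, 0.4]), (0.0, 0.0, 0.0))
    print(np.round(lengths, 4))
    ```

    The servo loop described in the abstract then only has to regulate each strut to the length this computation prescribes.
    
    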

  16. Reproducibility of corneal astigmatism measurements with a hand held keratometer in preschool children.

    PubMed Central

    Harvey, E M; Miller, J M; Dobson, V

    1995-01-01

    AIMS--To evaluate the overall accuracy and reproducibility of the Alcon portable autokeratometer (PAK) measurements in infants and young children. METHODS--The accuracy of the Alcon PAK in measuring toric reference surfaces (1, 3, 5, and 7 D) under various suboptimal measurement conditions was assessed, and the reproducibility of PAK measurements of corneal astigmatism in newborn infants (n = 5), children (n = 19, age 3-5 years), and adults (n = 14) was evaluated. RESULTS--Measurements of toric reference surfaces indicated (a) no significant effect of distance (17-30 mm) on accuracy of measurements, (b) no systematic relation between amount of toricity and accuracy of measurements, (c) no systematic relation between angle of measurement and accuracy, (d) no difference in accuracy of measurements when the PAK is hand held in comparison with when it is mounted, (e) no difference in accuracy of measurements when axis of toricity is oriented obliquely than when it is oriented horizontally, with respect to the PAK, and (f) a small positive bias (+0.16 D) in measurement of spherical equivalent. The PAK did not prove useful for screening newborns. However, measurements were successfully obtained from 18/19 children and 14/14 adults. There was no significant difference in median measurement deviation (deviation of a subject's five measurements from his/her mean) between children (0.21 D) and adults (0.13 D). CONCLUSIONS--The PAK produces accurate measurements of surface curvature under a variety of suboptimal conditions. Variability of PAK measurements in preschool children is small enough to suggest that it would be useful for screening for corneal astigmatism in young children. PMID:8534668

  17. Accuracy of deception judgments.

    PubMed

    Bond, Charles F; DePaulo, Bella M

    2006-01-01

    We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others' deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature. PMID:16859438
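    The headline figures in this abstract are internally consistent: with equal numbers of lies and truths, the overall accuracy is the simple average of the two classification rates.

    ```python
    # Rates from the meta-analysis: 47% of lies judged deceptive,
    # 61% of truths judged nondeceptive.
    lie_accuracy = 0.47
    truth_accuracy = 0.61
    overall = (lie_accuracy + truth_accuracy) / 2
    print(f"{overall:.0%}")  # 54%
    ```
    
    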

  18. Haptic Guidance Needs to Be Intuitive Not Just Informative to Improve Human Motor Accuracy

    PubMed Central

    Mugge, Winfred; Kuling, Irene A.; Brenner, Eli; Smeets, Jeroen B. J.

    2016-01-01

    Humans make both random and systematic errors when reproducing learned movements. Intuitive haptic guidance that assists one to make the movements reduces such errors. Our study examined whether any additional haptic information about the location of the target reduces errors in a position reproduction task, or whether the haptic guidance needs to be assistive to do so. Holding a haptic device, subjects made reaches to visible targets without time constraints. They did so in a no-guidance condition, and in guidance conditions in which the direction of the force with respect to the target differed, but the force scaled with the distance to the target in the same way. We examined whether guidance forces directed towards the target would reduce subjects’ errors in reproducing a prior position to the same extent as do forces rotated by 90 degrees or 180 degrees, as it might because the forces provide the same information in all three cases. Without vision of the arm, both the accuracy and precision were significantly better with guidance directed towards the target than in all other conditions. The errors with rotated guidance did not differ from those without guidance. Not surprisingly, the movements tended to be faster when guidance forces directed the reaches to the target. This study shows that haptic guidance significantly improved motor performance when using it was intuitive, while non-intuitively presented information did not lead to any improvements and seemed to be ignored even in our simple paradigm with static targets and no time constraints. PMID:26982481

  19. Within-patient reproducibility of the aldosterone: renin ratio in primary aldosteronism.

    PubMed

    Rossi, Gian Paolo; Seccia, Teresa Maria; Palumbo, Gaetana; Belfiore, Anna; Bernini, Giampaolo; Caridi, Graziella; Desideri, Giovambattista; Fabris, Bruno; Ferri, Claudio; Giacchetti, Gilberta; Letizia, Claudio; Maccario, Mauro; Mallamaci, Francesca; Mannelli, Massimo; Patalano, Anna; Rizzoni, Damiano; Rossi, Ermanno; Pessina, Achille Cesare; Mantero, Franco

    2010-01-01

    The plasma aldosterone concentration:renin ratio (ARR) is widely used for the screening of primary aldosteronism, but its reproducibility is unknown. We therefore investigated the within-patient reproducibility of the ARR in a prospective multicenter study of consecutive hypertensive patients referred to specialized centers for hypertension in Italy. After the patients were carefully prepared from the pharmacological standpoint, the ARR was determined at baseline in 1136 patients and repeated after, on average, 4 weeks in the patients who initially had an ARR ≥40 and in 1 of every 4 of those with an ARR <40. The reproducibility of the ARR was assessed with Passing-Bablok and Deming regression, the coefficient of reproducibility, and Bland-Altman and Mountain plots. Within-patient ARR comparison was available in 268 patients, of whom 49 had an aldosterone-producing adenoma on the basis of the "4-corner criteria." The ARR showed a highly significant within-patient correlation (r=0.69; P<0.0001) and reproducibility. The Bland-Altman plot showed no proportional, magnitude-related, or absolute systematic error between the ARRs; moreover, only 7% of the values, that is, slightly more than what could be expected by chance, fell outside the 95% CI for the between-test difference. The accuracy of each ARR for pinpointing aldosterone-producing adenoma patients was approximately 80%. Thus, although it was performed under different conditions in a multicenter study, the ARR showed good within-patient reproducibility. Hence, contrary to the previously claimed poor reproducibility of the ARR, these data support its use for the screening of primary aldosteronism. PMID:19933925
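
The Bland-Altman comparison used in the study can be sketched in a few lines; the paired values below are hypothetical, purely to show the computation:

```python
from statistics import mean, stdev

def bland_altman(x, y):
    """Mean between-test difference (bias) and 95% limits of agreement
    for paired measurements x and y."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired ARR values (baseline vs. repeat); not study data.
baseline = [12, 45, 30, 62, 25, 48, 90, 15]
repeat = [14, 40, 33, 58, 28, 50, 85, 18]
bias, (low, high) = bland_altman(baseline, repeat)
# Pairs whose difference falls outside (low, high) would flag poor
# agreement; the study found only 7% of values outside these limits.
```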

  20. Precision-controlled elution of a 82Sr/82Rb generator for cardiac perfusion imaging with positron emission tomography.

    PubMed

    Klein, R; Adler, A; Beanlands, R S; Dekemp, R A

    2007-02-01

    A rubidium-82 ((82)Rb) elution system is described for use with positron emission tomography. Due to the short half-life of (82)Rb (76 s), the system physics must be modelled precisely to account for transport delay and the associated activity decay and dispersion. Saline flow is switched between a (82)Sr/(82)Rb generator and a bypass line to achieve a constant-activity elution of (82)Rb. Pulse width modulation (PWM) of a solenoid valve is compared to simple threshold control as a means to simulate a proportional valve. A predictive-corrective control (PCC) algorithm is developed which produces a constant-activity elution within the constraints of long feedback delay and short elution time. The system model parameters are adjusted through a self-tuning algorithm to minimize error versus the requested time-activity profile. The system is self-calibrating with 2.5% repeatability, independent of generator activity and elution flow rate. Accurate 30 s constant-activity elutions of 10-70% of the total generator activity are achieved using both control methods. The combined PWM-PCC method provides significant improvement in precision and accuracy of the requested elution profiles. The (82)Rb elution system produces accurate and reproducible constant-activity elution profiles of (82)Rb activity, independent of parent (82)Sr activity in the generator. More reproducible elution profiles may improve the quality of clinical and research PET perfusion studies using (82)Rb. PMID:17228112
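
The decay modelling such a system must perform can be sketched as follows; this is a simplified illustration using the 76 s half-life quoted in the abstract, ignoring the dispersion term the authors also account for:

```python
from math import exp, log

RB82_HALF_LIFE_S = 76.0             # 82Rb half-life from the abstract
LAMBDA = log(2) / RB82_HALF_LIFE_S  # decay constant, 1/s

def activity_after_delay(activity, delay_s):
    """Activity remaining after a transport delay of delay_s seconds
    (simple exponential decay; dispersion is ignored here)."""
    return activity * exp(-LAMBDA * delay_s)

# One half-life of transport delay halves the delivered activity.
remaining = activity_after_delay(100.0, 76.0)  # about 50.0
```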

  1. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√{N_sim} rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√{N_sim} limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
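
A toy two-variable illustration (ours, not the paper's method) of why known sparsity helps: when the true off-diagonal precision entry is zero, the corresponding entry of the sample precision matrix is pure sampling noise, so forcing it to zero removes that error entirely.

```python
import random

random.seed(0)  # deterministic toy data

# Two independent standard normals: the true 2x2 precision matrix is
# the identity, so its off-diagonal entry is exactly zero (sparse).
N = 500
xs = [random.gauss(0.0, 1.0) for _ in range(N)]
ys = [random.gauss(0.0, 1.0) for _ in range(N)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / (len(a) - 1)

cxx, cyy, cxy = cov(xs, xs), cov(ys, ys), cov(xs, ys)

# Sample precision matrix: analytic inverse of the 2x2 sample covariance.
det = cxx * cyy - cxy * cxy
pxx, pyy, pxy = cyy / det, cxx / det, -cxy / det

# The sparse estimate sets the known-zero entry to zero, eliminating
# the O(1/sqrt(N)) sampling noise in that term entirely.
pxy_sparse = 0.0
```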

  2. Precision Higgs Physics

    NASA Astrophysics Data System (ADS)

    Boughezal, Radja

    2015-04-01

    The future of the high energy physics program will increasingly rely upon precision studies looking for deviations from the Standard Model. Run I of the Large Hadron Collider (LHC) triumphantly discovered the long-awaited Higgs boson, and there is great hope in the particle physics community that this new state will open a portal onto a new theory of Nature at the smallest scales. A precision study of Higgs boson properties is needed in order to test whether this belief is true. New theoretical ideas and high-precision QCD tools are crucial to fulfill this goal. They become even more important as larger data sets from LHC Run II further reduce the experimental errors and theoretical uncertainties begin to dominate. In this talk, I will review recent progress in understanding Higgs properties, including the calculation of precision predictions needed to identify possible physics beyond the Standard Model in the Higgs sector. New ideas for measuring the Higgs couplings to light quarks as well as bounding the Higgs width in a model-independent way will be discussed. Precision predictions for Higgs production in association with jets and ongoing efforts to calculate the inclusive N3LO cross section will be reviewed.

  3. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-05-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√{N_sim} rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√{N_sim} limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.

  5. Precise Indoor Localization for Mobile Laser Scanner

    NASA Astrophysics Data System (ADS)

    Kaijaluoto, R.; Hyyppä, A.

    2015-05-01

    Accurate 3D data is of high importance for indoor modeling in various applications in construction, engineering and cultural heritage documentation. Because the lack of GNSS signals hampers the use of kinematic platforms indoors, terrestrial laser scanning (TLS) is currently the most accurate and precise method for collecting such data. However, owing to its static, single-viewpoint data collection, TLS requires excessive time and data redundancy to ensure integrity and coverage. Localization methods with affordable scanners can instead be used to solve the mobile platform pose problem. The aim of this study was to investigate what level of trajectory accuracy can be achieved with high-quality sensors and freely available state-of-the-art planar SLAM algorithms, and how well this trajectory translates to a point cloud collected with a secondary scanner. High-precision laser scanners were used with a novel method that combines the strengths of two SLAM algorithms into a functional approach for precise localization. We collected five datasets using the Slammer platform with two laser scanners and processed them with altogether 20 different parameter sets. The results were validated against a TLS reference. They show that increasing the scan frequency improves the trajectory, reaching RMSE levels of 20 mm for the best-performing parameter sets. Further analysis of the 3D point cloud showed good agreement with the TLS reference, with a positional RMSE of 17 mm. With precision scanners, the obtained point cloud provides a high level of detail for indoor modeling, with accuracies close to TLS at best and vastly improved data collection efficiency.
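
The positional RMSE used for validation in such studies can be computed as follows; the matched points here are invented for illustration:

```python
from math import sqrt

def rmse_3d(estimated, reference):
    """Positional RMSE between matched (x, y, z) points, in the same
    units as the input coordinates."""
    sq_errors = [
        sum((a - b) ** 2 for a, b in zip(p, q))
        for p, q in zip(estimated, reference)
    ]
    return sqrt(sum(sq_errors) / len(sq_errors))

# Invented matched trajectory points (metres); not the study's data.
est = [(0.00, 0.00, 0.0), (1.01, 0.02, 0.0), (2.02, -0.01, 0.0)]
ref = [(0.00, 0.00, 0.0), (1.00, 0.00, 0.0), (2.00, 0.00, 0.0)]
err_m = rmse_3d(est, ref)  # on the order of the 17-20 mm figures above
```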

  6. Ultra-precision: enabling our future.

    PubMed

    Shore, Paul; Morantz, Paul

    2012-08-28

    This paper provides a perspective on the development of ultra-precision technologies: what drove their evolution, and what do they now promise for the future as we face the consequences of consumption of the Earth's finite resources? Improved application of measurement is introduced as a major enabler of mass production, and its resultant impact on wealth generation is considered. The paper identifies the ambitions of the defence, automotive and microelectronics sectors as important drivers of improved manufacturing accuracy and ever smaller feature creation. It then describes how science fields such as astronomy have presented significant precision engineering challenges, illustrating how these fields have achieved unprecedented levels of accuracy, sensitivity and sheer scale. Notwithstanding their importance to scientific understanding, many science-driven ultra-precision technologies became key enablers of wealth generation and other improvements in well-being. Specific ultra-precision machine tools important to major astronomy programmes are discussed, as well as the way in which machine tools that subsequently evolved at the beginning of the twenty-first century now provide much wider benefits. PMID:22802499

  7. Semiautomated, Reproducible Batch Processing of Soy

    NASA Technical Reports Server (NTRS)

    Thoerne, Mary; Byford, Ivan W.; Chastain, Jack W.; Swango, Beverly E.

    2005-01-01

    A computer-controlled apparatus processes batches of soybeans into one or more of a variety of food products, under conditions that can be chosen by the user and reproduced from batch to batch. Examples of products include soy milk, tofu, okara (an insoluble protein and fiber byproduct of soy milk), and whey. Most processing steps take place without intervention by the user. This apparatus was developed for use in research on the processing of soy. It is also a prototype of other soy-processing apparatuses for research, industrial, and home use. Prior soy-processing equipment includes household devices that automatically produce soy milk but do not automatically produce tofu. The designs of prior soy-processing equipment require users to transfer intermediate solid soy products and press them manually and, hence, under conditions that are not consistent from batch to batch. Prior designs do not afford choices of processing conditions: Users cannot use previously developed soy-processing equipment to investigate the effects of variations of techniques used to produce soy milk (e.g., cold grinding, hot grinding, and pre-cook blanching) and of such process parameters as cooking times and temperatures, grinding times, soaking times and temperatures, rinsing conditions, and sizes of particles generated by grinding. In contrast, the present apparatus is amenable to such investigations. The apparatus (see figure) includes a processing tank and a jacketed holding or coagulation tank. The processing tank can be capped by either of two different heads and can contain either of two different insertable mesh baskets. The first head includes a grinding blade and heating elements. The second head includes an automated press piston. One mesh basket, designated the okara basket, has oblong holes with a size equivalent to about 40 mesh [40 openings per inch (16 openings per centimeter)]. The second mesh basket, designated the tofu basket, has holes of 70 mesh [70 openings per inch (28 openings per centimeter)].

  8. Compartmental bone morphometry in the mouse femur: reproducibility and resolution dependence of microtomographic measurements.

    PubMed

    Kohler, T; Beyeler, M; Webster, D; Müller, R

    2005-11-01

    Microcomputed tomography (microCT) is widely used for nondestructive bone phenotyping in small animals, especially in the mouse. Here, we investigated the reproducibility and resolution dependence of microCT analysis of microstructural parameters in three different compartments of the mouse femur. Reproducibility was assessed with respect to the precision error (PE%CV) and the intraclass correlation coefficient (ICC). We examined 14 left femurs isolated postmortem from two strains of mice (seven per group). Measurements and analyses were repeated five times on different days. In a second step, the analysis alone was repeated five times for a single measurement. Resolution dependence was assessed by high-resolution measurements (10 microm) in one strain and subsequent image degrading. Reproducibility was better in the full-bone compartment and in the cortical-bone compartment in the diaphysis (PE%CV = 0.06-2.16%) than in the trabecular compartment in the distal metaphysis (PE%CV = 0.59-5.24%). Nevertheless, the ICC (0.92-1.00) showed a very high reliability of the assessed parameters in all regions, indicating very small variances within repeated measurements compared to the population variances. Morphometric indices computed from lower- and higher-resolution images in general displayed only a weak resolution dependence and were highly correlated with each other (R2 = 0.91-0.99). The results show that parameters in the full and cortical compartments were very reproducible, whereas precision in the trabecular compartment was somewhat lower. Nevertheless, all compartmental analysis methods were very robust, as shown by the high ICC values, demonstrating high suitability for application in inbred strains, where the highest precision is needed due to small population variances. PMID:16283571
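
The precision error PE%CV is conventionally the root-mean-square coefficient of variation over specimens; a sketch with made-up repeat measurements (the RMS-CV convention follows Glueer et al., an assumption, since the abstract does not spell out the formula):

```python
from math import sqrt
from statistics import mean, stdev

def precision_error_cv(repeats_per_specimen):
    """Root-mean-square CV precision error (%) across specimens, each
    measured repeatedly (assumed RMS-CV convention)."""
    cvs = [100.0 * stdev(reps) / mean(reps) for reps in repeats_per_specimen]
    return sqrt(sum(cv * cv for cv in cvs) / len(cvs))

# Invented repeat measurements (e.g. bone volume fraction) for 3 femurs.
data = [
    [0.210, 0.212, 0.209, 0.211, 0.210],
    [0.180, 0.181, 0.179, 0.182, 0.180],
    [0.250, 0.249, 0.251, 0.250, 0.252],
]
pe = precision_error_cv(data)  # sub-1% for these tight repeats
```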

  9. How Physics Got Precise

    SciTech Connect

    Kleppner, Daniel

    2005-01-19

    Although the ancients knew the length of the year to about ten parts per million, it was not until the end of the 19th century that precision measurements came to play a defining role in physics. Eventually such measurements made it possible to replace human-made artifacts for the standards of length and time with natural standards. For a new generation of atomic clocks, time keeping could be so precise that the effects of the local gravitational potentials on the clock rates would be important. This would force us to re-introduce an artifact into the definition of the second - the location of the primary clock. I will describe some of the events in the history of precision measurements that have led us to this pleasing conundrum, and some of the unexpected uses of atomic clocks today.

  10. Precision gap particle separator

    DOEpatents

    Benett, William J.; Miles, Robin; Jones, II., Leslie M.; Stockton, Cheryl

    2004-06-08

    A system for separating particles entrained in a fluid includes a base with a first channel and a second channel. A precision gap connects the first channel and the second channel. The precision gap is of a size that allows small particles to pass from the first channel into the second channel and prevents large particles from passing from the first channel into the second channel. A cover is positioned over the base, the first channel, the precision gap, and the second channel. An input port directs the fluid containing the entrained particles into the first channel. An output port directs the large particles out of the first channel. A port connected to the second channel directs the small particles out of the second channel.

  11. Precision Muonium Spectroscopy

    NASA Astrophysics Data System (ADS)

    Jungmann, Klaus P.

    2016-09-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In particular ground state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s-2s transition in the hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements fundamental physical interactions, in particular quantum electrodynamics, can also be tested at highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium-antimuonium conversion for testing charged lepton number conservation and searches for possible antigravity of muons and dark matter.

  12. Research Reproducibility in Geosciences: Current Landscape, Practices and Perspectives

    NASA Astrophysics Data System (ADS)

    Yan, An

    2016-04-01

    Reproducibility of research can gauge the validity of its findings. Yet we currently lack an understanding of how much of a problem research reproducibility is in the geosciences. We conducted an online survey of faculty and graduate students in the geosciences and received 136 responses from research institutions and universities in the Americas, Asia, Europe, and other parts of the world. The survey examined (1) the current state of research reproducibility in the geosciences, by asking about researchers' experiences with unsuccessful replication work and the obstacles that led to their replication failures; (2) current reproducibility practices in the community, by asking what efforts researchers made to reproduce others' work and to make their own work reproducible, and what underlying factors contribute to irreproducibility; and (3) perspectives on reproducibility, by collecting researchers' thoughts and opinions on the issue. The survey results indicated that nearly 80% of respondents who had ever tried to reproduce a published study had failed at least once. Only one third of respondents received helpful feedback when they contacted the authors of a published study for data, code, or other information. The primary factors behind unsuccessful replication attempts were insufficiently detailed instructions in the published literature and inaccessibility of the data, code, and tools needed in the study. Our findings suggest a remarkable lack of research reproducibility in geoscience. Changing the incentive mechanisms in academia, and developing policies and tools that facilitate open data and code sharing, are promising ways for the geosciences community to alleviate this reproducibility problem.

  13. Towards Reproducible Descriptions of Neuronal Network Models

    PubMed Central

    Nordlie, Eilen; Gewaltig, Marc-Oliver; Plesser, Hans Ekkehard

    2009-01-01

    Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing—and thinking about—complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain. PMID:19662159

  14. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational orbit determination (OD) produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended missions are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit, where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full Sun. A series of improvements to LRO orbit determination is presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained-plane method for estimation. The analysis presented in this paper shows that the updated lunar gravity models improved accuracy in the frozen orbit and that the dynamic multi-plate area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provides benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement at the 100- to 250-meter level in definitive accuracy.

  15. Asymptotic accuracy of two-class discrimination

    SciTech Connect

    Ho, T.K.; Baird, H.S.

    1994-12-31

    Poor-quality (e.g., sparse or unrepresentative) training data is widely suspected to be one cause of the disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.

  16. Precision Heating Process

    NASA Technical Reports Server (NTRS)

    1992-01-01

    A heat sealing process was developed by SEBRA based on technology that originated in work with NASA's Jet Propulsion Laboratory. The project involved connecting and transferring blood and fluids between sterile plastic containers while maintaining a closed system. SEBRA markets the PIRF Process to manufacturers of medical catheters. It is a precisely controlled method of heating thermoplastic materials in a mold to form or weld catheters and other products. The process offers advantages in fast, precise welding or shape forming of catheters as well as applications in a variety of other industries.

  17. Precision manometer gauge

    DOEpatents

    McPherson, M.J.; Bellman, R.A.

    1982-09-27

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  18. Precision manometer gauge

    DOEpatents

    McPherson, Malcolm J.; Bellman, Robert A.

    1984-01-01

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  19. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth mass can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but also the full orbital parameters determined, allowing study of system stability in multiple-planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will also be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  20. Measurement accuracy and Cerenkov removal for high performance, high spatial resolution scintillation dosimetry

    SciTech Connect

    Archambault, Louis; Beddar, A. Sam; Gingras, Luc

    2006-01-15

    With highly conformal radiation therapy techniques such as intensity-modulated radiation therapy, radiosurgery, and tomotherapy becoming more common in clinical practice, the use of these narrow beams requires a higher level of precision in quality assurance and dosimetry. Plastic scintillators, with their water equivalence, energy independence, and dose rate linearity, have been shown to possess excellent qualities that suit the most complex and demanding radiation therapy treatment plans. The primary disadvantage of plastic scintillators is the presence of Cerenkov radiation generated in the light guide, which results in an undesired stem effect. Several techniques have been proposed to minimize this effect. In this study, we compared three such techniques (background subtraction, simple filtering, and chromatic removal) in terms of reproducibility and dose accuracy as gauges of their ability to remove the Cerenkov stem effect from the dose signal. The dosimeter used in this study comprised a 6-mm³ plastic scintillating fiber probe, an optical fiber, and a color charge-coupled device camera. The whole system was shown to be linear, and the total light collected by the camera was reproducible to within 0.31% for a 5-s integration time. Background subtraction and chromatic removal were both found to be suitable for precise dose evaluation, with average absolute dose discrepancies of 0.52% and 0.67%, respectively, from ion chamber values. Background subtraction required two optical fibers, but chromatic removal used only one, thereby preventing possible measurement artifacts when a strong dose gradient is perpendicular to the optical fiber. Our findings showed that a plastic scintillation dosimeter can be made free of the effect of Cerenkov radiation.
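
The chromatic-removal idea reduces to a 2x2 linear unmixing: each color channel records a different known mixture of scintillation and Cerenkov light, so the two amplitudes can be solved for. The spectral weights below are invented for illustration (in practice they are calibrated per system):

```python
def unmix(m_green, m_blue, s_green, s_blue, c_green, c_blue):
    """Solve the 2x2 system
        m_green = s * s_green + c * c_green
        m_blue  = s * s_blue  + c * c_blue
    for the scintillation amplitude s and Cerenkov amplitude c."""
    det = s_green * c_blue - s_blue * c_green
    s = (m_green * c_blue - m_blue * c_green) / det
    c = (s_green * m_blue - s_blue * m_green) / det
    return s, c

# Invented spectral weights: scintillation mostly green, Cerenkov
# weighted toward blue.
S_GREEN, S_BLUE = 0.9, 0.1
C_GREEN, C_BLUE = 0.4, 0.6

# Synthetic reading: 100 units of scintillation plus 30 of Cerenkov.
green = 100 * S_GREEN + 30 * C_GREEN
blue = 100 * S_BLUE + 30 * C_BLUE
s, c = unmix(green, blue, S_GREEN, S_BLUE, C_GREEN, C_BLUE)
# s recovers the Cerenkov-free scintillation (dose) signal.
```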

  1. The effect of a range of disinfectants on the dimensional accuracy of some impression materials.

    PubMed

    Jagger, D C; Al Jabra, O; Harrison, A; Vowles, R W; McNally, L

    2004-12-01

In this study, the dimensional accuracy of two model materials (dental stone and plaster of Paris), reproduced from three commonly used impression materials (alginate, polyether, and addition-cured silicone) retained by their adhesives in acrylic resin trays and exposed to four disinfectant solutions, was evaluated. Ninety casts were used to investigate the effect of the four disinfectants on the dimensional accuracy of alginate, polyether and addition-cured silicone impression material. For each impression material, 30 impressions were taken; half were poured in dental stone and half in plaster of Paris. The disinfectants used were Dimenol, Perform-ID, MD-520, and Haz-tabs. Measurements were carried out using a High Precision Reflex Microscope. For the alginate impressions, only those disinfected by 5-minute immersion in Haz-tabs solution and in full-strength MD-520 were not adversely affected by the disinfection treatment. All polyether impressions subjected to immersion disinfection exhibited a clinically acceptable expansion. Disinfected addition-cured silicone impressions produced very accurate stone casts. Those disinfected by spraying with full-strength Dimenol produced casts that were very similar to those left as controls, but those treated by immersion disinfection exhibited negligible and clinically acceptable expansion. The results of the study demonstrated that the various disinfection treatments had different effects on the impression materials. It is important that an appropriate disinfectant is used for each type of impression material. PMID:15691188

  2. Precision bolometer bridge

    NASA Technical Reports Server (NTRS)

    White, D. R.

    1968-01-01

    Prototype precision bolometer calibration bridge is manually balanced device for indicating dc bias and balance with either dc or ac power. An external galvanometer is used with the bridge for null indication, and the circuitry monitors voltage and current simultaneously without adapters in testing 100 and 200 ohm thin film bolometers.

  3. Precision metal molding

    NASA Technical Reports Server (NTRS)

    Townhill, A.

    1967-01-01

Method provides precise alignment for metal-forming dies while permitting minimal thermal expansion without die warpage or cavity space restriction. The interfacing dowel bars and die side facings are arranged so the dies are restrained in one orthogonal direction and permitted to thermally expand in the other.

  4. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    1985-01-29

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge. 2 figs.

  5. Precision liquid level sensor

    DOEpatents

    Field, Michael E.; Sullivan, William H.

    1985-01-01

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  6. Precision in Stereochemical Terminology

    ERIC Educational Resources Information Center

    Wade, Leroy G., Jr.

    2006-01-01

An analysis of relatively new terminology that has been given multiple definitions, often resulting in students learning principles that are actually false, is presented with an example of the new term stereogenic atom introduced by Mislow and Siegel. The Mislow terminology would be useful in some cases if it were used precisely and correctly, but it is…

  7. Precision physics at LHC

    SciTech Connect

    Hinchliffe, I.

    1997-05-01

    In this talk the author gives a brief survey of some physics topics that will be addressed by the Large Hadron Collider currently under construction at CERN. Instead of discussing the reach of this machine for new physics, the author gives examples of the types of precision measurements that might be made if new physics is discovered.

  8. Aiming for benchmark accuracy with the many-body expansion.

    PubMed

    Richard, Ryan M; Lao, Ka Un; Herbert, John M

    2014-09-16

Conspectus: The past 15 years have witnessed an explosion of activity in the field of fragment-based quantum chemistry, whereby ab initio electronic structure calculations are performed on very large systems by decomposing them into a large number of relatively small subsystem calculations and then reassembling the subsystem data in order to approximate supersystem properties. Most of these methods are based, at some level, on the so-called many-body (or "n-body") expansion, which ultimately requires calculations on monomers, dimers, ..., n-mers of fragments. To the extent that a low-order n-body expansion can reproduce supersystem properties, such methods replace an intractable supersystem calculation with a large number of easily distributable subsystem calculations. This holds great promise for performing, for example, "gold standard" CCSD(T) calculations on large molecules, clusters, and condensed-phase systems. The literature is awash in a litany of fragment-based methods, each with its own working equations and terminology, which presents a formidable language barrier to the uninitiated reader. We have sought to unify these methods under a common formalism, by means of a generalized many-body expansion that provides a universal energy formula encompassing not only traditional n-body cluster expansions but also methods designed for macromolecules, in which the supersystem is decomposed into overlapping fragments. This formalism allows various fragment-based methods to be systematically classified, primarily according to how the fragments are constructed and how higher-order n-body interactions are approximated. This classification furthermore suggests systematic ways to improve the accuracy. Whereas n-body approaches have been thoroughly tested at low levels of theory in small noncovalent clusters, we have begun to explore the efficacy of these methods for large systems, with the goal of reproducing benchmark-quality calculations, ideally meaning complete
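The truncated many-body expansion at the heart of these methods can be sketched in a few lines. The fragment energies below are toy values in arbitrary units; in practice each would come from a separate electronic-structure calculation:

```python
from itertools import combinations

def two_body_expansion(monomer_E, dimer_E):
    """Second-order (two-body) truncation of the many-body expansion:
    E ~= sum_i E_i + sum_{i<j} (E_ij - E_i - E_j),
    where E_ij is the energy of the dimer formed by fragments i and j.
    monomer_E: list of fragment energies; dimer_E: dict keyed by (i, j)."""
    total = sum(monomer_E)
    for i, j in combinations(range(len(monomer_E)), 2):
        # Each pairwise interaction is the dimer energy minus the
        # energies of its constituent monomers.
        total += dimer_E[(i, j)] - monomer_E[i] - monomer_E[j]
    return total
```

Higher orders add trimer corrections in the same pattern; the appeal is that every subsystem calculation is small and independently distributable.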

  9. Enhancing reproducibility of ultrasonic measurements by new users

    NASA Astrophysics Data System (ADS)

    Pramanik, Manojit; Gupta, Madhumita; Krishnan, Kajoli Banerjee

    2013-03-01

Perception of the operator influences ultrasound image acquisition and processing. Lower costs are attracting new users to medical ultrasound. Anticipating an increase in this trend, we conducted a study to quantify the variability in ultrasonic measurements made by novice users and identify methods to reduce it. We designed a protocol with four presets and trained four new users to scan and manually measure the head circumference of a fetal phantom with an ultrasound scanner. In the first phase, the users followed this protocol in seven distinct sessions. They then received feedback on the quality of the scans from an expert. In the second phase, two of the users repeated the entire protocol aided by visual cues provided to them during scanning. We performed off-line measurements on all the images using a fully automated algorithm capable of measuring the head circumference from fetal phantom images. The ground truth (198.1±1.6 mm) was based on sixteen scans and measurements made by an expert. Our analysis shows that: (1) the inter-observer variability of manual measurements was 5.5 mm, whereas the inter-observer variability of automated measurements was only 0.6 mm in the first phase; (2) consistency of image appearance improved and mean manual measurements were 4-5 mm closer to the ground truth in the second phase; and (3) automated measurements were more precise, accurate and less sensitive to different presets compared to manual measurements in both phases. Our results show that visual aids and automation can bring more reproducibility to ultrasonic measurements made by new users.
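One plausible way to compute the inter-observer variability figures quoted above is the standard deviation, across observers, of each observer's mean measurement; the abstract does not state the exact estimator, so this is an illustrative assumption with made-up data:

```python
from statistics import mean, stdev

def inter_observer_variability(measurements_by_observer):
    """Spread of per-observer mean head-circumference measurements (mm):
    the sample standard deviation, across observers, of each observer's
    mean over their repeated sessions."""
    return stdev(mean(m) for m in measurements_by_observer)
```

A small within-observer spread but a large value here would indicate that users are individually consistent yet disagree with one another, which is exactly what automation reduced in the study.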

  10. High-precision positioning of radar scatterers

    NASA Astrophysics Data System (ADS)

    Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.

    2016-05-01

    Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is in the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ˜ 66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in azimuth and range dimensions, and elongated in cross-range dimension with a precision in the order of meters (the ratio of the ellipsoid axis lengths is 1/3/213, respectively). The CR absolute 3D position, along with the associated error ellipsoid, is found to be accurate and agree with the ground truth position at a 99 % confidence level. For other non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers to real-world objects.
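The error-ellipsoid axes follow from an eigendecomposition of the position variance-covariance matrix. For the 2D azimuth-range case the eigenvalues have a closed form, which a minimal sketch (with illustrative variances, not the paper's data) can demonstrate:

```python
import math

def error_ellipse_axes(var_az, var_rg, cov):
    """1-sigma semi-axis lengths of the 2D error ellipse for the
    covariance matrix [[var_az, cov], [cov, var_rg]], using the
    closed-form eigenvalues of a symmetric 2x2 matrix."""
    mean = 0.5 * (var_az + var_rg)
    half = math.hypot(0.5 * (var_az - var_rg), cov)
    # Eigenvalues are mean +/- half; semi-axes are their square roots.
    return math.sqrt(mean + half), math.sqrt(mean - half)
```

In 3D the same idea applies with a 3x3 eigendecomposition; the strongly elongated cross-range axis reported above corresponds to the largest eigenvalue of the estimated covariance matrix.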

  11. Modelling soil erosion at European scale: towards harmonization and reproducibility

    NASA Astrophysics Data System (ADS)

    Bosco, C.; de Rigo, D.; Dewitte, O.; Poesen, J.; Panagos, P.

    2015-02-01

    Soil erosion by water is one of the most widespread forms of soil degradation. The loss of soil as a result of erosion can lead to decline in organic matter and nutrient contents, breakdown of soil structure and reduction of the water-holding capacity. Measuring soil loss across the whole landscape is impractical and thus research is needed to improve methods of estimating soil erosion with computational modelling, upon which integrated assessment and mitigation strategies may be based. Despite the efforts, the prediction value of existing models is still limited, especially at regional and continental scale, because a systematic knowledge of local climatological and soil parameters is often unavailable. A new approach for modelling soil erosion at regional scale is here proposed. It is based on the joint use of low-data-demanding models and innovative techniques for better estimating model inputs. The proposed modelling architecture has at its basis the semantic array programming paradigm and a strong effort towards computational reproducibility. An extended version of the Revised Universal Soil Loss Equation (RUSLE) has been implemented merging different empirical rainfall-erosivity equations within a climatic ensemble model and adding a new factor for a better consideration of soil stoniness within the model. Pan-European soil erosion rates by water have been estimated through the use of publicly available data sets and locally reliable empirical relationships. The accuracy of the results is corroborated by a visual plausibility check (63% of a random sample of grid cells are accurate, 83% at least moderately accurate, bootstrap p ≤ 0.05). A comparison with country-level statistics of pre-existing European soil erosion maps is also provided.
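The RUSLE structure underlying this work is multiplicative, which a minimal sketch makes explicit. The stoniness correction factor `St` is a hypothetical stand-in for the new factor mentioned in the abstract, and the input values below are illustrative only:

```python
def rusle_soil_loss(R, K, LS, C, P, St=1.0):
    """Mean annual soil loss A (t ha^-1 yr^-1) as the product of
    rainfall erosivity R, soil erodibility K, combined slope
    length/steepness factor LS, cover-management factor C, support
    practice factor P, and an optional stoniness correction St."""
    return R * K * LS * C * P * St
```

Because the model is a simple product of factors, ensemble variants (such as the climatic ensemble of rainfall-erosivity equations described above) can be built by varying one factor at a time while holding the others fixed.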

  12. Modelling soil erosion at European scale: towards harmonization and reproducibility

    NASA Astrophysics Data System (ADS)

    Bosco, C.; de Rigo, D.; Dewitte, O.; Poesen, J.; Panagos, P.

    2014-04-01

    Soil erosion by water is one of the most widespread forms of soil degradation. The loss of soil as a result of erosion can lead to decline in organic matter and nutrient contents, breakdown of soil structure and reduction of the water holding capacity. Measuring soil loss across the whole landscape is impractical and thus research is needed to improve methods of estimating soil erosion with computational modelling, upon which integrated assessment and mitigation strategies may be based. Despite the efforts, the prediction value of existing models is still limited, especially at regional and continental scale. A new approach for modelling soil erosion at large spatial scale is here proposed. It is based on the joint use of low data demanding models and innovative techniques for better estimating model inputs. The proposed modelling architecture has at its basis the semantic array programming paradigm and a strong effort towards computational reproducibility. An extended version of the Revised Universal Soil Loss Equation (RUSLE) has been implemented merging different empirical rainfall-erosivity equations within a climatic ensemble model and adding a new factor for a better consideration of soil stoniness within the model. Pan-European soil erosion rates by water have been estimated through the use of publicly available datasets and locally reliable empirical relationships. The accuracy of the results is corroborated by a visual plausibility check (63% of a random sample of grid cells are accurate, 83% at least moderately accurate, bootstrap p ≤ 0.05). A comparison with country level statistics of pre-existing European maps of soil erosion by water is also provided.

  13. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field-collected timing data to produce an averaged time line composed of straight-line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  14. Dosimetric accuracy of Kodak EDR2 film for IMRT verifications.

    PubMed

    Childress, Nathan L; Salehpour, Mohammad; Dong, Lei; Bloch, Charles; White, R Allen; Rosen, Isaac I

    2005-02-01

    Patient-specific intensity-modulated radiotherapy (IMRT) verifications require an accurate two-dimensional dosimeter that is not labor-intensive. We assessed the precision and reproducibility of film calibrations over time, measured the elemental composition of the film, measured the intermittency effect, and measured the dosimetric accuracy and reproducibility of calibrated Kodak EDR2 film for single-beam verifications in a solid water phantom and for full-plan verifications in a Rexolite phantom. Repeated measurements of the film sensitometric curve in a single experiment yielded overall uncertainties in dose of 2.1% local and 0.8% relative to 300 cGy. 547 film calibrations over an 18-month period, exposed to a range of doses from 0 to a maximum of 240 MU or 360 MU and using 6 MV or 18 MV energies, had optical density (OD) standard deviations that were 7%-15% of their average values. This indicates that daily film calibrations are essential when EDR2 film is used to obtain absolute dose results. An elemental analysis of EDR2 film revealed that it contains 60% as much silver and 20% as much bromine as Kodak XV2 film. EDR2 film also has an unusual 1.69:1 silver:halide molar ratio, compared with the XV2 film's 1.02:1 ratio, which may affect its chemical reactions. To test EDR2's intermittency effect, the OD generated by a single 300 MU exposure was compared to the ODs generated by exposing the film 1 MU, 2 MU, and 4 MU at a time to a total of 300 MU. An ion chamber recorded the relative dose of all intermittency measurements to account for machine output variations. Using small MU bursts to expose the film resulted in delivery times of 4 to 14 minutes and lowered the film's OD by approximately 2% for both 6 and 18 MV beams. This effect may result in EDR2 film underestimating absolute doses for patient verifications that require long delivery times. After using a calibration to convert EDR2 film's OD to dose values, film measurements agreed within 2% relative
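Converting film optical density to dose via a daily sensitometric calibration, as described above, amounts to interpolating the calibration points. A piecewise-linear sketch follows; the calibration arrays are hypothetical, and a clinical workflow might instead fit a smooth sensitometric function:

```python
def od_to_dose(od, cal_od, cal_dose):
    """Dose from net optical density by piecewise-linear interpolation
    of a sensitometric curve, given as monotonically increasing OD
    values cal_od with corresponding doses cal_dose (cGy)."""
    if not cal_od[0] <= od <= cal_od[-1]:
        raise ValueError("OD outside calibrated range")
    for od0, od1, d0, d1 in zip(cal_od, cal_od[1:], cal_dose, cal_dose[1:]):
        if od <= od1:
            # Linear interpolation within the bracketing segment.
            return d0 + (od - od0) / (od1 - od0) * (d1 - d0)
```

The 7%-15% day-to-day OD variation reported above is precisely why this calibration must be re-measured on each day the film is used for absolute dose.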

  15. Improving the accuracy of phase-shifting techniques

    NASA Astrophysics Data System (ADS)

    Cruz-Santos, William; López-García, Lourdes; Redondo-Galvan, Arturo

    2015-05-01

The traditional phase-shifting profilometry technique is based on the projection of digital interference patterns and computation of the absolute phase map. Recently, a method was proposed that applied phase interpolation to corner detection, at subpixel accuracy in the projector image, to improve camera-projector calibration. We propose a general strategy to improve the accuracy of the search for correspondences that can be used to obtain high-precision three-dimensional reconstruction. Experimental results show that our strategy can outperform the precision of the phase-shifting method.
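At the core of phase-shifting profilometry is the N-step phase-retrieval formula for equally spaced shifts; the sketch below shows the standard algorithm, not necessarily the authors' specific variant:

```python
import math

def wrapped_phase(intensities):
    """Wrapped phase at one pixel from N fringe patterns of the form
    I_n = A + B*cos(phi + 2*pi*n/N), n = 0..N-1, via the standard
    N-step formula:
    phi = atan2(-sum I_n sin(2*pi*n/N), sum I_n cos(2*pi*n/N))."""
    n_steps = len(intensities)
    s = sum(I * math.sin(2 * math.pi * n / n_steps)
            for n, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * n / n_steps)
            for n, I in enumerate(intensities))
    return math.atan2(-s, c)
```

The result is wrapped to (-pi, pi]; recovering the absolute phase map mentioned above then requires a separate unwrapping step.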

  16. The R software environment in reproducible geoscientific research

    NASA Astrophysics Data System (ADS)

    Pebesma, Edzer; Nüst, Daniel; Bivand, Roger

    2012-04-01

    Reproducibility is an important aspect of scientific research, because the credibility of science is at stake when research is not reproducible. Like science, the development of good, reliable scientific software is a social process. A mature and growing community relies on the R software environment for carrying out geoscientific research. Here we describe why people use R and how it helps in communicating and reproducing research.

  17. Principles and techniques for designing precision machines

    SciTech Connect

    Hale, L C

    1999-02-01

    This thesis is written to advance the reader's knowledge of precision-engineering principles and their application to designing machines that achieve both sufficient precision and minimum cost. It provides the concepts and tools necessary for the engineer to create new precision machine designs. Four case studies demonstrate the principles and showcase approaches and solutions to specific problems that generally have wider applications. These come from projects at the Lawrence Livermore National Laboratory in which the author participated: the Large Optics Diamond Turning Machine, Accuracy Enhancement of High- Productivity Machine Tools, the National Ignition Facility, and Extreme Ultraviolet Lithography. Although broad in scope, the topics go into sufficient depth to be useful to practicing precision engineers and often fulfill more academic ambitions. The thesis begins with a chapter that presents significant principles and fundamental knowledge from the Precision Engineering literature. Following this is a chapter that presents engineering design techniques that are general and not specific to precision machines. All subsequent chapters cover specific aspects of precision machine design. The first of these is Structural Design, guidelines and analysis techniques for achieving independently stiff machine structures. The next chapter addresses dynamic stiffness by presenting several techniques for Deterministic Damping, damping designs that can be analyzed and optimized with predictive results. Several chapters present a main thrust of the thesis, Exact-Constraint Design. A main contribution is a generalized modeling approach developed through the course of creating several unique designs. The final chapter is the primary case study of the thesis, the Conceptual Design of a Horizontal Machining Center.

  18. Reproducibility of cerebral tissue oxygen saturation measurements by near-infrared spectroscopy in newborn infants

    NASA Astrophysics Data System (ADS)

    Jenny, Carmen; Biallas, Martin; Trajkovic, Ivo; Fauchère, Jean-Claude; Bucher, Hans Ulrich; Wolf, Martin

    2011-09-01

    Early detection of cerebral hypoxemia is an important aim in neonatology. A relevant parameter to assess brain oxygenation may be the cerebral tissue oxygen saturation (StO2) measured by near-infrared spectroscopy (NIRS). So far the reproducibility of StO2 measurements was too low for clinical application, probably due to inhomogeneities. The aim of this study was to test a novel sensor geometry which reduces the influence of inhomogeneities. Thirty clinically stable newborn infants, with a gestational age of median 33.9 (range 26.9 to 41.9) weeks, birth weight of 2220 (820 to 4230) g, postnatal age of 5 (1 to 71) days were studied. At least four StO2 measurements of 1 min duration were carried out using NIRS on the lateral head. The sensor was repositioned between measurements. Reproducibility was calculated by a linear mixed effects model. The mean StO2 was 79.99 +/- 4.47% with a reproducibility of 2.76% and a between-infant variability of 4.20%. Thus, the error of measurement only accounts for 30.1% of the variability. The novel sensor geometry leads to considerably more precise measurements compared to previous studies with, e.g., ~5% reproducibility for the NIRO 300. The novel StO2 values hence have a higher clinical relevance.
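The claim that measurement error accounts for only about 30% of the total variability follows directly from the two quoted standard deviations, by simple variance-components arithmetic:

```python
def within_variance_fraction(sd_repro, sd_between):
    """Fraction of total variance attributable to measurement
    (within-infant) error: sd_w**2 / (sd_w**2 + sd_b**2)."""
    vw, vb = sd_repro ** 2, sd_between ** 2
    return vw / (vw + vb)
```

With the quoted 2.76% reproducibility and 4.20% between-infant variability, this gives approximately 0.30, consistent with the abstract's 30.1%.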

  19. A passion for precision

    SciTech Connect

    2010-05-19

For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science. Organiser(s): L. Alvarez-Gaume / PH-TH. Note: Tea & coffee will be served at 16:00.

  20. Towards precision medicine.

    PubMed

    Ashley, Euan A

    2016-08-16

    There is great potential for genome sequencing to enhance patient care through improved diagnostic sensitivity and more precise therapeutic targeting. To maximize this potential, genomics strategies that have been developed for genetic discovery - including DNA-sequencing technologies and analysis algorithms - need to be adapted to fit clinical needs. This will require the optimization of alignment algorithms, attention to quality-coverage metrics, tailored solutions for paralogous or low-complexity areas of the genome, and the adoption of consensus standards for variant calling and interpretation. Global sharing of this more accurate genotypic and phenotypic data will accelerate the determination of causality for novel genes or variants. Thus, a deeper understanding of disease will be realized that will allow its targeting with much greater therapeutic precision. PMID:27528417

  1. Precision Polarization of Neutrons

    NASA Astrophysics Data System (ADS)

    Martin, Elise; Barron-Palos, Libertad; Couture, Aaron; Crawford, Christopher; Chupp, Tim; Danagoulian, Areg; Estes, Mary; Hona, Binita; Jones, Gordon; Klein, Andi; Penttila, Seppo; Sharma, Monisha; Wilburn, Scott

    2009-05-01

Determining the polarization of a cold neutron beam to high precision is required for the next generation of neutron decay correlation experiments at the SNS, such as the proposed abBA and PANDA experiments. Precision polarimetry measurements were conducted at Los Alamos National Laboratory with the goal of determining the beam polarization to the level of 10^-3 or better. The cold neutrons from FP12 were polarized using optically polarized ^3He gas as a spin filter, which has a highly spin-dependent absorption cross section. A second ^3He spin filter was used to analyze the neutron polarization after passing through a resonant RF spin rotator. A discussion of the experiment and results will be given.

  2. A passion for precision

    ScienceCinema

    None

    2011-10-06

For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  3. Virtual Reference Environments: a simple way to make research reproducible

    PubMed Central

    Hurley, Daniel G.; Budden, David M.

    2015-01-01

'Reproducible research' has received increasing attention over the past few years as bioinformatics and computational biology methodologies become more complex. Although reproducible research is progressing in several valuable ways, we suggest that recent increases in internet bandwidth and disk space, along with the availability of open-source and free-software licences for tools, enable another simple step to make research reproducible. In this article, we urge the creation of minimal virtual reference environments implementing all the tools necessary to reproduce a result, as a standard part of publication. We address potential problems with this approach, and show an example environment from our own work. PMID:25433467

  4. Precision disablement aiming system

    DOEpatents

    Monda, Mark J.; Hobart, Clinton G.; Gladwell, Thomas Scott

    2016-02-16

    A disrupter to a target may be precisely aimed by positioning a radiation source to direct radiation towards the target, and a detector is positioned to detect radiation that passes through the target. An aiming device is positioned between the radiation source and the target, wherein a mechanical feature of the aiming device is superimposed on the target in a captured radiographic image. The location of the aiming device in the radiographic image is used to aim a disrupter towards the target.

  5. Precise linear sun sensor

    NASA Technical Reports Server (NTRS)

    Johnston, D. D.

    1972-01-01

    An evaluation of the precise linear sun sensor relating to future mission applications was performed. The test procedures, data, and results of the dual-axis, solid-state system are included. Brief descriptions of the sensing head and of the system's operational characteristics are presented. A unique feature of the system is that multiple sensor heads with various fields of view may be used with the same electronics.

  6. Precision laser aiming system

    SciTech Connect

    Ahrens, Brandon R.; Todd, Steven N.

    2009-04-28

    A precision laser aiming system comprises a disrupter tool, a reflector, and a laser fixture. The disrupter tool, the reflector and the laser fixture are configurable for iterative alignment and aiming toward an explosive device threat. The invention enables a disrupter to be quickly and accurately set up, aligned, and aimed in order to render safe or to disrupt a target from a standoff position.

  7. Accuracy in Judgments of Aggressiveness

    PubMed Central

    Kenny, David A.; West, Tessa V.; Cillessen, Antonius H. N.; Coie, John D.; Dodge, Kenneth A.; Hubbard, Julie A.; Schwartz, David

    2009-01-01

    Perceivers are both accurate and biased in their understanding of others. Past research has distinguished between three types of accuracy: generalized accuracy, a perceiver’s accuracy about how a target interacts with others in general; perceiver accuracy, a perceiver’s view of others corresponding with how the perceiver is treated by others in general; and dyadic accuracy, a perceiver’s accuracy about a target when interacting with that target. Researchers have proposed that there should be more dyadic than other forms of accuracy among well-acquainted individuals because of the pragmatic utility of forecasting the behavior of interaction partners. We examined behavioral aggression among well-acquainted peers. A total of 116 9-year-old boys rated how aggressive their classmates were toward other classmates. Subsequently, 11 groups of 6 boys each interacted in play groups, during which observations of aggression were made. Analyses indicated strong generalized accuracy yet little dyadic and perceiver accuracy. PMID:17575243

  8. Anatomical Reproducibility of a Head Model Molded by a Three-dimensional Printer

    PubMed Central

    KONDO, Kosuke; NEMOTO, Masaaki; MASUDA, Hiroyuki; OKONOGI, Shinichi; NOMOTO, Jun; HARADA, Naoyuki; SUGO, Nobuo; MIYAZAKI, Chikao

    We prepared rapid prototyping models of heads with unruptured cerebral aneurysm based on image data of computed tomography angiography (CTA) using a three-dimensional (3D) printer. The objective of this study was to evaluate the anatomical reproducibility and accuracy of these models by comparison with the CTA images on a monitor. The subjects were 22 patients with unruptured cerebral aneurysm who underwent preoperative CTA. Reproducibility of the microsurgical anatomy of skull bone and arteries, the length and thickness of the main arteries, and the size of cerebral aneurysm were compared between the CTA image and rapid prototyping model. The microsurgical anatomy and arteries were favorably reproduced, apart from a few minute regions, in the rapid prototyping models. No significant difference was noted in the measured lengths of the main arteries between the CTA image and rapid prototyping model, but errors were noted in their thickness (p < 0.001). A significant difference was also noted in the longitudinal diameter of the cerebral aneurysm (p < 0.01). Regarding the CTA image as the gold standard, reproducibility of the microsurgical anatomy of skull bone and main arteries was favorable in the rapid prototyping models prepared using a 3D printer. It was concluded that these models are useful tools for neurosurgical simulation. The thickness of the main arteries and size of cerebral aneurysm should be comprehensively judged including other neuroimaging in consideration of errors. PMID:26119896

  9. Indirect monitoring shot-to-shot shock waves strength reproducibility during pump-probe experiments

    NASA Astrophysics Data System (ADS)

    Pikuz, T. A.; Faenov, A. Ya.; Ozaki, N.; Hartley, N. J.; Albertazzi, B.; Matsuoka, T.; Takahashi, K.; Habara, H.; Tange, Y.; Matsuyama, S.; Yamauchi, K.; Ochante, R.; Sueda, K.; Sakata, O.; Sekine, T.; Sato, T.; Umeda, Y.; Inubushi, Y.; Yabuuchi, T.; Togashi, T.; Katayama, T.; Yabashi, M.; Harmand, M.; Morard, G.; Koenig, M.; Zhakhovsky, V.; Inogamov, N.; Safronova, A. S.; Stafford, A.; Skobelev, I. Yu.; Pikuz, S. A.; Okuchi, T.; Seto, Y.; Tanaka, K. A.; Ishikawa, T.; Kodama, R.

    2016-07-01

    We present an indirect method of estimating the strength of a shock wave, allowing on-line monitoring of its reproducibility in each laser shot. This method is based on a shot-to-shot measurement of the X-ray emission from the ablated plasma by a high resolution, spatially resolved focusing spectrometer. An optical pump laser with energy of 1.0 J and pulse duration of ~660 ps was used to irradiate solid targets or foils of various thicknesses containing Oxygen, Aluminum, Iron, and Tantalum. The high sensitivity and resolving power of the X-ray spectrometer allowed spectra to be obtained on each laser shot and fluctuations of the spectral intensity emitted by different plasmas to be controlled with an accuracy of ~2%, implying an accuracy in the derived electron plasma temperature of 5%-10% in pump-probe high energy density science experiments. For nano- and sub-nanosecond laser pulses at relatively low laser intensities and a ratio Z/A ~ 0.5, the electron temperature follows Te ∝ I_las^(2/3). Thus, measurements of the electron plasma temperature allow indirect estimation of the laser flux on the target and control of its shot-to-shot fluctuation. Knowing the laser flux intensity and its fluctuation gives us the possibility of monitoring the shot-to-shot reproducibility of the generated shock wave strength with high accuracy.
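The quoted scaling can be read as a simple error-propagation rule: inverting Te ∝ I_las^(2/3) gives I_las ∝ Te^(3/2), so a relative temperature fluctuation maps to 1.5 times the relative flux fluctuation. A minimal sketch of this first-order propagation (the function name and example figures are illustrative, not the authors' analysis code):

```python
def flux_fluct_from_temperature(rel_temp_fluct):
    """Relative laser-flux fluctuation implied by a relative electron-
    temperature fluctuation under Te ~ I_las**(2/3).  Inverting gives
    I_las ~ Te**(3/2), hence dI/I = 1.5 * dTe/Te to first order."""
    return 1.5 * rel_temp_fluct

# The reported 5%-10% temperature accuracy would correspond to roughly
# 7.5%-15% uncertainty in the inferred on-target flux.
low, high = flux_fluct_from_temperature(0.05), flux_fluct_from_temperature(0.10)
```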

  10. Precise Point Positioning in the Airborne Mode

    NASA Astrophysics Data System (ADS)

    El-Mowafy, Ahmed

    2011-01-01

    The Global Positioning System (GPS) is widely used for positioning in the airborne mode, such as in navigation as a supplementary system and for geo-referencing of cameras in mapping and surveillance by aircraft and Unmanned Aerial Vehicles (UAV). The Precise Point Positioning (PPP) approach is an attractive positioning approach based on processing of un-differenced observations from a single GPS receiver. It employs precise satellite orbits and satellite clock corrections. These data can be obtained via the internet from several sources, e.g. the International GNSS Service (IGS). The data can also be broadcast from satellites, such as via the LEX signal of the new Japanese satellite system QZSS. PPP can achieve positioning precision and accuracy at the sub-decimetre level. In this paper, the functional and stochastic mathematical modelling used in PPP is discussed. Results of applying the PPP method in an airborne test using a small fixed-wing aircraft are presented. To evaluate the performance of the PPP approach, a reference trajectory was established by differential positioning of the same GPS observations with data from a ground reference station. The coordinate results from the two approaches, PPP and differential positioning, were compared and statistically evaluated. For the test at hand, positioning accuracy at the cm-to-decimetre level was achieved for latitude and longitude coordinates, with roughly double that value for height estimation.

  11. Highly Parallel, High-Precision Numerical Integration

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2005-04-22

    This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
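The abstract does not reproduce the quadrature scheme itself; Bailey and Borwein's high-precision integration work is closely associated with double-exponential (tanh-sinh) quadrature, whose abscissas crowd toward the endpoints so that endpoint singularities are tamed. A machine-precision sketch of that rule over (-1, 1), offered only as an illustration of the idea (the paper's implementation is parallel and arbitrary-precision):

```python
import math

def tanh_sinh_quad(f, h=2.0**-8):
    """Approximate the integral of f over (-1, 1) by tanh-sinh quadrature.

    Abscissas x_k = tanh((pi/2)*sinh(k*h)) cluster toward the endpoints and
    the weights decay double-exponentially, so integrands with endpoint
    singularities are handled gracefully.
    """
    total = 0.0
    k = 0
    while True:
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        if 1.0 - abs(x) < 1e-17:      # abscissa indistinguishable from +/-1
            break
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2
        total += w * ((f(x) + f(-x)) if k > 0 else f(x))
        k += 1
    return h * total

# Example: the integrand 1/sqrt(1 - x^2) is singular at both endpoints,
# yet the rule recovers the exact value pi to high accuracy.
approx_pi = tanh_sinh_quad(lambda x: 1.0 / math.sqrt(1.0 - x * x))
```

In an arbitrary-precision setting one shrinks the step `h` and raises the working precision together; at fixed double precision the sketch above is limited to roughly machine accuracy.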

  12. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences their clinical viability in implant restorations. The aim of this study was to compare, by three-dimensional analysis, the accuracy of gypsum models acquired from conventional implant impressions to digitally milled models created by direct digitalization. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner, and 30 STL datasets from each group were imported into inspection software. The datasets were aligned to the reference dataset by a repeated best-fit algorithm, and 10 specified contact locations of interest were measured as mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, and horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model, with mean volumetric deviations gauging accuracy and standard deviations gauging precision. Milled models from digital impressions had accuracy comparable to gypsum models from conventional impressions. However, differences in fossae and in vertical displacement of the implant position between the gypsum and digitally milled models compared to the reference model were statistically significant (p < 0.001 and p = 0.020, respectively). PMID:24720423

  13. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 ±6 percent for evergreen woodland to 81 ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
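The phase-II regression refinement described above has a standard textbook form: a large photointerpreted sample fixes the mean of an auxiliary variable, and a smaller ground-truth sample supplies the regression of truth on photointerpretation. A toy sketch under that reading (names and data hypothetical, not the AVRI computation):

```python
def double_sampling_estimate(x_phase1, x_phase2, y_phase2):
    """Regression (double-sampling) estimator of the mean of y.

    x_phase1 : auxiliary values (e.g. photointerpreted error indicators)
               from the large phase-I sample.
    x_phase2, y_phase2 : paired auxiliary and ground-truth values from
               the small phase-II subsample.
    Returns y2_bar + b * (x1_bar - x2_bar), where b is the slope of the
    regression of y on x fitted to the phase-II pairs.
    """
    mean = lambda v: sum(v) / len(v)
    x1_bar, x2_bar, y2_bar = mean(x_phase1), mean(x_phase2), mean(y_phase2)
    sxy = sum((x - x2_bar) * (y - y2_bar) for x, y in zip(x_phase2, y_phase2))
    sxx = sum((x - x2_bar) ** 2 for x in x_phase2)
    b = sxy / sxx
    return y2_bar + b * (x1_bar - x2_bar)
```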

  14. Accuracy of tablet splitting.

    PubMed

    McDevitt, J T; Gurst, A H; Chen, Y

    1998-01-01

    We attempted to determine the accuracy of manually splitting hydrochlorothiazide tablets. Ninety-four healthy volunteers each split ten 25-mg hydrochlorothiazide tablets, which were then weighed using an analytical balance. Demographics, grip and pinch strength, digit circumference, and tablet-splitting experience were documented. Subjects were also surveyed regarding their willingness to pay a premium for commercially available, lower-dose tablets. Of 1752 manually split tablet portions, 41.3% deviated from ideal weight by more than 10% and 12.4% deviated by more than 20%. Gender, age, education, and tablet-splitting experience were not predictive of variability. Most subjects (96.8%) stated a preference for commercially produced, lower-dose tablets, and 77.2% were willing to pay more for them. For drugs with steep dose-response curves or narrow therapeutic windows, the differences we recorded could be clinically relevant. PMID:9469693
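The headline percentages are straightforward to recompute from raw half-tablet weights: each portion's deviation is compared against the ideal half-weight. A sketch with hypothetical numbers (the study's raw measurements are not reproduced here):

```python
def deviation_fractions(half_weights, ideal_half_weight):
    """Fractions of split-tablet portions deviating from the ideal
    half-weight by more than 10% and by more than 20%.
    (Hypothetical helper; data below are made up, not the study's.)"""
    devs = [abs(w - ideal_half_weight) / ideal_half_weight for w in half_weights]
    over10 = sum(d > 0.10 for d in devs) / len(devs)
    over20 = sum(d > 0.20 for d in devs) / len(devs)
    return over10, over20

# Four example portions around a 100-unit ideal half-weight:
frac10, frac20 = deviation_fractions([100.0, 115.0, 125.0, 95.0], 100.0)
```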

  15. [Histologic classification of lung cancers and reproducibility of diagnoses based on 10 years' autopsy material].

    PubMed

    Károlyi, P

    1989-06-11

    A review of autopsy records from a ten-year period in the Department of Pathology of Szolnok County Hospital identified 1607 lung cancer cases, in 1213 of which histological reexamination could be performed. The reproducibility of the main histological groups was 73.4%, highest of all in small cell lung cancer. The relatively low reproducibility rate is attributable above all to the considerably variable histological appearance and the frequency of transitional forms between terminally differentiated tumor types. The lower diagnostic accuracy of frozen sections, the basic method of first diagnosis, must also be taken into account. The light microscopic heterogeneity and transitional histological forms are analysed in this article. PMID:2671857

  16. Galvanometer deflection: a precision high-speed system.

    PubMed

    Jablonowski, D P; Raamot, J

    1976-06-01

    An X-Y galvanometer deflection system capable of high precision in a random access mode of operation is described. Beam positional information in digitized form is obtained by employing a Ronchi grating with a sophisticated optical detection scheme. This information is used in a control interface to locate the beam to the required precision. The system is characterized by high accuracy at maximum speed and is designed for operation in a variable environment, with particular attention placed on thermal insensitivity. PMID:20165203

  17. Precision Pointing Control System (PPCS) star tracker test

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Tests performed on the TRW precision star tracker are described. The unit tested was a two-axis gimballed star tracker designed to provide star LOS data to an accuracy of 1 to 2 arc sec. The tracker features a unique bearing system and utilizes thermal and mechanical symmetry techniques to achieve high precision which can be demonstrated in a one-g environment. The test program included a laboratory evaluation of tracker functional operation, sensitivity, repeatability, and thermal stability.

  18. 46 CFR 56.30-3 - Piping joints (reproduces 110).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 2 2014-10-01 2014-10-01 false Piping joints (reproduces 110). 56.30-3 Section 56.30-3 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PIPING SYSTEMS AND APPURTENANCES Selection and Limitations of Piping Joints § 56.30-3 Piping joints (reproduces 110). The type of piping joint used shall be...

  19. 10 CFR 1016.35 - Authority to reproduce Restricted Data.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Authority to reproduce Restricted Data. 1016.35 Section 1016.35 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Control of Information § 1016.35 Authority to reproduce Restricted Data. Secret Restricted Data will not be...

  20. An Open Science and Reproducible Research Primer for Landscape Ecologists

    EPA Science Inventory

    In recent years many funding agencies, some publishers, and even the United States government have enacted policies that encourage open science and strive for reproducibility; however, the knowledge and skills to implement open science and enable reproducible research are not yet...

  1. Reproducibility of wrist home blood pressure measurement with position sensor and automatic data storage

    PubMed Central

    Uen, Sakir; Fimmers, Rolf; Brieger, Miriam; Nickenig, Georg; Mengden, Thomas

    2009-01-01

    Background Wrist blood pressure (BP) devices have physiological limits with regard to accuracy and were therefore not preferred for home BP monitoring. However, some wrist devices have been successfully validated using established validation protocols. This study therefore assessed the reproducibility of wrist home BP measurement with position sensor and automatic data storage. Methods To compare the reproducibility of three different BP measurement methods: 1) office BP, 2) home BP (Omron wrist device HEM-637 IT with position sensor), 3) 24-hour ambulatory BP (24-h ABPM) (ABPM-04, Meditech, Hungary), conventional sphygmomanometric office BP was measured on study days 1 and 7, 24-h ABPM on study days 7 and 14, and home BP between study days 1 and 7 and between study days 8 and 14 in 69 hypertensive and 28 normotensive subjects. The correlation coefficient of each BP measurement method with echocardiographic left ventricular mass index was analyzed. The schedule of home readings followed recently published European Society of Hypertension (ESH) guidelines. Results The reproducibility of home BP measurement, analyzed by the standard deviation as well as the squared differences of mean individual differences between the respective BP measurements, was significantly higher than the reproducibility of office BP (p < 0.001 for systolic and diastolic BP) and of 24-h ABPM (p < 0.001 systolic BP, p = 0.127 diastolic BP). The reproducibility of systolic and diastolic office BP versus 24-h ABPM was not significantly different (p = 0.80 systolic BP, p = 0.1 diastolic BP). The correlation coefficient of 24-h ABPM (r = 0.52) with left ventricular mass index was significantly higher than that of office BP (r = 0.31); the difference from home BP (r = 0.46) was not significant. Conclusion The short-term reproducibility of home BP measurement with the Omron HEM-637 IT wrist device was superior to the reproducibility of office BP and 24-h ABPM.

  2. An open investigation of the reproducibility of cancer biology research.

    PubMed

    Errington, Timothy M; Iorns, Elizabeth; Gunn, William; Tan, Fraser Elisabeth; Lomax, Joelle; Nosek, Brian A

    2014-01-01

    It is widely believed that research that builds upon previously published findings has reproduced the original work. However, it is rare for researchers to perform or publish direct replications of existing results. The Reproducibility Project: Cancer Biology is an open investigation of reproducibility in preclinical cancer biology research. We have identified 50 high impact cancer biology articles published in the period 2010-2012, and plan to replicate a subset of experimental results from each article. A Registered Report detailing the proposed experimental designs and protocols for each subset of experiments will be peer reviewed and published prior to data collection. The results of these experiments will then be published in a Replication Study. The resulting open methodology and dataset will provide evidence about the reproducibility of high-impact results, and an opportunity to identify predictors of reproducibility. PMID:25490932

  3. An open investigation of the reproducibility of cancer biology research

    PubMed Central

    Errington, Timothy M; Iorns, Elizabeth; Gunn, William; Tan, Fraser Elisabeth; Lomax, Joelle; Nosek, Brian A

    2014-01-01

    It is widely believed that research that builds upon previously published findings has reproduced the original work. However, it is rare for researchers to perform or publish direct replications of existing results. The Reproducibility Project: Cancer Biology is an open investigation of reproducibility in preclinical cancer biology research. We have identified 50 high impact cancer biology articles published in the period 2010-2012, and plan to replicate a subset of experimental results from each article. A Registered Report detailing the proposed experimental designs and protocols for each subset of experiments will be peer reviewed and published prior to data collection. The results of these experiments will then be published in a Replication Study. The resulting open methodology and dataset will provide evidence about the reproducibility of high-impact results, and an opportunity to identify predictors of reproducibility. DOI: http://dx.doi.org/10.7554/eLife.04333.001 PMID:25490932

  4. Instrument Attitude Precision Control

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2004-01-01

    A novel approach is presented in this paper to analyze attitude precision and control for an instrument gimbaled to a spacecraft subject to an internal disturbance caused by a moving component inside the instrument. Nonlinear differential equations of motion for some sample cases are derived and solved analytically to gain insight into the influence of the disturbance on the attitude pointing error. A simple control law is developed to eliminate the instrument pointing error caused by the internal disturbance. Several cases are presented to demonstrate and verify the concept presented in this paper.

  5. Precision Robotic Assembly Machine

    ScienceCinema

    None

    2010-09-01

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  6. Precision mass measurements

    NASA Astrophysics Data System (ADS)

    Gläser, M.; Borys, M.

    2009-12-01

    Mass as a physical quantity and its measurement are described. After some historical remarks, a short summary of the concept of mass in classical and modern physics is given. Principles and methods of mass measurements, for example as energy measurement or as measurement of weight forces and forces caused by acceleration, are discussed. Precision mass measurement by comparing mass standards using balances is described in detail. Measurement of atomic masses related to 12C is briefly reviewed as well as experiments and recent discussions for a future new definition of the kilogram, the SI unit of mass.

  7. Precision Robotic Assembly Machine

    SciTech Connect

    2009-08-14

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  8. Precision electroweak measurements

    SciTech Connect

    Demarteau, M.

    1996-11-01

    Recent electroweak precision measurements from e+e− and pp̄ colliders are presented. Some emphasis is placed on the recent developments in the heavy flavor sector. The measurements are compared to predictions from the Standard Model of electroweak interactions. All results are found to be consistent with the Standard Model. The indirect constraint on the top quark mass from all measurements is in excellent agreement with the direct m_t measurements. Using the world's electroweak data in conjunction with the current measurement of the top quark mass, the constraints on the Higgs mass are discussed.

  9. Density Variations Observable by Precision Satellite Orbits

    NASA Astrophysics Data System (ADS)

    McLaughlin, C. A.; Lechtenberg, T.; Hiatt, A.

    2008-12-01

    This research uses precision satellite orbits from the Challenging Minisatellite Payload (CHAMP) satellite to produce a new data source for studying density changes that occur on time scales of less than a day. Precision orbit derived density is compared to accelerometer derived density. In addition, the precision orbit derived densities are used to examine density variations that have been observed with accelerometer data to see if they are observable. In particular, the research examines the observability of geomagnetic storm time changes and polar cusp features that have been observed in accelerometer data. Currently, highly accurate density data are available from three satellites with accelerometers, while much lower accuracy data are available from hundreds of satellites for which two-line element sets are published by the Air Force. This paper explores a new data source that is more accurate and has better temporal resolution than the two-line element sets, and provides better spatial coverage than satellites with accelerometers. This data source will be valuable for studying atmospheric phenomena over short periods, for long-term studies of the atmosphere, and for validating and improving complex coupled models that include neutral density. The precision orbit derived densities are very similar to the accelerometer derived densities, but the accelerometer can observe features with shorter temporal variations. This research quantifies the time scales observable by precision orbit derived density. The technique for estimating density is optimal orbit determination, with estimates that are optimal in the least squares or minimum variance sense. Precision orbit data from CHAMP are used as measurements in a sequential measurement processing and filtering scheme. The atmospheric density is estimated as a correction to an atmospheric model.

  10. Reproducibility of Fluorescent Expression from Engineered Biological Constructs in E. coli

    PubMed Central

    Beal, Jacob; Haddock-Angelli, Traci; Gershater, Markus; de Mora, Kim; Lizarazo, Meagan; Hollenhorst, Jim; Rettberg, Randy

    2016-01-01

    We present results of the first large-scale interlaboratory study carried out in synthetic biology, as part of the 2014 and 2015 International Genetically Engineered Machine (iGEM) competitions. Participants at 88 institutions around the world measured fluorescence from three engineered constitutive constructs in E. coli. Few participants were able to measure absolute fluorescence, so data was analyzed in terms of ratios. Precision was strongly related to fluorescent strength, ranging from 1.54-fold standard deviation for the ratio between strong promoters to 5.75-fold for the ratio between the strongest and weakest promoter, and while host strain did not affect expression ratios, choice of instrument did. This result shows that high quantitative precision and reproducibility of results is possible, while at the same time indicating areas needing improved laboratory practices. PMID:26937966

  11. Reproducibility of Fluorescent Expression from Engineered Biological Constructs in E. coli.

    PubMed

    Beal, Jacob; Haddock-Angelli, Traci; Gershater, Markus; de Mora, Kim; Lizarazo, Meagan; Hollenhorst, Jim; Rettberg, Randy

    2016-01-01

    We present results of the first large-scale interlaboratory study carried out in synthetic biology, as part of the 2014 and 2015 International Genetically Engineered Machine (iGEM) competitions. Participants at 88 institutions around the world measured fluorescence from three engineered constitutive constructs in E. coli. Few participants were able to measure absolute fluorescence, so data was analyzed in terms of ratios. Precision was strongly related to fluorescent strength, ranging from 1.54-fold standard deviation for the ratio between strong promoters to 5.75-fold for the ratio between the strongest and weakest promoter, and while host strain did not affect expression ratios, choice of instrument did. This result shows that high quantitative precision and reproducibility of results is possible, while at the same time indicating areas needing improved laboratory practices. PMID:26937966
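A "fold" standard deviation as used above is a geometric standard deviation: the exponential of the standard deviation of the log-ratios, so a 1.54-fold SD means roughly 68% of measurements fall within a factor of 1.54 of the geometric mean. A small sketch (illustrative helper, not the iGEM analysis code):

```python
import math
import statistics

def fold_std(ratios):
    """Geometric (fold) standard deviation of positive ratio data:
    exp of the sample standard deviation of the natural-log ratios."""
    logs = [math.log(r) for r in ratios]
    return math.exp(statistics.stdev(logs))

# Ratios spread symmetrically around 1 on a log scale give an e-fold SD:
example = fold_std([1.0, math.e, 1.0 / math.e])
```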

  12. Reproducibility and calibration of MMC-based high-resolution gamma detectors

    DOE PAGESBeta

    Bates, C. R.; Pies, C.; Kempf, S.; Hengstler, D.; Fleischmann, A.; Gastaldo, L.; Enss, C.; Friedrich, S.

    2016-07-15

    Here, we describe a prototype γ-ray detector based on a metallic magnetic calorimeter with an energy resolution of 46 eV at 60 keV and a reproducible response function that follows a simple second-order polynomial. The simple detector calibration allows adding high-resolution spectra from different pixels and different cool-downs without loss in energy resolution to determine γ-ray centroids with high accuracy. As an example of an application in nuclear safeguards enabled by such a γ-ray detector, we discuss the non-destructive assay of 242Pu in a mixed-isotope Pu sample.

  13. Reproducibility and calibration of MMC-based high-resolution gamma detectors

    NASA Astrophysics Data System (ADS)

    Bates, C. R.; Pies, C.; Kempf, S.; Hengstler, D.; Fleischmann, A.; Gastaldo, L.; Enss, C.; Friedrich, S.

    2016-07-01

    We describe a prototype γ-ray detector based on a metallic magnetic calorimeter with an energy resolution of 46 eV at 60 keV and a reproducible response function that follows a simple second-order polynomial. The simple detector calibration allows adding high-resolution spectra from different pixels and different cool-downs without loss in energy resolution to determine γ-ray centroids with high accuracy. As an example of an application in nuclear safeguards enabled by such a γ-ray detector, we discuss the non-destructive assay of 242Pu in a mixed-isotope Pu sample.

  14. New High Precision Linelist of H_3^+

    NASA Astrophysics Data System (ADS)

    Hodges, James N.; Perry, Adam J.; Markus, Charles; Jenkins, Paul A., II; Kocheril, G. Stephen; McCall, Benjamin J.

    2014-06-01

    As the simplest polyatomic molecule, H_3^+ serves as an ideal benchmark for theoretical predictions of rovibrational energy levels. By strictly ab initio methods, the current accuracy of theoretical predictions is limited to an impressive one hundredth of a wavenumber, which has been accomplished by consideration of relativistic, adiabatic, and non-adiabatic corrections to the Born-Oppenheimer PES. More accurate predictions rely on a treatment of quantum electrodynamic effects, which have improved the accuracies of vibrational transitions in molecular hydrogen to a few MHz. High precision spectroscopy is of the utmost importance for extending the frontiers of ab initio calculations, as improved precision and accuracy enable more rigorous testing of calculations. Additionally, measurements of rovibrational transitions of H_3^+ can be used to predict its forbidden rotational spectrum. Though the existing data can be used to determine rotational transition frequencies, the uncertainties are prohibitively large. Acquisition of rovibrational spectra with smaller experimental uncertainty would enable a spectroscopic search for the rotational transitions. The technique Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy, or NICE-OHVMS, has previously been used to precisely and accurately measure transitions of H_3^+, CH_5^+, and HCO^+ to sub-MHz uncertainty. A second module for our optical parametric oscillator has extended our instrument's frequency coverage from 3.2-3.9 μm to 2.5-3.9 μm. With extended coverage, we have improved our previous linelist by measuring additional transitions. O. L. Polyansky, et al. Phil. Trans. R. Soc. A (2012), 370, 5014--5027. J. Komasa, et al. J. Chem. Theor. Comp. (2011), 7, 3105--3115. C. M. Lindsay, B. J. McCall, J. Mol. Spectrosc. (2001), 210, 66--83. J. N. Hodges, et al. J. Chem. Phys. (2013), 139, 164201.

  15. High precision kinematic surveying with laser scanners

    NASA Astrophysics Data System (ADS)

    Gräfe, Gunnar

    2007-12-01

    The kinematic survey of roads and railways is becoming a much more common data acquisition method. The development of the Mobile Road Mapping System (MoSES) has reached a level that allows the use of kinematic survey technology for high precision applications. The system is equipped with cameras and laser scanners. For high accuracy requirements, the scanners become the main sensor group because of their geometric precision and reliability. To guarantee reliable survey results, specific calibration procedures have to be applied, which can be divided into the scanner sensor calibration as step 1, and the geometric transformation parameter estimation with respect to the vehicle coordinate system as step 2. Both calibration steps include new methods for sensor behavior modeling and multisensor system integration. To verify the laser scanner quality of the MoSES system, the results are regularly checked along different test routes. It can be shown that a standard deviation of 0.004 m in height for the scanner points is obtained if the specific calibrations and data processing methods are applied. This level of accuracy opens new possibilities to serve engineering survey applications using kinematic measurement techniques. The key feature of scanner technology is the full digital coverage of the road area. Three application examples illustrate the capabilities. Digital road surface models generated from MoSES data are used especially for road surface reconstruction tasks along highways. Compared to static surveys, the method offers comparable accuracy at higher speed, lower costs, much higher grid resolution and with greater safety. The system's capability of capturing 360° profiles leads to other complex applications like kinematic tunnel surveys or the precise analysis of bridge clearances.

  16. Precision estimates for tomographic nondestructive assay

    SciTech Connect

    Prettyman, T.H.

    1995-12-31

    One technique being applied to improve the accuracy of assays of waste in large containers is computerized tomography (CT). Research on the application of CT to improve both neutron and gamma-ray assays of waste is being carried out at LANL. For example, tomographic gamma scanning (TGS) is a single-photon emission CT technique that corrects for the attenuation of gamma rays emitted from the sample using attenuation images from transmission CT. By accounting for the distribution of emitting material and correcting for the attenuation of the emitted gamma rays, TGS is able to achieve highly accurate assays of radionuclides in medium-density wastes. It is important to develop methods to estimate the precision of such assays, and this paper explores this problem by examining the precision estimators for TGS.

  17. Repeatability and Reproducibility of Goldmann Applanation, Dynamic Contour and Ocular Response Analyzer Tonometry

    PubMed Central

    Wang, Allen S.; Alencar, Luciana M.; Weinreb, Robert N.; Tafreshi, Ali; Deokule, Sunil; Vizzeri, Gianmarco; Medeiros, Felipe A.

    2011-01-01

    PURPOSE To evaluate the repeatability and inter-operator reproducibility of the Pascal dynamic contour tonometry (DCT), Ocular Response Analyzer (ORA) and Goldmann applanation tonometer (GAT) in a single population of normal subjects. METHODS The study included fifty-two eyes from 26 normal subjects. One operator measured the intraocular pressure (IOP) with each tonometer three times, while two additional operators each measured the IOP with each tonometer once. Repeatability and reproducibility were assessed by the coefficient of variation (CV) and intraclass correlation coefficient (ICC). Agreement among tonometers was also assessed using Bland-Altman plots. RESULTS The mean age of included subjects was 31.5 ± 8.8 years and 15 (58%) were female. In general, both intra-operator repeatability and inter-operator reproducibility were significantly higher for DCT compared to the other tonometers. Intra-operator: DCT (CV = 3.7, ICC = 0.89), GAT (CV = 9.7, ICC = 0.79), IOPg (CV = 7.0, ICC = 0.79), IOPcc (CV = 9.8, ICC = 0.57). Inter-operator: DCT (CV = 6.1, ICC = 0.73), GAT (CV = 9.0, ICC = 0.82), IOPg (CV = 10.8, ICC = 0.63), IOPcc (CV = 11.7, ICC = 0.49). CONCLUSION Overall, DCT was significantly more repeatable and reproducible than GAT, IOPg and IOPcc. The better reproducibility of the DCT may result in more precise measurements for monitoring intraocular pressure changes over time compared to GAT and ORA. PMID:21701395
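Both statistics reported here can be computed from a subjects × replicates table. The sketch below uses a generic one-way random-effects ICC and a mean within-subject CV; the formula variants are our choice for illustration, since the abstract does not specify its exact estimators:

```python
import statistics as st

def one_way_icc(data):
    """ICC(1,1) from repeated measurements, data[subject][replicate].
    Generic one-way random-effects formula: (MSB - MSW) / (MSB + (k-1)*MSW)."""
    n, k = len(data), len(data[0])
    grand = st.mean([x for row in data for x in row])
    means = [st.mean(row) for row in data]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)    # between-subject
    msw = sum((x - m) ** 2 for row, m in zip(data, means)
              for x in row) / (n * (k - 1))                     # within-subject
    return (msb - msw) / (msb + (k - 1) * msw)

def within_subject_cv(data):
    """Mean within-subject coefficient of variation, in percent."""
    return st.mean(100.0 * st.stdev(row) / st.mean(row) for row in data)
```

With perfect replicate agreement the within-subject mean square vanishes and the ICC reaches 1.0; larger replicate scatter drives the CV up and the ICC down.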

  18. Precision measurements in supersymmetry

    SciTech Connect

    Feng, J.L.

    1995-05-01

    Supersymmetry is a promising framework in which to explore extensions of the standard model. If candidates for supersymmetric particles are found, precision measurements of their properties will then be of paramount importance. The prospects for such measurements and their implications are the subject of this thesis. If charginos are produced at the LEP II collider, they are likely to be one of the few available supersymmetric signals for many years. The author considers the possibility of determining fundamental supersymmetry parameters in such a scenario. The study is complicated by the dependence of observables on a large number of these parameters. He proposes a straightforward procedure for disentangling these dependences and demonstrates its effectiveness by presenting a number of case studies at representative points in parameter space. In addition to determining the properties of supersymmetric particles, precision measurements may also be used to establish that newly-discovered particles are, in fact, supersymmetric. Supersymmetry predicts quantitative relations among the couplings and masses of superparticles. The author discusses tests of such relations at a future e{sup +}e{sup {minus}} linear collider, using measurements that exploit the availability of polarizable beams. Stringent tests of supersymmetry from chargino production are demonstrated in two representative cases, and fermion and neutralino processes are also discussed.

  19. Precision flyer initiator

    DOEpatents

    Frank, Alan M.; Lee, Ronald S.

    1998-01-01

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel from predetonating the HE pellet before the flyer. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices.

  20. Precision muon physics

    NASA Astrophysics Data System (ADS)

    Gorringe, T. P.; Hertzog, D. W.

    2015-09-01

    The muon is playing a unique role in sub-atomic physics. Studies of muon decay both determine the overall strength and establish the chiral structure of weak interactions, as well as setting extraordinary limits on charged-lepton-flavor-violating processes. Measurements of the muon's anomalous magnetic moment offer singular sensitivity to the completeness of the standard model and the predictions of many speculative theories. Spectroscopy of muonium and muonic atoms gives unmatched determinations of fundamental quantities including the magnetic moment ratio μμ /μp, lepton mass ratio mμ /me, and proton charge radius rp. Also, muon capture experiments are exploring elusive features of weak interactions involving nucleons and nuclei. We will review the experimental landscape of contemporary high-precision and high-sensitivity experiments with muons. One focus is the novel methods and ingenious techniques that achieve such precision and sensitivity in recent, present, and planned experiments. Another focus is the uncommonly broad and topical range of questions in atomic, nuclear and particle physics that such experiments explore.

  1. Precision Joining Center

    SciTech Connect

    Powell, J.W.; Westphal, D.A.

    1991-08-01

    A workshop to obtain input from industry on the establishment of the Precision Joining Center (PJC) was held on July 10--12, 1991. The PJC is a center for training Joining Technologists in advanced joining techniques and concepts in order to promote the competitiveness of US industry. The center will be established as part of the DOE Defense Programs Technology Commercialization Initiative, and operated by EG&G Rocky Flats in cooperation with the American Welding Society and the Colorado School of Mines Center for Welding and Joining Research. The overall objectives of the workshop were to validate the need for a Joining Technologist to fill the gap between the welding operator and the welding engineer, and to assure that the PJC will train individuals to satisfy that need. The consensus of the workshop participants was that the Joining Technologist is a necessary position in industry, and is currently used, with some variation, by many companies. It was agreed that the PJC core curriculum, as presented, would produce a Joining Technologist of value to industries that use precision joining techniques. The advantage of the PJC would be to train the Joining Technologist much more quickly and more completely. The proposed emphasis of the PJC curriculum on equipment-intensive and hands-on training was judged to be essential.

  2. Progressive Precision Surface Design

    SciTech Connect

    Duchaineau, M; Joy, KJ

    2002-01-11

    We introduce a novel wavelet decomposition algorithm that makes a number of powerful new surface design operations practical. Wavelets, and hierarchical representations generally, have held promise to facilitate a variety of design tasks in a unified way by approximating results very precisely, thus avoiding a proliferation of undergirding mathematical representations. However, traditional wavelet decomposition is defined from fine to coarse resolution, thus limiting its efficiency for highly precise surface manipulation when attempting to create new non-local editing methods. Our key contribution is the progressive wavelet decomposition algorithm, a general-purpose coarse-to-fine method for hierarchical fitting, based in this paper on an underlying multiresolution representation called dyadic splines. The algorithm requests input via a generic interval query mechanism, allowing a wide variety of non-local operations to be quickly implemented. The algorithm performs work proportionate to the tiny compressed output size, rather than to some arbitrarily high resolution that would otherwise be required, thus increasing performance by several orders of magnitude. We describe several design operations that are made tractable because of the progressive decomposition. Free-form pasting is a generalization of the traditional control-mesh edit, but for which the shape of the change is completely general and where the shape can be placed using a free-form deformation within the surface domain. Smoothing and roughening operations are enhanced so that an arbitrary loop in the domain specifies the area of effect. Finally, the sculpting effect of moving a tool shape along a path is simulated.

  3. Precision flyer initiator

    DOEpatents

    Frank, A.M.; Lee, R.S.

    1998-05-26

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or ``flyer`` is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel from predetonating the HE pellet before the flyer. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices. 10 figs.

  4. Precise autofocusing microscope with rapid response

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Sheng; Jiang, Sheng-Hong

    2015-03-01

    Rapid on-line and off-line automated vision inspection is a critical operation in manufacturing. Accordingly, the present study designs and characterizes a novel precise optics-based autofocusing microscope with a rapid response and no reduction in focusing accuracy. In contrast to conventional optics-based autofocusing microscopes that use the centroid method, the proposed microscope incorporates a high-speed rotating optical diffuser, by which the variation of the image centroid position is reduced and, consequently, the focusing response is improved. The proposed microscope is characterized and verified experimentally using a laboratory-built prototype. The experimental results show that, compared to conventional optics-based autofocusing microscopes, the proposed microscope achieves a more rapid response with no reduction in focusing accuracy. Consequently, the proposed microscope represents another solution for both existing and emerging industrial applications of automated vision inspection.

  5. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions are missing an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  6. Visual inspection reliability for precision manufactured parts

    DOE PAGES Beta

    See, Judi E.

    2015-09-04

    Sandia National Laboratories conducted an experiment for the National Nuclear Security Administration to determine the reliability of visual inspection of precision manufactured parts used in nuclear weapons. Visual inspection has been extensively researched since the early 20th century; however, the reliability of visual inspection for nuclear weapons parts has not been addressed. In addition, the efficacy of using inspector confidence ratings to guide multiple inspections in an effort to improve overall performance accuracy is unknown. Further, the workload associated with inspection has not been documented, and newer measures of stress have not been applied.

  7. Digital image centering. I. [for precision astrometry

    NASA Technical Reports Server (NTRS)

    Van Altena, W. F.; Auer, L. H.

    1975-01-01

    A series of parallax plates have been measured on a PDS microdensitometer to assess the possibility of using the PDS for precision relative astrometry and to investigate centering algorithms that might be used to analyze digital images obtained with the Large Space Telescope. The basic repeatability of the PDS is found to be plus or minus 0.6 micron, with the potential for reaching plus or minus 0.2 micron. A very efficient centering algorithm has been developed which fits the marginal density distributions of the image with a Gaussian profile and a sloping background. The accuracy is comparable with the best results obtained with a photoelectric image bisector.
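
    A minimal sketch of the centering idea described above, assuming a Gaussian-plus-sloping-background model fitted to a synthetic marginal density profile (the original PDS analysis code is not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def profile(x, amp, x0, sigma, slope, offset):
    """Gaussian image profile sitting on a linear (sloping) background."""
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + slope * x + offset

# Synthetic marginal density distribution with true center at pixel 24.3
x = np.arange(50, dtype=float)
truth = profile(x, amp=100.0, x0=24.3, sigma=3.0, slope=0.2, offset=5.0)
rng = np.random.default_rng(0)
data = truth + rng.normal(0.0, 1.0, x.size)      # add measurement noise

# Rough starting guesses, then a nonlinear least-squares fit
p0 = [data.max(), float(np.argmax(data)), 2.0, 0.0, data.min()]
popt, _ = curve_fit(profile, x, data, p0=p0)
center = popt[1]    # recovered to a small fraction of a pixel
```

    Fitting the two marginal distributions (x and y) in this way gives a sub-pixel image center, which is what makes micron-level repeatability on the microdensitometer plausible.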

  8. Precise and automated microfluidic sample preparation.

    SciTech Connect

    Crocker, Robert W.; Patel, Kamlesh D.; Mosier, Bruce P.; Harnett, Cindy K.

    2004-07-01

    Autonomous bio-chemical agent detectors require sample preparation involving multiplex fluid control. We have developed a portable microfluidic pump array for metering sub-microliter volumes at flowrates of 1-100 {micro}L/min. Each pump is composed of an electrokinetic (EK) pump and high-voltage power supply with 15-Hz feedback from flow sensors. The combination of high pump fluid impedance and active control results in precise fluid metering with nanoliter accuracy. Automated sample preparation will be demonstrated by labeling proteins with fluorescamine and subsequent injection to a capillary gel electrophoresis (CGE) chip.
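
    A hypothetical sketch of the closed-loop flow metering described above: a flow sensor sampled at 15 Hz feeds back to the high-voltage supply driving an electrokinetic pump. The pump model and PI gains here are illustrative assumptions, not the instrument's actual parameters:

```python
DT = 1.0 / 15.0          # 15 Hz feedback period (s)
GAIN_UL_PER_KV = 10.0    # assumed pump response: uL/min of flow per kV applied
KP, KI = 0.05, 0.8       # illustrative PI gains (kV per uL/min of error)

def run_controller(setpoint_ul_min, steps=300):
    """Drive an assumed linear EK pump model to the requested flowrate."""
    voltage_kv, integral, flow = 0.0, 0.0, 0.0
    for _ in range(steps):
        flow = GAIN_UL_PER_KV * voltage_kv    # idealized pump response to HV
        error = setpoint_ul_min - flow        # flow-sensor feedback
        integral += error * DT                # integral term removes offset
        voltage_kv = KP * error + KI * integral
    return flow

final_flow = run_controller(50.0)   # meter 50 uL/min
```

    The integral term is what gives the metering its accuracy: any steady offset between setpoint and sensed flow accumulates until the commanded voltage cancels it.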

  9. The GBT precision telescope control system

    NASA Astrophysics Data System (ADS)

    Prestage, Richard M.; Constantikes, Kim T.; Balser, Dana S.; Condon, James J.

    2004-10-01

    The NRAO Robert C. Byrd Green Bank Telescope (GBT) is a 100m diameter advanced single dish radio telescope designed for a wide range of astronomical projects with special emphasis on precision imaging. Open-loop adjustments of the active surface, and real-time corrections to pointing and focus on the basis of structural temperatures, already allow observations at frequencies up to 50 GHz. Our ultimate goal is to extend the observing frequency limit up to 115 GHz; this will require a two-dimensional tracking error better than 1.3", and an rms surface accuracy better than 210 μm. The Precision Telescope Control System project has two main components. One aspect is the continued deployment of appropriate metrology systems, including temperature sensors, inclinometers, laser rangefinders and other devices. An improved control system architecture will harness this measurement capability with the existing servo systems, to deliver the precision operation required. The second aspect is the execution of a series of experiments to identify, understand and correct the residual pointing and surface accuracy errors. These can have multiple causes, many of which depend on variable environmental conditions. A particularly novel approach is to solve simultaneously for gravitational, thermal and wind effects in the development of the telescope pointing and focus tracking models. Our precision temperature sensor system has already allowed us to compensate for thermal gradients in the antenna, which were previously responsible for the largest "non-repeatable" pointing and focus tracking errors. We are now targeting the effects of wind as the next, currently uncompensated, source of error.
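
    The simultaneous solution can be pictured as a single regression. The sketch below is schematic (invented basis functions and coefficients, not the GBT's actual pointing model): pointing residuals are fit against one term per physical effect in one least-squares solve, rather than calibrating each effect separately:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
elevation = rng.uniform(10, 80, n) * np.pi / 180   # pointing elevations (rad)
delta_t = rng.normal(0.0, 2.0, n)                  # structural temp gradient (K)
wind = rng.uniform(0.0, 10.0, n) ** 2              # ~ wind pressure term

# Simulate pointing residuals from assumed "true" coefficients plus noise
true_coeffs = np.array([20.0, 1.5, 0.05])          # gravity, thermal, wind
basis = np.column_stack([np.cos(elevation), delta_t, wind])
residuals = basis @ true_coeffs + rng.normal(0.0, 0.3, n)   # arcsec

# One joint least-squares solve recovers all three effects at once
fit, *_ = np.linalg.lstsq(basis, residuals, rcond=None)
```

    Solving jointly matters because the effects are correlated in practice (wind and temperature both vary with time of day); separate one-effect calibrations would alias one term into another.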

  10. Reproducibility of Frankfort Horizontal Plane on 3D Multi-Planar Reconstructed MR Images

    PubMed Central

    Daboul, Amro; Schwahn, Christian; Schaffner, Grit; Soehnel, Silvia; Samietz, Stefanie; Aljaghsi, Ahmad; Habes, Mohammad; Hegenscheid, Katrin; Puls, Ralf; Klinke, Thomas; Biffar, Reiner

    2012-01-01

    Objective The purpose of this study was to determine the accuracy and reliability of Frankfort horizontal plane identification using displays of multi-planar reconstructed MRI images, and propose it as a sufficiently stable and standardized reference plane for craniofacial structures. Materials and Methods MRI images of 43 subjects were obtained from the longitudinal population based cohort study SHIP-2 using a T1-weighted 3D sequence. Five examiners independently identified the three landmarks that form the FH plane. Intra-examiner reproducibility and inter-examiner reliability were assessed for all landmark coordinates using intraclass correlation coefficients (ICC), coefficients of variation, and Bland-Altman plots. Intra-examiner reproducibility and inter-examiner reliability in terms of location and plane angulation were also assessed. Results Intra- and inter-examiner reliabilities for the X, Y and Z coordinates of all three landmarks were excellent, with ICC values ranging from 0.914 to 0.998. Differences among examiners were greater in the X and Z than in the Y dimensions. The Bland-Altman analysis demonstrated excellent intra- as well as inter-examiner agreement in all coordinates for all landmarks. Intra-examiner reproducibility and inter-examiner reliability of the three landmarks in terms of distance showed mean differences between 1.3 and 2.9 mm; mean differences in plane angulation were between 1.0° and 1.5° among examiners. Conclusion This study revealed excellent intra-examiner reproducibility and inter-examiner reliability of Frankfort horizontal plane identification through 3D landmark identification in MRI. This sufficiently stable landmark-based reference plane could be used for different treatments and studies. PMID:23118970
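
    A short sketch of the Bland-Altman agreement analysis mentioned above, with made-up landmark coordinates: for paired measurements by two examiners, compute the mean difference (bias) and the 95% limits of agreement, bias ± 1.96 × SD of the differences:

```python
import numpy as np

def bland_altman(a, b):
    """Return bias and (lower, upper) 95% limits of agreement for paired data."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()                    # systematic offset between raters
    spread = 1.96 * diff.std(ddof=1)      # 95% of differences fall within this
    return bias, (bias - spread, bias + spread)

# Hypothetical landmark coordinates (mm) marked by two examiners
ex1 = [51.2, 48.9, 50.4, 52.1, 49.7, 50.9]
ex2 = [51.5, 48.6, 50.1, 52.4, 49.9, 50.6]
bias, (lo, hi) = bland_altman(ex1, ex2)
```

    Unlike a correlation coefficient, the limits of agreement are in the measurement's own units (here mm), so they can be compared directly against a clinically acceptable tolerance.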

  11. The Effect of Strength Training on Fractionalized Accuracy.

    ERIC Educational Resources Information Center

    Gronbech, C. Eric

    The role of the strength factor in the accomplishment of precision tasks was investigated. Forty adult males weight trained to develop physical strength in several muscle groups, particularly in the elbow flexor area. Results indicate a decrease in incidence of accuracy concurrent with an increase in muscle strength. This suggests that in order to…

  12. Precise Orbit Determination for Altimeter Satellites

    NASA Astrophysics Data System (ADS)

    Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Lemoine, F. G.; Beckley, B. B.; Wang, Y.; Chinn, D. S.

    2002-05-01

    Orbit error remains a critical component in the error budget for all radar altimeter missions. This paper describes the ongoing work at GSFC to improve orbits for three radar altimeter satellites: TOPEX/POSEIDON (T/P), Jason, and Geosat Follow-On (GFO). T/P has demonstrated that the time variation of ocean topography can be determined with an accuracy of a few centimeters, thanks to the availability of highly accurate orbits (2-3 cm radially) produced at GSFC. Jason, the T/P follow-on, is intended to continue measurement of the ocean surface with the same, if not better, accuracy. Reaching the Jason centimeter accuracy orbit goal would greatly benefit the knowledge of ocean circulation. Several new POD strategies which promise significant improvement to the current T/P orbit are evaluated over one year of data. Also, preliminary but very promising Jason POD results are presented. Orbit improvement for GFO has been dramatic, and has allowed this mission to provide a POSEIDON-class altimeter product. The GFO Precise Orbit Ephemeris (POE) orbits are based on satellite laser ranging (SLR) tracking supplemented with GFO/GFO altimeter crossover data. The accuracy of these orbits was evaluated using several tests, including independent TOPEX/GFO altimeter crossover data. The orbit improvements are shown over the years 2000 and 2001 for which the POEs have been completed.

  13. Truss Assembly and Welding by Intelligent Precision Jigging Robots

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2014-01-01

    This paper describes an Intelligent Precision Jigging Robot (IPJR) prototype that enables the precise alignment and welding of titanium space telescope optical benches. The IPJR, equipped with micron accuracy sensors and actuators, worked in tandem with a lower precision remote controlled manipulator. The combined system assembled and welded a 2 m truss from stock titanium components. The calibration of the IPJR, and the difference between the predicted and as-built truss dimensions, identified additional sources of error that should be addressed in the next generation of IPJRs in 2D and 3D.

  14. Submicron accuracy optimization for laser beam soldering processes

    NASA Astrophysics Data System (ADS)

    Beckert, Erik; Burkhardt, Thomas; Hornaff, Marcel; Kamm, Andreas; Scheidig, Ingo; Stiehl, Cornelia; Eberhardt, Ramona; Tünnermann, Andreas

    2010-02-01

    Laser beam soldering is a packaging technology alternative to polymeric adhesive bonding in terms of stability and functionality. Nevertheless, when packaging micro-optical and MOEMS systems in particular, this technology has to fulfil stringent requirements for accuracy in the micron and submicron range. Investigations of the assembly of several laser optical systems have shown that micron accuracy and submicron reproducibility can be reached when using design-of-experiment-optimized solder processes based on applying liquid solder drops ("solder bumping") onto wettable, metalized joining surfaces of optical components. The soldered assemblies were also subjected to thermal cycling and vibration/shock testing.

  15. Operating a real time high accuracy positioning system

    NASA Astrophysics Data System (ADS)

    Johnston, G.; Hanley, J.; Russell, D.; Vooght, A.

    2003-04-01

    The paper shall review the history and development of real-time DGPS services before describing the design of a high accuracy GPS commercial augmentation system and service currently delivering over a wide area to users of precise positioning products. The infrastructure and system shall be explained in relation to the need for high accuracy and high integrity of positioning for users. A comparison of the different techniques for the delivery of data shall be provided to outline the technical approach taken. Examples of the performance of the real-time system shall be shown in various regions and modes to outline the current achievable accuracies. Having described and established the current GPS based situation, a review of the potential of the Galileo system shall be presented. Following brief contextual information relating to the Galileo project, core system and services, the paper will identify possible key applications and the main user communities for sub-decimetre level precise positioning. The paper will address the Galileo and modernised GPS signals in space that are relevant to commercial precise positioning for the future and will discuss the implications for precise positioning performance. An outline of the proposed architecture shall be described and associated with pointers towards a successful implementation. Central to this discussion will be an assessment of the likely evolution of system infrastructure and user equipment implementation, prospects for new applications and their effect upon the business case for precise positioning services.

  16. Reproducible and controllable induction voltage adder for scaled beam experiments.

    PubMed

    Sakai, Yasuo; Nakajima, Mitsuo; Horioka, Kazuhiko

    2016-08-01

    A reproducible and controllable induction adder was developed using solid-state switching devices and Finemet cores for scaled beam compression experiments. A gate controlled MOSFET circuit was developed for the controllable voltage driver. The MOSFET circuit drove the induction adder at low magnetization levels of the cores which enabled us to form reproducible modulation voltages with jitter less than 0.3 ns. Preliminary beam compression experiments indicated that the induction adder can improve the reproducibility of modulation voltages and advance the beam physics experiments. PMID:27587112

  17. Precision Joining Center

    NASA Technical Reports Server (NTRS)

    Powell, John W.

    1991-01-01

    The establishment of a Precision Joining Center (PJC) is proposed. The PJC will be a cooperatively operated center with participation from U.S. private industry, the Colorado School of Mines, and various government agencies, including the Department of Energy's Nuclear Weapons Complex (NWC). The PJC's primary mission will be as a training center for advanced joining technologies. This will accomplish the following objectives: (1) it will provide an effective mechanism to transfer joining technology from the NWC to private industry; (2) it will provide a center for testing new joining processes for the NWC and private industry; and (3) it will provide highly trained personnel to support advance joining processes for the NWC and private industry.

  18. High Accuracy Wavelength Calibration For A Scanning Visible Spectrometer

    SciTech Connect

    Filippo Scotti and Ronald Bell

    2010-07-29

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤ 0.2 Å. An automated calibration for a scanning spectrometer has been developed to achieve high wavelength accuracy over the visible spectrum, stable over time and environmental conditions, without the need to recalibrate after each grating movement. The method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, accuracies of ~0.25 Å have been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements within ~0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.

  19. High accuracy wavelength calibration for a scanning visible spectrometer.

    PubMed

    Scotti, Filippo; Bell, Ronald E

    2010-10-01

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ∼0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (∼0.005 Å) is possible, allowing absolute velocity measurements within ∼0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively. PMID:21033925
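
    A simplified sketch of the calibration idea: with a sine drive, the grating equation makes wavelength nearly linear in drive position, so reference lamp lines at known wavelengths constrain the spectrometer parameters in one least-squares fit. The drive positions below are invented, and the two-parameter linear model is a stand-in for the authors' full multi-parameter fit; the wavelengths are standard Hg/Cd lamp lines:

```python
import numpy as np

# Invented sine-drive positions paired with known Hg/Cd calibration lines
steps = np.array([11016.4, 14298.2, 17664.7, 21956.0])   # drive positions
lines = np.array([4046.56, 4358.33, 4678.15, 5085.82])   # wavelengths (Angstrom)

# Sine-drive model: lambda ~ c0 + c1 * steps.
# Solve for the offset and dispersion jointly from all calibration lines.
design = np.column_stack([np.ones_like(steps), steps])
(c0, c1), *_ = np.linalg.lstsq(design, lines, rcond=None)

predicted = c0 + c1 * steps
rms_error = np.sqrt(np.mean((lines - predicted) ** 2))   # fit residual, Angstrom
```

    Because every grating position is predicted from the same fitted parameters, no per-position recalibration is needed; the residual of the joint fit is the figure of merit for the whole scan range.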

  20. Precision Neutron Polarimetry

    NASA Astrophysics Data System (ADS)

    Sharma, Monisha; Barron-Palos, L.; Bowman, J. D.; Chupp, T. E.; Crawford, C.; Danagoulian, A.; Klein, A.; Penttila, S. I.; Salas-Bacci, A. F.; Wilburn, W. S.

    2008-04-01

    The proposed PANDA and abBA experiments aim to measure the correlation coefficients in polarized neutron beta decay at the SNS. The goal of these experiments is a 0.1% measurement, which will require neutron polarimetry at the 0.1% level. The FnPB neutron beam will be polarized using either a ^3He spin filter or a supermirror polarizer, and the neutron polarization will be measured using a ^3He spin filter. An experiment to establish the accuracy to which neutron polarization can be determined using ^3He spin filters was performed at Los Alamos National Laboratory in summer 2007, and the analysis is in progress. The details of the experiment and the results will be presented.