Science.gov

Sample records for accuracy precision reproducibility

  1. Community-based Approaches to Improving Accuracy, Precision, and Reproducibility in U-Pb and U-Th Geochronology

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Condon, D. J.; Bowring, S. A.; Schoene, B.; Dutton, A.; Rubin, K. H.

    2015-12-01

The last two decades have seen a grassroots effort by the international geochronology community to "calibrate Earth history through teamwork and cooperation," both as part of the EARTHTIME initiative and through several daughter projects with similar goals. Its mission originally challenged laboratories "to produce temporal constraints with uncertainties approaching 0.1% of the radioisotopic ages," but EARTHTIME has since exceeded its charge in many ways. Both the U-Pb and Ar-Ar chronometers first considered for high-precision timescale calibration now regularly produce dates at the sub-per-mil level thanks to instrumentation, laboratory, and software advances, and newer isotope systems, including U-Th dating of carbonates, have reached comparable precision. But the larger, inter-related scientific challenges envisioned at EARTHTIME's inception remain - for instance, precisely calibrating the global geologic timescale, estimating rates of change around major climatic perturbations, and understanding evolutionary rates through time - and they increasingly require that data from multiple geochronometers be combined. Solving these problems over the next two decades of uranium-daughter geochronology will require further advances in accuracy, precision, and reproducibility. The U-Th system has much in common with U-Pb: both parent and daughter isotopes are solids that can easily be weighed and dissolved in acid, and both have well-characterized reference materials certified for isotopic composition and/or purity. For U-Pb, improving lab-to-lab reproducibility has entailed dissolving precisely weighed U and Pb metals of known purity and isotopic composition together to make gravimetric solutions, then using these to calibrate widely distributed tracers composed of artificial U and Pb isotopes. To mimic laboratory measurements, naturally occurring U and Pb isotopes were also mixed in proportions corresponding to samples of three different ages, to be run as internal

  2. Accuracy and precision of manual baseline determination.

    PubMed

    Jirasek, A; Schulze, G; Yu, M M L; Blades, M W; Turner, R F B

    2004-12-01

Vibrational spectra often require baseline removal before further data analysis can be performed. Manual (i.e., user-performed) baseline determination and removal is a common way to perform this operation, yet little published data details the accuracy and precision that can be expected from it. This study addresses that gap. One hundred spectra of varying signal-to-noise ratio (SNR), signal-to-baseline ratio (SBR), baseline slope, and spectral congestion were constructed, and baselines were subtracted by 16 volunteers categorized as either experienced or inexperienced in baseline determination. In total, 285 baseline determinations were performed. The general level of accuracy and precision that can be expected for manually determined baselines from spectra of varying SNR, SBR, baseline slope, and spectral congestion is established. Furthermore, the effects of user experience on the accuracy and precision of baseline determination are estimated, and the interactions between the above factors are highlighted. Where possible, the functional relationships between accuracy, precision, and a given spectral characteristic are detailed. The results give users of manual baseline determination useful guidelines for establishing limits of accuracy and precision, and highlight conditions that confound both.
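The operation the study evaluates is easy to sketch. Below is a minimal, hypothetical Python illustration (the spectrum, peak, and anchor channels are all invented, not taken from the paper) of subtracting a manually chosen linear baseline and scoring the residual baseline error:

```python
import math
import statistics

# Hypothetical illustration of manual baseline removal: a synthetic spectrum
# is built from a known linear baseline plus one Gaussian peak, a "user"
# picks two peak-free anchor channels, and the straight line through those
# anchors is subtracted.

def gaussian(x, center, width, height):
    return height * math.exp(-0.5 * ((x - center) / width) ** 2)

n = 200
true_baseline = [5.0 + 0.02 * i for i in range(n)]             # sloped baseline
spectrum = [true_baseline[i] + gaussian(i, 100, 8, 50.0) for i in range(n)]

# "Manually" chosen anchors in peak-free regions (channels 10 and 190).
x0, x1 = 10, 190
slope = (spectrum[x1] - spectrum[x0]) / (x1 - x0)
estimated = [spectrum[x0] + slope * (i - x0) for i in range(n)]
corrected = [s - b for s, b in zip(spectrum, estimated)]

# Accuracy of the baseline estimate: residual error away from the peak.
residuals = [true_baseline[i] - estimated[i] for i in range(n) if abs(i - 100) > 40]
mean_error = statistics.mean(residuals)
```

With well-chosen anchors the residual error is essentially zero; the noise, congestion, and baseline curvature of real spectra are exactly what make the human choices the study quantifies nontrivial.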

  3. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

Based on the spatial relation between a primary and secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories vary with the applied method of reconstruction, the (true) angle of incidence, the properties of the target material, and the properties of the bullet upon impact. This study focused on the accuracy and precision of trajectories estimated when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF, and sheet metal. In most situations the probing method gave the best performance (accuracy and precision); only at the lowest angles of incidence did the ellipse or lead-in method perform better. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to attach a confidence interval (precision) to a specific measurement. PMID:27044032
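As commonly described in the trajectory-reconstruction literature (this is general background, not a formula quoted from the paper), the ellipse method estimates the angle of incidence from the aspect ratio of an elliptical defect. A minimal sketch, with invented defect dimensions:

```python
import math

# Ellipse method as commonly described: for an elliptical bullet defect,
# the angle of incidence is estimated from the ratio of the defect's width
# (minor axis) to its length (major axis).  Dimensions are illustrative.

def ellipse_impact_angle(width_mm, length_mm):
    """Estimated angle of incidence (degrees) from defect dimensions."""
    return math.degrees(math.asin(width_mm / length_mm))

# A circular defect implies perpendicular impact; an elongated defect
# implies a shallow angle of incidence.
angle_circular = ellipse_impact_angle(9.0, 9.0)
angle_shallow = ellipse_impact_angle(9.0, 18.0)
```

Because the arcsine flattens near 90 degrees and steepens near 0, small measurement errors in the axes translate into very different angle errors across the range, which is one reason method performance depends on the true angle of incidence.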

  4. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J. Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

Image-guided radiotherapy (IGRT) can potentially improve the accuracy of radiotherapy delivery by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivering highly complex head-and-neck intensity-modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom, and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To keep noise below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients

  5. [History, accuracy and precision of SMBG devices].

    PubMed

    Dufaitre-Patouraux, L; Vague, P; Lassmann-Vague, V

    2003-04-01

Self-monitoring of blood glucose began only about fifty years ago; until then, metabolic control was evaluated by qualitative urinary glucose measurements of often poor reliability. Reagent strips were the first semi-quantitative tests for monitoring blood glucose, and in the late 1970s meters were launched on the market. Initially intended for medical staff, these devices became progressively easier to handle and are now a necessary tool for self-monitoring by patients. Advances in technology enabled photometric measurement and, more recently, electrochemical measurement. In the 1990s, improvements were made mainly in miniaturisation, shorter reaction and reading times, and simpler blood sampling and sample application. Although accuracy and precision were central concerns from the beginning of self-monitoring, recommendations from diabetology societies appeared only in the late 1980s. The French drug agency (AFSSAPS) now requires that meters be evaluated before any market launch. According to recent publications, very few meters meet the reliability criteria set by diabetology societies in the late 1990s. Finally, because devices in hospitals may be handled by many people, meters have recently been questioned as a possible source of nosocomial infections and are now subject to strict guidelines published by AFSSAPS.

  6. Color accuracy and reproducibility in whole slide imaging scanners

    PubMed Central

    Shrestha, Prarthana; Hulsken, Bas

    2014-01-01

Abstract We propose a workflow for color reproduction in whole slide imaging (WSI) scanners, such that the colors in the scanned images match the actual slide colors and inter-scanner variation is minimal. We describe a new method of preparation and verification of the color phantom slide, consisting of a standard IT8-target transmissive film, which is used in color calibrating and profiling the WSI scanner. We explore several International Color Consortium (ICC) compliant techniques for color calibration/profiling and rendering intents for translating the scanner-specific colors to the standard display (sRGB) color space. Based on the quality of the color reproduction in histopathology slides, we propose the matrix-based calibration/profiling and absolute colorimetric rendering approach. The main advantages of the proposed workflow are that it is compliant with the ICC standard, applicable to color management systems on different platforms, and involves no external color measurement devices. We quantify color difference using the CIE-DeltaE2000 metric, where DeltaE values below 1 are considered imperceptible. Our evaluation of 14 phantom slides, manufactured according to the proposed method, shows an average inter-slide color difference below 1 DeltaE. The proposed workflow is implemented and evaluated in 35 WSI scanners developed at Philips, called the Ultra Fast Scanners (UFS). The color accuracy, measured as DeltaE between the scanner-reproduced colors and the reference colorimetric values of the phantom patches, improves on average from 10 DeltaE in uncalibrated scanners to 3.5 DeltaE in calibrated scanners. The average inter-scanner color difference is found to be 1.2 DeltaE. The improvement in color performance upon using the proposed method is apparent in the visual color quality of the tissue scans. PMID:26158041
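The paper quantifies color difference with CIE-DeltaE2000, which adds weighting terms for lightness, chroma, and hue. As a simpler hedged sketch of the underlying idea, the original CIE76 formula below is just the Euclidean distance between two colors in CIELAB space; the patch values are invented for illustration:

```python
import math

# CIE76 color difference: Euclidean distance in CIELAB space.  (DeltaE2000,
# used in the paper, is considerably more involved.)  The reference and
# measured Lab triples below are hypothetical.

def delta_e_76(lab1, lab2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

reference = (50.0, 2.0, -3.0)   # hypothetical phantom patch, CIELAB
measured = (50.5, 2.2, -3.4)    # hypothetical scanner reproduction

de = delta_e_76(reference, measured)  # below 1 => imperceptible, by the rule of thumb
```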

  7. Scatterometry measurement precision and accuracy below 70 nm

    NASA Astrophysics Data System (ADS)

    Sendelbach, Matthew; Archie, Charles N.

    2003-05-01

Scatterometry is a contender for various measurement applications where structure widths and heights can be significantly smaller than 70 nm within one or two ITRS generations. For example, feedforward process control in post-lithography transistor gate formation is being actively pursued by a number of RIE tool manufacturers. Several commercial forms of scatterometry are available or under development that promise satisfactory performance in this regime. Scatterometry, as commercially practiced today, involves analyzing the zeroth-order reflected light from a grating of lines. Normal-incidence spectroscopic reflectometry, 2-theta fixed-wavelength ellipsometry, and spectroscopic ellipsometry are among the optical techniques, while library-based spectra matching and real-time regression are among the analysis techniques. All these commercial forms will find accurate and precise measurement a challenge when the material constituting the critical structure approaches a very small volume. Equally challenging is executing an evaluation methodology that first determines the true properties (critical dimensions and materials) of semiconductor wafer artifacts and then compares the measurement performance of several scatterometers. How well do scatterometers track process-induced changes in bottom CD and sidewall profile? This paper introduces a general 3D metrology assessment methodology and reports upon work involving sub-70 nm structures and several scatterometers. The methodology combines results from multiple metrologies (CD-SEM, CD-AFM, TEM, and XSEM) to form a Reference Measurement System (RMS). The methodology determines how well the scatterometry measurement tracks critical structure changes even in the presence of other noncritical changes that take place at the same time; these are key components of accuracy. Because the assessment rewards scatterometers that measure with good precision (reproducibility) and good accuracy, the most precise

  8. Accuracy and reproducibility of bending stiffness measurements by mechanical response tissue analysis in artificial human ulnas.

    PubMed

    Arnold, Patricia A; Ellerbrock, Emily R; Bowman, Lyn; Loucks, Anne B

    2014-11-01

Osteoporosis is characterized by reduced bone strength, but no FDA-approved medical device measures bone strength. Bone strength is strongly associated with bone stiffness, but no FDA-approved medical device measures bone stiffness either. Mechanical Response Tissue Analysis (MRTA) is a non-significant risk, non-invasive, radiation-free, vibration analysis technique for making immediate, direct functional measurements of the bending stiffness of long bones in humans in vivo. MRTA has been used for research purposes for more than 20 years, but little has been published about its accuracy. To begin to investigate its accuracy, we compared MRTA measurements of bending stiffness in 39 artificial human ulna bones to measurements made by Quasistatic Mechanical Testing (QMT). In the process, we also quantified the reproducibility (i.e., precision and repeatability) of both methods. MRTA precision (1.0 ± 1.0%) and repeatability (3.1 ± 3.1%) were not as high as those of QMT (0.2 ± 0.2% and 1.3 ± 1.7%, respectively; both p < 10⁻⁴). The relationship between MRTA and QMT measurements of ulna bending stiffness was indistinguishable from the identity line (p=0.44), and paired measurements by the two methods agreed within a 95% confidence interval of ± 5%. If such accuracy can be achieved on real human ulnas in situ, and if the ulna is representative of the appendicular skeleton, MRTA may prove clinically useful. PMID:25261885
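The precision figures quoted above are of the kind usually computed as within-specimen coefficients of variation (CV) of repeated measurements. A hedged sketch under that assumption, with made-up stiffness values (the paper's raw data are not reproduced here):

```python
import statistics

# Precision taken here as the within-specimen coefficient of variation (CV)
# of back-to-back repeat measurements.  All stiffness values are invented
# for illustration only.

def percent_cv(values):
    """Coefficient of variation of repeated measurements, in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Three hypothetical repeated bending-stiffness measurements of one
# artificial ulna by each method (arbitrary stiffness units).
mrta_repeats = [41.2, 41.7, 40.9]
qmt_repeats = [40.10, 40.14, 40.08]

cv_mrta = percent_cv(mrta_repeats)   # noisier method -> larger CV
cv_qmt = percent_cv(qmt_repeats)     # stiffer protocol -> smaller CV
```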

  10. Periotest values: Its reproducibility, accuracy, and variability with hormonal influence

    PubMed Central

    Chakrapani, Swarna; Goutham, Madireddy; Krishnamohan, Thota; Anuparthy, Sujitha; Tadiboina, Nagarjuna; Rambha, Somasekhar

    2015-01-01

Tooth mobility can be assessed by both subjective and objective means. Subjective measures may introduce bias, so it is imperative to use objective means to assess tooth mobility. It has also been observed that hormonal fluctuations may significantly influence tooth mobility. Aims: The study was undertaken to assess the reproducibility of the Periotest in the assessment of tooth mobility and to clarify the hormonal influence on tooth mobility. Materials and Methods: 100 subjects were included in the study and divided equally into two groups based on age: group I (11-14 years) and group II (16-22 years). Results: There was no statistically significant difference between the Periotest values (PTV) taken at two time points 20 minutes apart. Group I had a statistically significantly greater PTV than group II. Conclusion: The Periotest can reliably measure tooth mobility. Tooth mobility is greater during puberty than during adolescence, and during adolescence mobility was slightly greater in males. PMID:25684904

  11. Examination of the Position Accuracy of Implant Abutments Reproduced by Intra-Oral Optical Impression

    PubMed Central

    Odaira, Chikayuki; Kobayashi, Takuya; Kondo, Hisatomo

    2016-01-01

An impression technique called the optical impression, using an intraoral scanner, has attracted attention in digital dentistry. This study aimed to evaluate the accuracy of the optical impression by comparing a virtual model reproduced by an intraoral scanner to a working cast made by the conventional silicone impression technique. Two implants were placed on a master model, and plaster working casts were fabricated from it by silicone impression. As a control, the distance between the ball abutments and the angulation between healing abutments of 5 mm and 7 mm height on the master model were measured using a Computer Numerical Control Coordinate Measuring Machine (CNCCMM). Working casts were then measured using the CNCCMM, and virtual models, via stereolithography data of the master model, were measured with three-dimensional analysis software. The distance between ball abutments of the master model was 9634.9 ± 1.2 μm. The mean trueness (deviation from the control) of the Lava COS and the working casts was 64.5 μm and 22.5 μm, respectively, and the mean precision was 15.6 μm and 13.5 μm, respectively. For the 5-mm healing abutment, the mean angulation error of the Lava COS was greater than that of the working cast, with significant differences in both trueness and precision; for the 7-mm abutment, the angulation errors of the two methods did not differ significantly in trueness or precision. Distance errors of the optical impression were therefore slightly greater than those of the conventional impression, and the trueness and precision of the angulation error can be improved by using longer healing abutments. In the near future, developments in information technology may further improve the accuracy of the optical impression with intraoral scanners. PMID:27706225
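Trueness and precision as used in this abstract follow the usual ISO-style split: trueness is the deviation of the mean measurement from the reference value, precision the spread of repeats. A minimal sketch with invented scan values (the reference distance is the one quoted above; the scan values are not the paper's data):

```python
import statistics

# Trueness = |mean(measurements) - reference|; precision = spread of the
# repeated measurements.  The four virtual-model scan values are invented.

reference_um = 9634.9                          # master-model distance (CMM)
scans_um = [9700.1, 9698.4, 9701.0, 9699.6]    # hypothetical scanner repeats

trueness_um = abs(statistics.mean(scans_um) - reference_um)
precision_um = statistics.stdev(scans_um)
```

Note that a method can be precise (small spread) yet poor in trueness (a systematic offset), which is exactly the distinction drawn between the scanner and the cast in this study.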

12. A study of Laserruler accuracy and precision (1986-1987)

    SciTech Connect

    Ramachandran, R.S.; Armstrong, K.P.

    1989-06-22

A study was conducted to investigate Laserruler accuracy and precision. Tests were performed on 0.050 in., 0.100 in., and 0.120 in. gauge block standards. Results showed an accuracy of 3.7 μin. for the 0.120 in. standard, with higher accuracies for the two thinner blocks. The Laserruler precision was 4.83 μin. for the 0.120 in. standard, 3.83 μin. for the 0.100 in. standard, and 4.2 μin. for the 0.050 in. standard.

  13. Precision and accuracy in diffusion tensor magnetic resonance imaging.

    PubMed

    Jones, Derek K

    2010-04-01

This article reviews some of the key factors influencing the accuracy and precision of quantitative metrics derived from diffusion magnetic resonance imaging data. It follows the study pipeline from the choice of imaging protocol, through preprocessing and model fitting, to the extraction of quantitative estimates for subsequent analysis. The aim is to give newcomers to the field enough understanding of how their decisions at each stage impact precision and accuracy to design their studies and to use diffusion tensor magnetic resonance imaging in the clinic. Specific emphasis is placed on improving accuracy and precision, and I illustrate how careful choices along the way can substantially affect the sample size needed to make an inference from the data.

  14. Accuracy and precision of temporal artery thermometers in febrile patients.

    PubMed

    Wolfson, Margaret; Granstrom, Patsy; Pomarico, Bernie; Reimanis, Cathryn

    2013-01-01

    The noninvasive temporal artery thermometer offers a way to measure temperature when oral assessment is contraindicated, uncomfortable, or difficult to obtain. In this study, the accuracy and precision of the temporal artery thermometer exceeded levels recommended by experts for use in acute care clinical practice.

  15. Accuracy-precision trade-off in visual orientation constancy.

    PubMed

    De Vrijer, M; Medendorp, W P; Van Gisbergen, J A M

    2009-02-09

    Using the subjective visual vertical task (SVV), previous investigations on the maintenance of visual orientation constancy during lateral tilt have found two opposite bias effects in different tilt ranges. The SVV typically shows accurate performance near upright but severe undercompensation at tilts beyond 60 deg (A-effect), frequently with slight overcompensation responses (E-effect) in between. Here we investigate whether a Bayesian spatial-perception model can account for this error pattern. The model interprets A- and E-effects as the drawback of a computational strategy, geared at maintaining visual stability with optimal precision at small tilt angles. In this study, we test whether these systematic errors can be seen as the consequence of a precision-accuracy trade-off when combining a veridical but noisy signal about eye orientation in space with the visual signal. To do so, we used a psychometric approach to assess both precision and accuracy of the SVV in eight subjects laterally tilted at 9 different tilt angles (-120 degrees to 120 degrees). Results show that SVV accuracy and precision worsened with tilt angle, according to a pattern that could be fitted quite adequately by the Bayesian model. We conclude that spatial vision essentially follows the rules of Bayes' optimal observer theory.
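The Bayesian account above reduces, in its simplest Gaussian form, to a precision-weighted average of a noisy tilt signal with a prior that the head is usually upright. The sketch below uses assumed noise levels (not the paper's fitted parameters) to show how this produces the tilt-dependent undercompensation (A-effect):

```python
# Minimal Gaussian sketch of the Bayesian model: the posterior (MAP) tilt
# estimate combines a noisy sensed tilt with a zero-mean "upright" prior.
# All noise magnitudes are illustrative assumptions.

def posterior_tilt(sensed_tilt_deg, sensory_sd, prior_sd):
    """MAP estimate combining a Gaussian tilt signal with an upright prior."""
    w = prior_sd ** 2 / (prior_sd ** 2 + sensory_sd ** 2)
    return w * sensed_tilt_deg       # prior mean is 0 deg (upright)

# Sensory noise assumed to grow with tilt; prior sd held fixed at 15 deg.
small_tilt = posterior_tilt(10.0, sensory_sd=5.0, prior_sd=15.0)
large_tilt = posterior_tilt(100.0, sensory_sd=15.0, prior_sd=15.0)
```

At small tilt the estimate stays near veridical (accuracy is traded away only slightly for precision), while at large tilt, where the sensory signal is noisier, the prior pulls the estimate strongly toward upright: undercompensation, as in the A-effect.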

  16. The Plus or Minus Game - Teaching Estimation, Precision, and Accuracy

    NASA Astrophysics Data System (ADS)

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-03-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in TPT (Larry Weinstein's "Fermi Questions.") For several years the authors (a college physics professor, a retired algebra teacher, and a fifth-grade teacher) have been playing a game, primarily at home to challenge each other for fun, but also in the classroom as an educational tool. We call the game "The Plus or Minus Game." The game combines estimation with the principle of precision and uncertainty in a competitive and fun way.

  17. Fluorescence Axial Localization with Nanometer Accuracy and Precision

    SciTech Connect

    Li, Hui; Yen, Chi-Fu; Sivasankar, Sanjeevi

    2012-06-15

    We describe a new technique, standing wave axial nanometry (SWAN), to image the axial location of a single nanoscale fluorescent object with sub-nanometer accuracy and 3.7 nm precision. A standing wave, generated by positioning an atomic force microscope tip over a focused laser beam, is used to excite fluorescence; axial position is determined from the phase of the emission intensity. We use SWAN to measure the orientation of single DNA molecules of different lengths, grafted on surfaces with different functionalities.
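The core idea, phase of the emission intensity mapping to axial position, can be sketched in a few lines. The wavelength, node spacing, and phase value below are illustrative assumptions, not the instrument's actual parameters:

```python
import math

# In a standing wave the excitation intensity is periodic in z, so the
# phase of the fluorescence modulation encodes axial position within one
# period.  Numbers here are illustrative, not SWAN's calibration values.

wavelength_nm = 488.0
period_nm = wavelength_nm / 2.0      # standing-wave intensity period (assumed)

def axial_position_nm(phase_rad):
    """Axial position within one period, from the measured phase."""
    return (phase_rad / (2.0 * math.pi)) * period_nm

# A phase shift of pi/2 corresponds to one quarter of the period.
z = axial_position_nm(math.pi / 2.0)
```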

  18. Assessing the accuracy and reproducibility of modality independent elastography in a murine model of breast cancer

    PubMed Central

    Weis, Jared A.; Flint, Katelyn M.; Sanchez, Violeta; Yankeelov, Thomas E.; Miga, Michael I.

    2015-01-01

    Abstract. Cancer progression has been linked to mechanics. Therefore, there has been recent interest in developing noninvasive imaging tools for cancer assessment that are sensitive to changes in tissue mechanical properties. We have developed one such method, modality independent elastography (MIE), that estimates the relative elastic properties of tissue by fitting anatomical image volumes acquired before and after the application of compression to biomechanical models. The aim of this study was to assess the accuracy and reproducibility of the method using phantoms and a murine breast cancer model. Magnetic resonance imaging data were acquired, and the MIE method was used to estimate relative volumetric stiffness. Accuracy was assessed using phantom data by comparing to gold-standard mechanical testing of elasticity ratios. Validation error was <12%. Reproducibility analysis was performed on animal data, and within-subject coefficients of variation ranged from 2 to 13% at the bulk level and 32% at the voxel level. To our knowledge, this is the first study to assess the reproducibility of an elasticity imaging metric in a preclinical cancer model. Our results suggest that the MIE method can reproducibly generate accurate estimates of the relative mechanical stiffness and provide guidance on the degree of change needed in order to declare biological changes rather than experimental error in future therapeutic studies. PMID:26158120

  19. Assessing the Accuracy of the Precise Point Positioning Technique

    NASA Astrophysics Data System (ADS)

    Bisnath, S. B.; Collins, P.; Seepersad, G.

    2012-12-01

The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years into a standard method for growing categories of positioning and navigation applications. The technique relies on single-receiver point positioning combined with precise satellite orbit and clock information and high-fidelity error modelling. The research presented here uniquely addresses the current accuracy of the technique, explains the limits of performance, and defines paths to improvement. For geodetic purposes, performance refers to daily static position accuracy. PPP processing of over 80 IGS stations over one week results in rms positioning errors of a few millimetres in the north and east components and a few centimetres in the vertical (all one-sigma values). Larger error statistics for real-time and kinematic processing are also given. GPS PPP with ambiguity resolution was also carried out, producing slight improvements over the float-solution results. These results are categorised into quality classes in order to analyse the root causes of the resultant accuracies: "best", "worst", multipath, site displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is the solution convergence period. Static, conventional solutions are slow to converge: approximately 35 minutes are required for 95% of solutions to reach 20 cm or better horizontal accuracy. Ambiguity resolution can significantly reduce this period without biasing solutions. Defining a PPP error budget is a complex task even with the resulting numerical assessment because, unlike the epoch-by-epoch processing of the Standard Positioning Service, PPP processing involves filtering. An attempt is made here to 1) define the magnitude of each error source in terms of range, 2) transform ranging error to position error via Dilution Of Precision (DOP), and 3) scale the DOP through the filtering process. The result is a deeper
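Steps 1 and 2 of the error-budget outline can be sketched numerically: combine individual ranging error sources root-sum-square into a user-equivalent range error, then scale by a DOP factor. The error magnitudes and the DOP value below are illustrative assumptions, not the paper's budget:

```python
import math

# Hypothetical PPP-style range error budget (metres).  Values are
# illustrative assumptions for precise orbit/clock products, not figures
# from the paper.
range_errors_m = {
    "orbit": 0.025,
    "clock": 0.03,
    "troposphere": 0.05,
    "multipath": 0.10,
}

# Step 1: combine sources root-sum-square into an equivalent range error.
uere_m = math.sqrt(sum(e ** 2 for e in range_errors_m.values()))

# Step 2: scale range error to position error via a DOP factor (assumed).
pdop = 2.0
position_error_m = pdop * uere_m
```

Step 3 of the outline, propagating this through the filter, is what makes the PPP budget harder than the epoch-by-epoch Standard Positioning Service case.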

  20. Gamma-Ray Peak Integration: Accuracy and Precision

    SciTech Connect

    Richard M. Lindstrom

    2000-11-12

The accuracy of singlet gamma-ray peak areas obtained by a peak analysis program is immaterial: if the same algorithm is used for sample measurement as for calibration, and if the peak shapes are similar, then biases in the integration method cancel. Reproducibility is the only important issue. Even the uncertainty of the areas computed by the program is trivial, because the true standard uncertainty can be assessed experimentally by repeated measurements of the same source. Reproducible peak integration was important in a recent standard reference material certification task. The primary tool used for spectrum analysis was SUM, a National Institute of Standards and Technology interactive program that sums peaks and subtracts a linear background, using the same channels to integrate all 20 spectra. For comparison, this work examines other peak integration programs. Unlike some published comparisons of peak-fitting performance in which synthetic spectra were used, this experiment used spectra collected for a real (though exacting) analytical project, analyzed by conventional software used in routine ways. Because both components of the 559- to 564-keV doublet are from ⁷⁶As, they were integrated together with SUM; the other programs, however, deconvoluted the peaks. A sensitive test of a fitting algorithm is the ratio of reported peak areas, and in almost all cases this ratio was much more variable than expected from the uncertainties reported by the program. Other comparisons to be reported indicate that peak integration is still an imperfect tool in the analysis of gamma-ray spectra.
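A SUM-style integration, summing a fixed peak window and subtracting a linear background estimated from flanking channels, is easy to sketch. The synthetic spectrum below is an assumption for illustration, not NIST data:

```python
import math

# Sum-and-subtract peak integration: peak-window sum minus a linear
# background estimated from flanking regions.  Synthetic spectrum only.

def net_peak_area(counts, peak_lo, peak_hi, bg_width):
    """Peak-window sum minus a flat/linear background from flanking channels."""
    left = counts[peak_lo - bg_width:peak_lo]
    right = counts[peak_hi + 1:peak_hi + 1 + bg_width]
    bg_per_channel = (sum(left) + sum(right)) / (2 * bg_width)
    n_channels = peak_hi - peak_lo + 1
    return sum(counts[peak_lo:peak_hi + 1]) - bg_per_channel * n_channels

# Synthetic spectrum: flat 100-count background plus a Gaussian peak of
# known area 5000 centered on channel 50 (sigma = 2 channels).
area, center, sigma = 5000.0, 50, 2.0
counts = [100.0 + area / (sigma * math.sqrt(2 * math.pi))
          * math.exp(-0.5 * ((i - center) / sigma) ** 2) for i in range(100)]

net = net_peak_area(counts, peak_lo=44, peak_hi=56, bg_width=5)
```

The small bias of the recovered area (the ±3-sigma window clips the tails, and the flanks pick up a little peak) is exactly the kind of systematic that cancels when the same channels and algorithm are used for every spectrum, which is the abstract's point.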

  1. Precision and accuracy of 3D lower extremity residua measurement systems

    NASA Astrophysics Data System (ADS)

    Commean, Paul K.; Smith, Kirk E.; Vannier, Michael W.; Hildebolt, Charles F.; Pilgram, Thomas K.

    1996-04-01

Accurate and reproducible geometric measurement of lower-extremity residua is required for custom prosthetic socket design. We compared spiral x-ray computed tomography (SXCT) and 3D optical surface scanning (OSS) with caliper measurements and evaluated the precision and accuracy of each system. Spiral volumetric CT surface and subsurface information was used to make external and internal measurements and finite element models (FEMs). SXCT and OSS were used to measure the residuum geometry of 13 below-knee (BK) adult amputees. Six markers were placed on each subject's BK residuum and on corresponding plaster casts, and distance measurements were taken to determine the precision and accuracy of each system. Solid models were created from spiral CT scan data sets with the prosthesis in situ under different loads using p-version finite element analysis (FEA). Tissue properties of the residuum were estimated iteratively and compared with values taken from the biomechanics literature. The OSS and SXCT measurements were precise within 1% in vivo and 0.5% on plaster casts, and accurate within 3.5% in vivo and 1% on plaster casts compared with caliper measures. Three-dimensional optical surface and SXCT imaging systems are feasible for capturing the comprehensive 3D surface geometry of BK residua, and they provide distance measurements statistically equivalent to calipers. In addition, SXCT can readily distinguish internal soft tissue and bony structure of the residuum, and FEM can be applied to determine tissue material properties interactively using inverse methods.

  2. High-Reproducibility and High-Accuracy Method for Automated Topic Classification

    NASA Astrophysics Data System (ADS)

    Lancichinetti, Andrea; Sirer, M. Irmak; Wang, Jane X.; Acuna, Daniel; Körding, Konrad; Amaral, Luís A. Nunes

    2015-01-01

    Much of human knowledge sits in large databases of unstructured text. Leveraging this knowledge requires algorithms that extract and record metadata on unstructured text documents. Assigning topics to documents will enable intelligent searching, statistical characterization, and meaningful classification. Latent Dirichlet allocation (LDA) is the state of the art in topic modeling. Here, we perform a systematic theoretical and numerical analysis that demonstrates that current optimization techniques for LDA often yield results that are not accurate in inferring the most suitable model parameters. Adapting approaches from community detection in networks, we propose a new algorithm that displays high reproducibility and high accuracy and also has high computational efficiency. We apply it to a large set of documents in the English Wikipedia and reveal its hierarchical structure.

  3. Study of highly precise outdoor characterization technique for photovoltaic modules in terms of reproducibility

    NASA Astrophysics Data System (ADS)

    Fukabori, Akihiro; Takenouchi, Takakazu; Matsuda, Youji; Tsuno, Yuki; Hishikawa, Yoshihiro

    2015-08-01

    In this study, novel outdoor measurements were conducted for highly precise characterization of photovoltaic (PV) modules by measuring current-voltage (I-V) curves with fast sweep speeds and module’s temperature, and with a PV sensor for reference. Fast sweep speeds suppressed the irradiance variation. As a result, smooth I-V curves were obtained and the PV parameter deviation was suppressed. The module’s temperature was measured by attaching resistive temperature detector sensors on the module’s backsheet. The PV sensor was measured synchronously with the PV module. The PV parameters including Isc, Pmax, Voc, and FF were estimated after correcting the I-V curves using the IEC standards. The reproducibility of Isc, Pmax, Voc, and FF relative to the outdoor fits was evaluated as 0.43, 0.58, 0.24, and 0.23%, respectively. The results demonstrate that highly precise measurements are possible using a PV measurement system with the three above-mentioned features.

  4. Accuracy and reproducibility of low dose insulin administration using pen-injectors and syringes

    PubMed Central

    Gnanalingham, M; Newland, P; Smith, C

    1998-01-01

Many children with diabetes require small doses of insulin administered with syringes or pen-injector devices (at the Booth Hall Paediatric Diabetic Clinic, 20% of children aged 0-5 years receive 1-2 U insulin doses). To determine how accurately and reproducibly small doses are delivered, 1, 2, 5, and 10 U doses of soluble insulin (100 U/ml) were dispensed in random order 15 times from five new NovoPens (1.5 ml), five BD-Pens (1.5 ml), and by five nurses using 30 U syringes. Each dose was weighed, and intended and actual doses compared. The two pen-injectors delivered less insulin than syringes, differences being inversely proportional to dose. For 1 U (mean (SD)): 0.89 (0.04) U (NovoPen), 0.92 (0.03) U (BD-Pen), 1.23 (0.09) U (syringe); and for 10 U: 9.8 (0.1) U (NovoPen), 9.9 (0.1) U (BD-Pen), 10.1 (0.1) U (syringe). The two pen-injectors were similar in accuracy (percentage error) and were more accurate than syringes when delivering 1, 2, and 5 U of insulin. Errors for 1 U: 11(4)% (NovoPen), 8(3)% (BD-Pen), 23(9)% (syringe). The reproducibility (coefficient of variation) of actual doses was similar (< 7%) for all three devices, which were equally consistent at underdosing (pen-injectors) or overdosing (syringes) insulin. All three devices, especially syringes, are unacceptably inaccurate when delivering 1 U doses of insulin. Patients on low doses need to be educated that their dose may alter when they transfer from one device to another.

 PMID:9771255
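The two figures of merit in this study reduce to simple formulas: accuracy is the percent error of the mean delivered dose against the intended dose, and reproducibility is the coefficient of variation (SD over mean). A minimal sketch using the NovoPen 1 U figures reported above (function names are illustrative, not from the paper):

```python
def percent_error(intended, delivered_mean):
    """Accuracy: deviation of the mean delivered dose from the intended dose, in %."""
    return abs(delivered_mean - intended) / intended * 100.0

def coefficient_of_variation(mean, sd):
    """Reproducibility: relative spread of the delivered doses, in %."""
    return sd / mean * 100.0

# NovoPen figures for a 1 U dose: mean 0.89 U, SD 0.04 U
err = percent_error(1.0, 0.89)             # 11%, matching the reported error
cv = coefficient_of_variation(0.89, 0.04)  # ~4.5%, below the 7% threshold
```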

  5. A comprehensive investigation of the accuracy and reproducibility of a multitarget single isocenter VMAT radiosurgery technique

    SciTech Connect

    Thomas, Andrew; Niebanck, Michael; Juang, Titania; Wang, Zhiheng; Oldham, Mark

    2013-12-15

    matched the treatment plan, demonstrating high accuracy and reproducibility of both the treatment machine and the IGRT procedure. The complexity of the treatment (multiple arcs) and dosimetry (multiple strong gradients) pose a substantial challenge for comprehensive verification. 3D dosimetry can be uniquely effective in this scenario.

  6. Improved DORIS accuracy for precise orbit determination and geodesy

    NASA Technical Reports Server (NTRS)

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

In 2001 and 2002, three more DORIS satellites were launched. Since then, all DORIS results have improved significantly. For precise orbit determination, 20-cm accuracy is now available in real time with DIODE, and 1.5 to 2 cm in post-processing. For geodesy, 1-cm precision can now be achieved regularly every week, making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  7. Accuracy, reproducibility, and interpretation of fatty acid methyl ester profiles of model bacterial communities

    USGS Publications Warehouse

    Kidd, Haack S.; Garchow, H.; Odelson, D.A.; Forney, L.J.; Klug, M.J.

    1994-01-01

We determined the accuracy and reproducibility of whole-community fatty acid methyl ester (FAME) analysis with two model bacterial communities differing in composition by using the Microbial ID, Inc. (MIDI), system. The biomass, taxonomic structure, and expected MIDI-FAME profiles under a variety of environmental conditions were known for these model communities a priori. Not all members of each community could be detected in the composite profile because of lack of fatty acid 'signatures' in some isolates or because of variations (approximately fivefold) in fatty acid yield across taxa. MIDI-FAME profiles of replicate subsamples of a given community were similar in terms of fatty acid yield per unit of community dry weight and relative proportions of specific fatty acids. Principal-components analysis (PCA) of MIDI-FAME profiles resulted in a clear separation of the two different communities and a clustering of replicates of each community from two separate experiments on the first PCA axis. The first PCA axis accounted for 57.1% of the variance in the data and was correlated with fatty acids that varied significantly between communities and reflected the underlying community taxonomic structure. On the basis of our data, community fatty acid profiles can be used to assess the relative similarities and differences of microbial communities that differ in taxonomic composition. However, detailed interpretation of community fatty acid profiles in terms of biomass or community taxonomic composition must be viewed with caution until our knowledge of the quantitative and qualitative distribution of fatty acids over a wide variety of taxa and the effects of growth conditions on fatty acid profiles is more extensive.

  8. S-193 scatterometer backscattering cross section precision/accuracy for Skylab 2 and 3 missions

    NASA Technical Reports Server (NTRS)

    Krishen, K.; Pounds, D. J.

    1975-01-01

    Procedures for measuring the precision and accuracy with which the S-193 scatterometer measured the background cross section of ground scenes are described. Homogeneous ground sites were selected, and data from Skylab missions were analyzed. The precision was expressed as the standard deviation of the scatterometer-acquired backscattering cross section. In special cases, inference of the precision of measurement was made by considering the total range from the maximum to minimum of the backscatter measurements within a data segment, rather than the standard deviation. For Skylab 2 and 3 missions a precision better than 1.5 dB is indicated. This procedure indicates an accuracy of better than 3 dB for the Skylab 2 and 3 missions. The estimates of precision and accuracy given in this report are for backscattering cross sections from -28 to 18 dB. Outside this range the precision and accuracy decrease significantly.

  9. Accuracy, Precision, and Reliability of Chemical Measurements in Natural Products Research

    PubMed Central

    Betz, Joseph M.; Brown, Paula N.; Roman, Mark C.

    2010-01-01

    Natural products chemistry is the discipline that lies at the heart of modern pharmacognosy. The field encompasses qualitative and quantitative analytical tools that range from spectroscopy and spectrometry to chromatography. Among other things, modern research on crude botanicals is engaged in the discovery of the phytochemical constituents necessary for therapeutic efficacy, including the synergistic effects of components of complex mixtures in the botanical matrix. In the phytomedicine field, these botanicals and their contained mixtures are considered the active pharmaceutical ingredient (API), and pharmacognosists are increasingly called upon to supplement their molecular discovery work by assisting in the development and utilization of analytical tools for assessing the quality and safety of these products. Unlike single-chemical entity APIs, botanical raw materials and their derived products are highly variable because their chemistry and morphology depend on the genotypic and phenotypic variation, geographical origin and weather exposure, harvesting practices, and processing conditions of the source material. Unless controlled, this inherent variability in the raw material stream can result in inconsistent finished products that are under-potent, over-potent, and/or contaminated. Over the decades, natural products chemists have routinely developed quantitative analytical methods for phytochemicals of interest. Quantitative methods for the determination of product quality bear the weight of regulatory scrutiny. These methods must be accurate, precise, and reproducible. Accordingly, this review discusses the principles of accuracy (relationship between experimental and true value), precision (distribution of data values), and reliability in the quantitation of phytochemicals in natural products. PMID:20884340
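The distinction the review draws between accuracy (closeness of the measured mean to the true value) and precision (spread of replicate values) can be shown numerically. A minimal sketch with invented assay values, contrasting a series that is accurate but imprecise with one that is precise but biased:

```python
def mean(xs):
    return sum(xs) / len(xs)

def sample_sd(xs):
    """Sample standard deviation: the usual measure of precision."""
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

true_value = 10.0  # e.g. mg of a phytochemical per g of botanical material (invented)

accurate_imprecise = [9.2, 11.1, 9.7, 10.8, 9.3, 10.0]    # mean near 10, wide spread
inaccurate_precise = [11.9, 12.0, 12.1, 12.0, 11.9, 12.1]  # tight spread, biased high

bias_a = mean(accurate_imprecise) - true_value   # small: accurate
bias_b = mean(inaccurate_precise) - true_value   # +2.0: a systematic error
```

Note that neither series alone is "reliable" in the review's sense: a validated method needs both a small bias and a small spread.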

  10. [Accuracy and precision in the evaluation of computer assisted surgical systems. A definition].

    PubMed

    Strauss, G; Hofer, M; Korb, W; Trantakis, C; Winkler, D; Burgert, O; Schulz, T; Dietz, A; Meixensberger, J; Koulechov, K

    2006-02-01

Accuracy represents the outstanding criterion for navigation systems. Surgeons have noticed a great discrepancy between the values from the literature and system specifications on one hand, and intraoperative accuracy on the other. A unitary understanding of the term accuracy does not exist in clinical practice. Furthermore, an incorrect equating of the terms precision and accuracy can be found in the literature. On top of this, clinical accuracy differs from mechanical (technical) accuracy. From a clinical point of view, we had to deal with remarkably many different terms all describing accuracy. This study has the goals of: 1. Defining "accuracy" and related terms, 2. Differentiating between "precision" and "accuracy", 3. Deriving the term "surgical accuracy", 4. Recommending use of the term "surgical accuracy" for a navigation system. To a great extent, definitions were applied from the International Standardisation Organisation-ISO and the norm from the Deutsches Institut für Normung e.V.-DIN (the German Institute for Standardization). For defining surgical accuracy, the terms reference value, expectation, accuracy and precision are of major interest. Surgical accuracy should indicate the maximum values for the deviation between test results and the reference value (true value) A(max), and additionally indicate precision P(surg). As a basis for measurements, a standardized technical model was used. Coordinates of the model were acquired by CT. To determine statistically and clinically relevant results for head surgery, 50 measurements at distances of 50, 75, 100 and 150 mm from the centre of the registration geometry are adequate. In the future, we recommend labeling the system's overall performance with the following specifications: maximum accuracy deviation A(max), precision P and information on the measurement method. This could be displayed on a seal of quality.

  11. Accuracy and precision in measurements of biomass oxidative ratios

    NASA Astrophysics Data System (ADS)

    Gallagher, M. E.; Masiello, C. A.; Randerson, J. T.; Chadwick, O. A.

    2005-12-01

    One fundamental property of the Earth system is the oxidative ratio (OR) of the terrestrial biosphere, or the mols CO2 fixed per mols O2 released via photosynthesis. This is also an essential, poorly constrained parameter in the calculation of the size of the terrestrial and oceanic carbon sinks via atmospheric O2 and CO2 measurements. We are pursuing a number of techniques to accurately measure natural variations in above- and below-ground OR. For aboveground biomass, OR can be calculated directly from percent C, H, N, and O data measured via elemental analysis; however, the precision of this technique is a function of 4 measurements, resulting in increased data variability. It is also possible to measure OR via bomb calorimetry and percent C, using relationships between the heat of combustion of a sample and its OR. These measurements hold the potential for generation of more precise data, as error depends only on 2 measurements instead of 4. We present data comparing these two OR measurement techniques.
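A hedged sketch of the elemental-analysis route the abstract describes. One published convention (an assumption here, not stated in the abstract) derives the mean carbon oxidation state Cox from molar C, H, N, and O and converts it to an oxidative ratio via OR = 1 - Cox/4, expressed as mol O2 exchanged per mol CO2 fixed; the function name and the glucose sanity check are illustrative:

```python
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def oxidative_ratio(pct_C, pct_H, pct_N, pct_O):
    """OR from elemental mass percentages, under the assumed Cox convention."""
    nC = pct_C / ATOMIC_MASS["C"]
    nH = pct_H / ATOMIC_MASS["H"]
    nN = pct_N / ATOMIC_MASS["N"]
    nO = pct_O / ATOMIC_MASS["O"]
    cox = (2 * nO - nH + 3 * nN) / nC  # mean oxidation state of carbon
    return 1 - cox / 4

# Sanity check with glucose (C6H12O6): Cox = 0, so OR should be 1 (as in
# the textbook photosynthesis stoichiometry CO2 + H2O -> CH2O + O2).
or_glucose = oxidative_ratio(40.0, 6.71, 0.0, 53.29)
```

As the abstract notes, the OR from this route inherits uncertainty from all four elemental measurements, which is why the calorimetry-plus-%C alternative can be more precise.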

  12. Spectropolarimetry with PEPSI at the LBT: accuracy vs. precision in magnetic field measurements

    NASA Astrophysics Data System (ADS)

    Ilyin, Ilya; Strassmeier, Klaus G.; Woche, Manfred; Hofmann, Axel

    2009-04-01

    We present the design of the new PEPSI spectropolarimeter to be installed at the Large Binocular Telescope (LBT) in Arizona to measure the full set of Stokes parameters in spectral lines and outline its precision and the accuracy limiting factors.

  13. Precision and Accuracy in Measurements: A Tale of Four Graduated Cylinders.

    ERIC Educational Resources Information Center

    Treptow, Richard S.

    1998-01-01

    Expands upon the concepts of precision and accuracy at a level suitable for general chemistry. Serves as a bridge to the more extensive treatments in analytical chemistry textbooks and the advanced literature on error analysis. Contains 22 references. (DDR)

  14. Accuracy and precision of silicon based impression media for quantitative areal texture analysis.

    PubMed

    Goodall, Robert H; Darras, Laurent P; Purnell, Mark A

    2015-05-20

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis.

  15. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    PubMed Central

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-01-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505

  16. Analysis of factors affecting the accuracy, reproducibility, and interpretation of microbial community carbon source utilization patterns

    USGS Publications Warehouse

    Haack, S.K.; Garchow, H.; Klug, M.J.; Forney, L.J.

    1995-01-01

We determined factors that affect responses of bacterial isolates and model bacterial communities to the 95 carbon substrates in Biolog microtiter plates. For isolates and communities of three to six bacterial strains, substrate oxidation rates were typically nonlinear and were delayed by dilution of the inoculum. When inoculum density was controlled, patterns of positive and negative responses exhibited by microbial communities to each of the carbon sources were reproducible. Rates and extents of substrate oxidation by the communities were also reproducible but were not simply the sum of those exhibited by community members when tested separately. Replicates of the same model community clustered when analyzed by principal-components analysis (PCA), and model communities with different compositions were clearly separated on the first PCA axis, which accounted for >60% of the dataset variation. PCA discrimination among different model communities depended on the extent to which specific substrates were oxidized. However, the substrates interpreted by PCA to be most significant in distinguishing the communities changed with reading time, reflecting the nonlinearity of substrate oxidation rates. Although whole-community substrate utilization profiles were reproducible signatures for a given community, the extent of oxidation of specific substrates and the numbers or activities of microorganisms using those substrates in a given community were not correlated. Replicate soil samples varied significantly in the rate and extent of oxidation of seven tested substrates, suggesting microscale heterogeneity in composition of the soil microbial community.
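The PCA step used above (and in the FAME study) can be sketched in plain Python: mean-centre the substrate-oxidation profiles (rows = replicate communities, columns = substrates), find the first principal axis by power iteration on the covariance matrix, and report the fraction of variance that axis explains. All profile values below are invented, and real Biolog data would have 95 substrate columns:

```python
def first_pc(rows, iters=200):
    """First principal axis and its explained-variance fraction, via power iteration."""
    n, p = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(p)]
    X = [[r[j] - means[j] for j in range(p)] for r in rows]  # mean-centred data
    cov = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1) for b in range(p)]
           for a in range(p)]
    v = [1.0] + [0.0] * (p - 1)          # arbitrary starting vector
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]        # converges to the dominant eigenvector
    eigval = sum(v[a] * sum(cov[a][b] * v[b] for b in range(p)) for a in range(p))
    total_var = sum(cov[a][a] for a in range(p))
    return v, eigval / total_var

# Two "communities" (3 replicates each) with distinct profiles over 4 substrates:
profiles = [[5, 1, 0, 2], [5.2, 0.9, 0.1, 2.1], [4.8, 1.1, 0, 1.9],
            [1, 4, 3, 0], [0.9, 4.2, 3.1, 0.1], [1.1, 3.8, 2.9, 0]]
axis, frac = first_pc(profiles)
# With two well-separated clusters, the first axis captures most of the variance,
# analogous to the >60% reported in the study.
```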

  17. Factors influencing accuracy and reproducibility of body resistance measurements by foot-to-foot impedancemeters.

    PubMed

    Bousbiat, Sana; Jaffrin, Michel; Assadi, Imen

    2015-01-01

The electronics of a BodySignal V2 (Tefal, France) foot-to-foot impedancemeter (FFI) was modified to display the foot-to-foot resistance instead of body fat. This device was connected to electrodes of different sizes mounted on a podoscope, permitting photographs of the soles of subjects' feet and the electrodes in order to calculate the contact area between feet and electrodes. The foot-to-foot resistance was found to decrease when the contact area of the feet with the current and voltage electrodes increased. It was also sensitive to feet displacement: a backward move of 5 cm increased the mean resistance by 37 Ω. The resistance reproducibility was tested by asking the subject to repeat measurements 10 times by stepping up and down from the podoscope. The mean SD of these tests was 0.88% of mean resistance, but it fell to 0.47% when feet position was guided and to 0.29% with transverse voltage electrodes. For good reproducibility, it is important that voltage electrodes be small and that the scale design facilitates a correct position of the heels on these electrodes.

  18. Failure of the Woods-Saxon nuclear potential to simultaneously reproduce precise fusion and elastic scattering measurements

    SciTech Connect

    Mukherjee, A.; Hinde, D. J.; Dasgupta, M.; Newton, J. O.; Butt, R. D.; Hagino, K.

    2007-04-15

    A precise fusion excitation function has been measured for the {sup 12}C+{sup 208}Pb reaction at energies around the barrier, allowing the fusion barrier distribution to be extracted. The fusion cross sections at high energies differ significantly from existing fusion data. Coupled reaction channels calculations have been carried out with the code FRESCO. A bare potential previously claimed to uniquely describe a wide range of {sup 12}C+{sup 208}Pb near-barrier reaction channels failed to reproduce the new fusion data. The nuclear potential diffuseness of 0.95 fm which fits the fusion excitation function over a broad energy range fails to reproduce the elastic scattering. A diffuseness of 0.55 fm reproduces the fusion barrier distribution and elastic scattering data, but significantly overpredicts the fusion cross sections at high energies. This may be due to physical processes not included in the calculations. To constrain calculations, it is desirable to have precisely measured fusion cross sections, especially at energies around the barrier.
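The potential at issue in this abstract has the Woods-Saxon form V(r) = -V0 / (1 + exp((r - R)/a)), where a is the diffuseness. The sketch below uses illustrative depth and radius parameters (not the fitted values from the paper) to show what changing a between the two quoted values does to the potential's tail:

```python
import math

def woods_saxon(r, V0=100.0, r0=1.2, a=0.63, A_proj=12, A_targ=208):
    """V(r) = -V0 / (1 + exp((r - R)/a)); V in MeV, r and a in fm.
    R = r0 * (Ap^(1/3) + At^(1/3)) is a common radius parametrization."""
    R = r0 * (A_proj ** (1 / 3) + A_targ ** (1 / 3))
    return -V0 / (1.0 + math.exp((r - R) / a))

R = 1.2 * (12 ** (1 / 3) + 208 ** (1 / 3))

# At r = R the potential is exactly half its central depth, independent of a:
half_depth = woods_saxon(R)

# A larger diffuseness (0.95 fm vs 0.55 fm, the two values discussed above)
# gives a much longer-ranged nuclear tail at the same distance beyond R:
tail_diffuse = woods_saxon(R + 3.0, a=0.95)
tail_sharp = woods_saxon(R + 3.0, a=0.55)
```

This longer tail is why the diffuseness inferred from fusion (sensitive to the barrier region) can conflict with the one inferred from elastic scattering (sensitive to the far tail).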

  19. Quantitative analysis of steroid profiles from urine by capillary gas chromatography. I. Accuracy and reproducibility of the sample preparation.

    PubMed

    Leunissen, W J; Thijssen, J H

    1978-11-01

    A method is described for the determination of steroid profiles from urine by means of gas chromatography using high-efficiency glass capillary columns. The accuracy and reproducibility of essential steps in the sample preparation (extraction of steroids and steroid conjugates by means of XAD-2, enzymatic hydrolysis with Helix pomatia juice, solvolysis in acidified ethyl acetate and alkali wash) are established using different endogenously labelled urine samples, obtained from normal subjects to whom labelled steroids had been administered. Preliminary results are given on the reproducibility of the derivatization procedure (formation of methoxime-trimethylsilyl (MO-TMS) ethers), the gas chromatographic analysis and the whole method. Two procedures for the purification of MO-TMS steroid derivatives are compared. Application of the method to urine samples of patients with various endocrine disorders is included.

  20. Craniofacial skeletal measurements based on computed tomography: Part I. Accuracy and reproducibility.

    PubMed

    Waitzman, A A; Posnick, J C; Armstrong, D C; Pron, G E

    1992-03-01

    Computed tomography (CT) is a useful modality for the management of craniofacial anomalies. A study was undertaken to assess whether CT measurements of the upper craniofacial skeleton accurately represent the bony region imaged. Measurements taken directly from five dry skulls (approximate ages: adults, over 18 years; child, 4 years; infant, 6 months) were compared to those from axial CT scans of these skulls. Excellent agreement was found between the direct (dry skull) and indirect (CT) measurements. The effect of head tilt on the accuracy of these measurements was investigated. The error was within clinically acceptable limits (less than 5 percent) if the angle was no more than +/- 4 degrees from baseline (0 degrees). Objective standardized information gained from CT should complement the subjective clinical data usually collected for the treatment of craniofacial deformities. PMID:1571344

  1. A Comparison of the Astrometric Precision and Accuracy of Double Star Observations with Two Telescopes

    NASA Astrophysics Data System (ADS)

    Alvarez, Pablo; Fishbein, Amos E.; Hyland, Michael W.; Kight, Cheyne L.; Lopez, Hairold; Navarro, Tanya; Rosas, Carlos A.; Schachter, Aubrey E.; Summers, Molly A.; Weise, Eric D.; Hoffman, Megan A.; Mires, Robert C.; Johnson, Jolyon M.; Genet, Russell M.; White, Robin

    2009-01-01

    Using a manual Meade 6" Newtonian telescope and a computerized Meade 10" Schmidt-Cassegrain telescope, students from Arroyo Grande High School measured the well-known separation and position angle of the bright visual double star Albireo. The precision and accuracy of the observations from the two telescopes were compared to each other and to published values of Albireo taken as the standard. It was hypothesized that the larger, computerized telescope would be both more precise and more accurate.

  2. Statistical methods for conducting agreement (comparison of clinical tests) and precision (repeatability or reproducibility) studies in optometry and ophthalmology.

    PubMed

    McAlinden, Colm; Khadka, Jyoti; Pesudovs, Konrad

    2011-07-01

    The ever-expanding choice of ocular metrology and imaging equipment has driven research into the validity of their measurements. Consequently, studies of the agreement between two instruments or clinical tests have proliferated in the ophthalmic literature. It is important that researchers apply the appropriate statistical tests in agreement studies. Correlation coefficients are hazardous and should be avoided. The 'limits of agreement' method originally proposed by Altman and Bland in 1983 is the statistical procedure of choice. Its step-by-step use and practical considerations in relation to optometry and ophthalmology are detailed in addition to sample size considerations and statistical approaches to precision (repeatability or reproducibility) estimates.
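The 'limits of agreement' procedure recommended above reduces to a short calculation: the bias is the mean of the paired differences between the two instruments, and the limits are bias ± 1.96 SD of those differences. A minimal sketch with invented paired readings (the variable names and values are illustrative only):

```python
def limits_of_agreement(xs, ys):
    """Altman-Bland bias and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired axial-length readings (mm) from two ocular biometers:
inst_a = [23.10, 23.52, 24.01, 22.85, 23.77, 24.20, 23.40, 23.95]
inst_b = [23.00, 23.60, 23.90, 22.80, 23.70, 24.30, 23.30, 23.85]

bias, lo, hi = limits_of_agreement(inst_a, inst_b)
```

The clinical judgement is then whether the interval [lo, hi] is narrow enough for the instruments to be used interchangeably, which no correlation coefficient can tell you.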

  3. Evaluation of optoelectronic Plethysmography accuracy and precision in recording displacements during quiet breathing simulation.

    PubMed

    Massaroni, C; Schena, E; Saccomandi, P; Morrone, M; Sterzi, S; Silvestri, S

    2015-08-01

Opto-electronic Plethysmography (OEP) is a motion analysis system used to measure chest wall kinematics and to indirectly evaluate respiratory volumes during breathing. Its working principle is based on the computation of the displacements of markers placed on the chest wall. This work aims at evaluating the accuracy and precision of OEP in measuring displacement in the range of human chest wall displacement during quiet breathing. OEP performances were investigated by the use of a fully programmable chest wall simulator (CWS). CWS was programmed to move 10 times its eight shafts in the range of physiological displacement (i.e., between 1 mm and 8 mm) at three different frequencies (i.e., 0.17 Hz, 0.25 Hz, 0.33 Hz). Experiments were performed with the aim to: (i) evaluate OEP accuracy and precision error in recording displacement in the overall calibrated volume and in three sub-volumes, (ii) evaluate the OEP volume measurement accuracy due to the measurement accuracy of linear displacements. OEP showed an accuracy better than 0.08 mm in all trials, considering the whole 2 m(3) calibrated volume. The mean measurement discrepancy was 0.017 mm. The precision error, expressed as the ratio between measurement uncertainty and the recorded displacement by OEP, was always lower than 0.55%. Volume overestimation due to OEP linear measurement accuracy was always < 12 mL (< 3.2% of total volume), considering all settings. PMID:26736504

  4. Sex differences in accuracy and precision when judging time to arrival: data from two Internet studies.

    PubMed

    Sanders, Geoff; Sinclair, Kamila

    2011-12-01

We report two Internet studies that investigated sex differences in the accuracy and precision of judging time to arrival. We used accuracy to mean the ability to match the actual time to arrival and precision to mean the consistency with which each participant made their judgments. Our task was presented as a computer game in which a toy UFO moved obliquely towards the participant through a virtual three-dimensional space en route to a docking station. The UFO disappeared before docking and participants pressed their space bar at the precise moment they thought the UFO would have docked. Study 1 showed it was possible to conduct quantitative studies of spatiotemporal judgments in virtual reality via the Internet and confirmed reports that men are more accurate because women underestimate, but found no difference in precision measured as intra-participant variation. Study 2 repeated Study 1 with five additional presentations of one condition to provide a better measure of precision. Again, men were more accurate than women but there were no sex differences in precision. However, within the coincidence-anticipation timing (CAT) literature, of those studies that report sex differences, a majority found that males are both more accurate and more precise than females. Noting that many CAT studies report no sex differences, we discuss appropriate interpretations of such null findings. While acknowledging that CAT performance may be influenced by experience, we suggest that the sex difference may have originated among our ancestors with the evolutionary selection of men for hunting and women for gathering.

  5. The Plus or Minus Game--Teaching Estimation, Precision, and Accuracy

    ERIC Educational Resources Information Center

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-01-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in "TPT" (Larry Weinstein's "Fermi…

  6. "High-precision, reconstructed 3D model" of skull scanned by conebeam CT: Reproducibility verified using CAD/CAM data.

    PubMed

    Katsumura, Seiko; Sato, Keita; Ikawa, Tomoko; Yamamura, Keiko; Ando, Eriko; Shigeta, Yuko; Ogawa, Takumi

    2016-01-01

    Computed tomography (CT) scanning has recently been introduced into forensic medicine and dentistry. However, the presence of metal restorations in the dentition can adversely affect the quality of three-dimensional reconstruction from CT scans. In this study, we aimed to evaluate the reproducibility of a "high-precision, reconstructed 3D model" obtained from a conebeam CT scan of dentition, a method that might be particularly helpful in forensic medicine. We took conebeam CT and helical CT images of three dry skulls marked with 47 measuring points; reconstructed three-dimensional images; and measured the distances between the points in the 3D images with a computer-aided design/computer-aided manufacturing (CAD/CAM) marker. We found that in comparison with the helical CT, conebeam CT is capable of reproducing measurements closer to those obtained from the actual samples. In conclusion, our study indicated that the image-reproduction from a conebeam CT scan was more accurate than that from a helical CT scan. Furthermore, the "high-precision reconstructed 3D model" facilitates reliable visualization of full-sized oral and maxillofacial regions in both helical and conebeam CT scans. PMID:26832374

  7. CT guidance is needed to achieve reproducible positioning of the mouse head for repeat precision cranial irradiation.

    PubMed

    Armour, M; Ford, E; Iordachita, I; Wong, J

    2010-01-01

To study the effects of cranial irradiation, we have constructed an all-plastic mouse bed equipped with an immobilizing head holder. The bed integrates with our in-house Small Animal Radiation Research Platform (SARRP) for precision focal irradiation experiments and cone-beam CT. We assessed the reproducibility of our head holder to determine the need for CT-based targeting in cranial irradiation studies. To measure the holder's reproducibility, a C57BL/6 mouse was positioned and CT-scanned nine times. Image sets were loaded into the Pinnacle³ radiation treatment planning system and were registered to one another by one investigator using rigid body alignment of the cranial regions. Rotational and translational offsets were measured. The average vector shift between scans was 0.80 ± 0.49 mm. Such a shift is too large to selectively treat subregions of the mouse brain. In response, we use onboard imaging to guide cranial irradiation applications that require sub-millimeter precision.
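The vector-shift statistic quoted above (0.80 ± 0.49 mm) is the mean Euclidean norm of the per-scan translational offsets from rigid registration. A minimal sketch with wholly hypothetical offsets, not the study's data:

```python
import numpy as np

# Hypothetical translational offsets (mm) of eight repeat CT scans
# registered against the first scan (illustrative values only).
offsets = np.array([
    [0.3, -0.5,  0.4],
    [0.6,  0.2, -0.7],
    [-0.4, 0.9,  0.1],
    [0.2, -0.3,  0.8],
    [0.7,  0.4, -0.2],
    [-0.6, 0.1,  0.5],
    [0.1,  0.8, -0.3],
    [0.5, -0.2,  0.6],
])

# Vector shift = Euclidean norm of each translational offset.
shifts = np.linalg.norm(offsets, axis=1)
print(f"mean vector shift: {shifts.mean():.2f} +/- {shifts.std(ddof=1):.2f} mm")
```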

  8. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    SciTech Connect

    Ballesteros-Zebadua, P.; Larrga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Juarez, J.; Prieto, I.; Moreno-Jimenez, S.; Celis, M. A.

    2008-08-11

Mechanical precision measurements are fundamental procedures for the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures are also suitable as quality assurance routines for verifying the equipment's geometrical accuracy and precision. In this work, mechanical tests were performed for gantry and table rotation, yielding mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, isocenter accuracy was shown to be better than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with a mask. All uncertainties were below tolerances. The largest contribution to mechanical variation comes from table rotation, so it is important to correct for it using a localization frame with printed overlays. Knowledge of the mechanical precision allows statistical errors to be accounted for in treatment planning volume margins.

  9. Simulations of thermally transferred OSL signals in quartz: Accuracy and precision of the protocols for equivalent dose evaluation

    NASA Astrophysics Data System (ADS)

    Pagonis, Vasilis; Adamiec, Grzegorz; Athanassas, C.; Chen, Reuven; Baker, Atlee; Larsen, Meredith; Thompson, Zachary

    2011-06-01

…reproduce accurately the natural doses in the range 0-400 Gy with approximately the same intrinsic precision and accuracy of ˜1-5%. However, these protocols underestimate doses above 400 Gy; possible sources of this underestimation are investigated. Two possible explanations are suggested for the modeled underestimations: thermal instability of the TT-OSL traps, and the presence of thermally unstable medium OSL components in the model.

  10. Maximizing the quantitative accuracy and reproducibility of Förster resonance energy transfer measurement for screening by high throughput widefield microscopy.

    PubMed

    Schaufele, Fred

    2014-03-15

    Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biologic sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High content screening depends also upon maximizing the numbers of cells imaged, which is best achieved by low magnification high throughput microscopy. But, low magnification introduces flat-field correction issues that degrade the accuracy of background correction to cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a high source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal.
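One common way to correct the smoothly varying, non-flat backgrounds described here is to fit a low-order surface by least squares and subtract it. A minimal sketch with synthetic data, illustrating the general idea rather than the paper's specific algorithm:

```python
import numpy as np

# Synthetic 64x64 image: a tilted (non-flat) background plus noise.
rng = np.random.default_rng(1)
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
true_background = 100.0 + 0.5 * xx + 0.3 * yy
image = true_background + rng.normal(0.0, 1.0, (h, w))

# Least-squares planar fit: image ~ a0 + a1*x + a2*y.
A = np.column_stack([np.ones(h * w), xx.ravel(), yy.ravel()])
coef, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)

# Subtract the fitted surface; only zero-mean noise remains.
corrected = image - (A @ coef).reshape(h, w)
print(f"mean residual after correction: {corrected.mean():.3f}")
```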

  11. Maximizing the quantitative accuracy and reproducibility of Förster resonance energy transfer measurement for screening by high throughput widefield microscopy

    PubMed Central

    Schaufele, Fred

    2013-01-01

    Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biologic sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High content screening depends also upon maximizing the numbers of cells imaged, which is best achieved by low magnification high throughput microscopy. But, low magnification introduces flat-field correction issues that degrade the accuracy of background correction to cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a high source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal. PMID:23927839

  12. The reproducibility and accuracy of internal fit of Cerec 3D CAD/CAM all ceramic crowns.

    PubMed

    D'Arcy, Brian L; Omer, Osama E; Byrne, Declan A; Quinn, Frank

    2009-06-01

The objective of this study was to evaluate the reproducibility and accuracy of internal fit using Cerec 3D CAD/CAM (computer aided design/computer aided manufacturing) all-ceramic crowns and to investigate the proximal contact point areas between the crowns and neighbouring teeth, in terms of location and the presence or absence of contact. A total of 48 crowns were milled and divided into two groups of twenty-four. One group was tested on a Control die; the other on single Replica stone-die duplicates of the Control die. The Internal Marginal Gap, Axio-Occlusal Transition Gap and Occlusal Gap were measured on each crown in both groups. No significant differences were identified between the mean thicknesses of the Marginal Gap, the Axio-Occlusal Transition Gap and the Occlusal Gap of the Control die when compared with the Replica dies, indicating uniform and consistent accuracy of fit and therefore of die replication.

  13. Sex differences in accuracy and precision when judging time to arrival: data from two Internet studies.

    PubMed

    Sanders, Geoff; Sinclair, Kamila

    2011-12-01

We report two Internet studies that investigated sex differences in the accuracy and precision of judging time to arrival. We used accuracy to mean the ability to match the actual time to arrival and precision to mean the consistency with which each participant made their judgments. Our task was presented as a computer game in which a toy UFO moved obliquely towards the participant through a virtual three-dimensional space en route to a docking station. The UFO disappeared before docking and participants pressed their space bar at the precise moment they thought the UFO would have docked. Study 1 showed it was possible to conduct quantitative studies of spatiotemporal judgments in virtual reality via the Internet and confirmed reports that men are more accurate because women underestimate, but found no difference in precision measured as intra-participant variation. Study 2 repeated Study 1 with five additional presentations of one condition to provide a better measure of precision. Again, men were more accurate than women but there were no sex differences in precision. However, within the coincidence-anticipation timing (CAT) literature, of those studies that report sex differences, a majority found that males are both more accurate and more precise than females. Noting that many CAT studies report no sex differences, we discuss appropriate interpretations of such null findings. While acknowledging that CAT performance may be influenced by experience we suggest that the sex difference may have originated among our ancestors with the evolutionary selection of men for hunting and women for gathering. PMID:21125324
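The accuracy/precision distinction used in these studies (accuracy as signed bias relative to the true arrival time, precision as intra-participant variation) can be sketched as follows, with hypothetical response times:

```python
import statistics

# Illustrative docking-time judgments (s) for one participant; the
# actual arrival time is 2.0 s in every trial (values are made up).
actual = 2.0
responses = [1.85, 1.92, 2.05, 1.88, 1.95, 1.90]

errors = [r - actual for r in responses]
bias = statistics.mean(errors)           # accuracy: negative = underestimation
precision = statistics.stdev(responses)  # precision: intra-participant SD
print(f"bias = {bias:.3f} s, precision (SD) = {precision:.3f} s")
```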

  14. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new…
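The scale-selective idea described above can be illustrated by holding one set of variables in 64-bit and another in 16-bit floating point. This is a simplified sketch using the standard single-tier Lorenz '96 tendency, not the paper's three-tier system:

```python
import numpy as np

# "Large-scale" tier kept in double precision (64 bits per value);
# "small-scale" tier stored in half precision (16 bits per value).
rng = np.random.default_rng(0)
large_scale = rng.standard_normal(8)                       # float64
small_scale = rng.standard_normal(64).astype(np.float16)   # float16

def tendency(x, forcing=8.0):
    """Lorenz '96 tendency dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

# One explicit Euler step per tier; the half-precision tier incurs a
# relative rounding error of order 1e-3 per operation.
dt = 0.01
large_scale = large_scale + dt * tendency(large_scale)
small_scale = (small_scale + np.float16(dt) * tendency(small_scale)).astype(np.float16)
print(small_scale.dtype, large_scale.dtype)  # float16 float64
```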

  15. Comparison between predicted and actual accuracies for an Ultra-Precision CNC measuring machine

    SciTech Connect

    Thompson, D.C.; Fix, B.L.

    1995-05-30

    At the 1989 CIRP annual meeting, we reported on the design of a specialized, ultra-precision CNC measuring machine, and on the error budget that was developed to guide the design process. In our paper we proposed a combinatorial rule for merging estimated and/or calculated values for all known sources of error, to yield a single overall predicted accuracy for the machine. In this paper we compare our original predictions with measured performance of the completed instrument.

  16. Measuring changes in Plasmodium falciparum transmission: precision, accuracy and costs of metrics.

    PubMed

    Tusting, Lucy S; Bousema, Teun; Smith, David L; Drakeley, Chris

    2014-01-01

As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review 11 metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs and presenting an overall critique. We also review the nonlinear scaling relationships between five metrics of malaria transmission: the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our chapter highlights that while the entomological inoculation rate is widely considered the gold standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy and more standardised methods for its collection are required. In areas of low transmission, parasite rate, seroconversion rates and molecular metrics including the multiplicity of infection (MOI) and molecular force of infection (mFOI) may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection.

  17. Evaluation of precision and accuracy of selenium measurements in biological materials using neutron activation analysis

    SciTech Connect

    Greenberg, R.R.

    1988-01-01

    In recent years, the accurate determination of selenium in biological materials has become increasingly important in view of the essential nature of this element for human nutrition and its possible role as a protective agent against cancer. Unfortunately, the accurate determination of selenium in biological materials is often difficult for most analytical techniques for a variety of reasons, including interferences, complicated selenium chemistry due to the presence of this element in multiple oxidation states and in a variety of different organic species, stability and resistance to destruction of some of these organo-selenium species during acid dissolution, volatility of some selenium compounds, and potential for contamination. Neutron activation analysis (NAA) can be one of the best analytical techniques for selenium determinations in biological materials for a number of reasons. Currently, precision at the 1% level (1s) and overall accuracy at the 1 to 2% level (95% confidence interval) can be attained at the U.S. National Bureau of Standards (NBS) for selenium determinations in biological materials when counting statistics are not limiting (using the {sup 75}Se isotope). An example of this level of precision and accuracy is summarized. Achieving this level of accuracy, however, requires strict attention to all sources of systematic error. Precise and accurate results can also be obtained after radiochemical separations.

  18. Large format focal plane array integration with precision alignment, metrology and accuracy capabilities

    NASA Astrophysics Data System (ADS)

    Neumann, Jay; Parlato, Russell; Tracy, Gregory; Randolph, Max

    2015-09-01

Focal plane alignment for large format arrays and faster optical systems requires enhanced precision methodology and stability over temperature. The continuing increase in focal plane array size drives alignment capability requirements. Depending on the optical system, focal plane flatness of less than 25μm (.001") is required over the transition from ambient to cooled operating temperatures. The focal plane flatness requirement must also be maintained in airborne or launch vibration environments. This paper addresses the challenge of integrating the detector into the focal plane module and housing assemblies, the methodology to reduce error terms during integration, and the evaluation of thermal effects. The driving factors influencing alignment accuracy include datum transfers, material effects over temperature, alignment stability over test, adjustment precision, and traceability to NIST standards. The FPA module design and alignment methodology reduces error terms by minimizing measurement transfers to the housing. In the design, selecting materials with matched coefficients of thermal expansion minimizes both the physical shift over temperature and the stress induced in the detector. When required, co-registration of focal planes and filters can achieve submicron relative positioning using precision equipment, interferometry, and piezoelectric positioning stages. All measurements and characterizations maintain traceability to NIST standards. The metrology characterizes the accuracy, repeatability, and precision of the measurements.

  19. Determination of the precision and accuracy of morphological measurements using the Kinect™ sensor: comparison with standard stereophotogrammetry.

    PubMed

    Bonnechère, B; Jansen, B; Salvia, P; Bouzahouene, H; Sholukha, V; Cornelis, J; Rooze, M; Van Sint Jan, S

    2014-01-01

    The recent availability of the Kinect™ sensor, a low-cost Markerless Motion Capture (MMC) system, could give new and interesting insights into ergonomics (e.g. the creation of a morphological database). Extensive validation of this system is still missing. The aim of the study was to determine if the Kinect™ sensor can be used as an easy, cheap and fast tool to conduct morphology estimation. A total of 48 subjects were analysed using MMC. Results were compared with measurements obtained from a high-resolution stereophotogrammetric system, a marker-based system (MBS). Differences between MMC and MBS were found; however, these differences were systematically correlated and enabled regression equations to be obtained to correct MMC results. After correction, final results were in agreement with MBS data (p = 0.99). Results show that measurements were reproducible and precise after applying regression equations. Kinect™ sensors-based systems therefore seem to be suitable for use as fast and reliable tools to estimate morphology. Practitioner Summary: The Kinect™ sensor could eventually be used for fast morphology estimation as a body scanner. This paper presents an extensive validation of this device for anthropometric measurements in comparison to manual measurements and stereophotogrammetric devices. The accuracy is dependent on the segment studied but the reproducibility is excellent. PMID:24646374

  20. Accuracy and precision of ice stream bed topography derived from ground-based radar surveys

    NASA Astrophysics Data System (ADS)

    King, Edward

    2016-04-01

    There is some confusion within the glaciological community as to the accuracy of the basal topography derived from radar measurements. A number of texts and papers state that basal topography cannot be determined to better than one quarter of the wavelength of the radar system. On the other hand King et al (Nature Geoscience, 2009) claimed that features of the bed topography beneath Rutford Ice Stream, Antarctica can be distinguished to +/- 3m using a 3 MHz radar system (which has a quarter wavelength of 14m in ice). These statements of accuracy are mutually exclusive. I will show in this presentation that the measurement of ice thickness is a radar range determination to a single strongly-reflective target. This measurement has much higher accuracy than the resolution of two targets of similar reflection strength, which is governed by the quarter-wave criterion. The rise time of the source signal and the sensitivity and digitisation interval of the recording system are the controlling criteria on radar range accuracy. A dataset from Pine Island Glacier, West Antarctica will be used to illustrate these points, as well as the repeatability or precision of radar range measurements, and the influence of gridding parameters and positioning accuracy on the final DEM product.
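The two criteria contrasted in this abstract can be checked numerically. Assuming the standard radio-wave velocity in ice of about 1.68 x 10^8 m/s, a 3 MHz system has the quoted 14 m quarter wavelength, while the range precision for a single strong reflector is set by the timing resolution of the recorder (the 25 ns sampling interval below is illustrative):

```python
# Quarter-wavelength resolution criterion vs. single-target range precision.
C_ICE = 1.68e8   # m/s, approximate radio-wave velocity in ice (assumed)
FREQ = 3e6       # Hz, radar frequency from the abstract

wavelength = C_ICE / FREQ
quarter_wave = wavelength / 4
print(f"quarter wavelength: {quarter_wave:.0f} m")   # ~14 m, as quoted above

# Range precision for a single strong reflector is governed by the timing
# precision of the recording system (25 ns is an illustrative value):
dt = 25e-9
range_precision = C_ICE * dt / 2   # two-way travel time
print(f"range precision at 25 ns sampling: {range_precision:.1f} m")
```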

  1. Wound Area Measurement with Digital Planimetry: Improved Accuracy and Precision with Calibration Based on 2 Rulers

    PubMed Central

    Foltynski, Piotr

    2015-01-01

Introduction In the treatment of chronic wounds, the change in wound surface area over time is a useful parameter for assessing the applied therapy plan. The more precise the method of wound area measurement, the earlier an inappropriate treatment plan can be identified and changed. Digital planimetry may be used for wound area measurement and therapy assessment when properly applied, but a common problem is the camera lens orientation while taking a picture. The camera lens axis should be perpendicular to the wound plane; if it is not, the measured area differs from the true area. Results The current study shows that calibrating with two rulers placed in parallel below and above the wound increases the precision of area measurement on average 3.8-fold in comparison to measurement calibrated with one ruler. The proposed calibration procedure also increases the accuracy of area measurement 4-fold. It was also shown that wound area range and camera type do not influence the precision of area measurement with digital planimetry based on two-ruler calibration; however, measurements based on a smartphone camera were significantly less accurate than those based on D-SLR or compact cameras. Area measurement on a flat surface was more precise with two-ruler digital planimetry than with the Visitrak device, the Silhouette Mobile device, or the AreaMe software-based method. Conclusion Calibration with two rulers in digital planimetry markedly increases the precision and accuracy of measurement and should therefore be recommended instead of calibration based on a single ruler. PMID:26252747
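A minimal sketch of the two-ruler idea: because the rulers sit above and below the wound, the mm-per-pixel scale can be interpolated row by row, compensating for a tilted camera. The function and all numbers below are hypothetical illustrations, not the authors' algorithm:

```python
def wound_area_mm2(row_pixel_widths, scale_top, scale_bottom, pixel_height_mm):
    """Sum per-row strip areas, interpolating the mm/pixel scale linearly
    between the scale measured at the top ruler and at the bottom ruler.

    row_pixel_widths: wound width in pixels for each image row (top->bottom)
    scale_top, scale_bottom: mm per pixel at the top and bottom rulers
    pixel_height_mm: vertical extent of one row in mm (simplified constant)
    """
    n = len(row_pixel_widths)
    area = 0.0
    for i, width_px in enumerate(row_pixel_widths):
        t = i / max(n - 1, 1)
        scale = scale_top + t * (scale_bottom - scale_top)  # interpolated mm/px
        area += width_px * scale * pixel_height_mm          # row strip area
    return area

# Hypothetical three-row wound region:
area = wound_area_mm2([10, 12, 11], scale_top=0.5, scale_bottom=0.7,
                      pixel_height_mm=0.6)
print(f"estimated wound area: {area:.2f} mm^2")
```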

  2. Cascade impactor (CI) mensuration--an assessment of the accuracy and precision of commercially available optical measurement systems.

    PubMed

    Chambers, Frank; Ali, Aziz; Mitchell, Jolyon; Shelton, Christopher; Nichols, Steve

    2010-03-01

Multi-stage cascade impactors (CIs) are the preferred measurement technique for characterizing the aerodynamic particle size distribution of an inhalable aerosol. Stage mensuration is the recommended pharmacopeial method for monitoring CI "fitness for purpose" within a GxP environment. The Impactor Sub-Team of the European Pharmaceutical Aerosol Group has undertaken an inter-laboratory study to assess both the precision and accuracy of a range of makes and models of instruments currently used for optical inspection of impactor stages. Measurement of two Andersen 8-stage 'non-viable' cascade impactor "reference" stages representative of jet sizes for this instrument type (stages 2 and 7) confirmed that all instruments evaluated were capable of reproducible jet measurement, with overall capability within the current pharmacopeial stage specifications for both stages. In the assessment of absolute accuracy, small but consistent differences (ca. 0.6% of the certified value) were observed between 'dots' and 'spots' of a calibrated chromium-plated reticule, most likely the result of the treatment of partially lit pixels along the circumference of this calibration standard. Measurements of three certified ring gauges, the smallest having a nominal diameter of 1.0 mm, were consistent with the observation that the treatment of partially illuminated pixels at the periphery of the projected image can result in undersizing. However, the bias was less than 1% of the certified diameter. The optical inspection instruments evaluated are fully capable of confirming cascade impactor suitability in accordance with pharmacopeial practice.

  3. A Bloch-McConnell simulator with pharmacokinetic modeling to explore accuracy and reproducibility in the measurement of hyperpolarized pyruvate

    NASA Astrophysics Data System (ADS)

    Walker, Christopher M.; Bankson, James A.

    2015-03-01

    Magnetic resonance imaging (MRI) of hyperpolarized (HP) agents has the potential to probe in-vivo metabolism with sensitivity and specificity that was not previously possible. Biological conversion of HP agents specifically for cancer has been shown to correlate to presence of disease, stage and response to therapy. For such metabolic biomarkers derived from MRI of hyperpolarized agents to be clinically impactful, they need to be validated and well characterized. However, imaging of HP substrates is distinct from conventional MRI, due to the non-renewable nature of transient HP magnetization. Moreover, due to current practical limitations in generation and evolution of hyperpolarized agents, it is not feasible to fully experimentally characterize measurement and processing strategies. In this work we use a custom Bloch-McConnell simulator with pharmacokinetic modeling to characterize the performance of specific magnetic resonance spectroscopy sequences over a range of biological conditions. We performed numerical simulations to evaluate the effect of sequence parameters over a range of chemical conversion rates. Each simulation was analyzed repeatedly with the addition of noise in order to determine the accuracy and reproducibility of measurements. Results indicate that under both closed and perfused conditions, acquisition parameters can affect measurements in a tissue dependent manner, suggesting that great care needs to be taken when designing studies involving hyperpolarized agents. More modeling studies will be needed to determine what effect sequence parameters have on more advanced acquisitions and processing methods.
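The pharmacokinetic core of such a simulation (leaving aside the Bloch-McConnell spectral part) is two-site exchange of hyperpolarized magnetization with T1 decay. A sketch with illustrative rate constants, not values from the paper:

```python
# Two-site exchange: hyperpolarized pyruvate (P) -> lactate (L) with
# longitudinal relaxation, integrated by explicit Euler.
R1P, R1L = 1 / 43.0, 1 / 33.0   # 1/s, relaxation rates (illustrative)
KPL = 0.05                      # 1/s, apparent conversion rate (illustrative)

dt, T = 0.1, 60.0
P, L = 1.0, 0.0                 # initial HP magnetization (arbitrary units)
for _ in range(int(T / dt)):
    dP = -(R1P + KPL) * P       # pyruvate loses signal to decay and conversion
    dL = KPL * P - R1L * L      # lactate gains from conversion, also decays
    P, L = P + dt * dP, L + dt * dL

print(f"P(60s) = {P:.3f}, L(60s) = {L:.3f}")
```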

  4. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

Sampling by spatially replicated counts (point-counts) is an increasingly popular method of estimating the population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the roles of the number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than scenarios with few sample units of large area. However, scenarios with few sample units of large area provided more precise abundance estimates than those with many sample units of small area. It is important to consider the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; in practice, however, such consideration is often an afterthought that occurs during data analysis.
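The simulated scenarios described here generate point counts as binomial thinning of latent site abundances, which is the data-generating assumption behind N-mixture models. A hypothetical sketch of the data generation only (model fitting, e.g. with an N-mixture package, is not shown):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_counts(n_sites, mean_abundance, detection_prob, n_visits=3):
    """Latent abundances are Poisson; each visit detects each individual
    independently with probability detection_prob (binomial thinning)."""
    N = rng.poisson(mean_abundance, size=n_sites)       # latent abundance
    return rng.binomial(N[:, None], detection_prob,     # repeated counts
                        size=(n_sites, n_visits))

# Many small units vs. few large units, same total expected organisms:
many_small = simulate_counts(n_sites=100, mean_abundance=2, detection_prob=0.5)
few_large = simulate_counts(n_sites=10, mean_abundance=20, detection_prob=0.5)
print(many_small.shape, few_large.shape)
```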

  5. Automated Gravimetric Calibration to Optimize the Accuracy and Precision of TECAN Freedom EVO Liquid Handler.

    PubMed

    Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique

    2016-10-01

    High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems is dependent on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid class pipetting parameters for each solution was to split the process in three steps: (1) screening of predefined liquid class, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The run of appropriate pipetting scripts, data acquisition, and reports until the creation of a new liquid class in EVOware was fully automated. The calibration and confirmation of the robotic system was simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719
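The gravimetric step converts each weighed dispense to a volume via the solution density, and a fit of delivered versus target volume then supplies the accuracy adjustment for the liquid class. A sketch with hypothetical masses and a nominal water density (not the authors' script):

```python
import numpy as np

DENSITY = 0.998  # g/mL, water near room temperature (assumed)

# Hypothetical target volumes and weighed masses for one liquid class:
targets_uL = np.array([3.0, 10.0, 50.0, 200.0, 900.0])
masses_g = np.array([0.00285, 0.00970, 0.04930, 0.19900, 0.89600])

# Gravimetric conversion: mass -> delivered volume in microliters.
delivered_uL = masses_g / DENSITY * 1000.0

# Linear calibration curve for the accuracy adjustment, plus per-volume error.
slope, intercept = np.polyfit(targets_uL, delivered_uL, 1)
accuracy_pct = (delivered_uL - targets_uL) / targets_uL * 100.0
print(f"fit: delivered = {slope:.4f} * target + {intercept:.3f} uL")
print("per-volume error (%):", np.round(accuracy_pct, 1))
```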

  6. The tradeoff between accuracy and precision in latent variable models of mediation processes

    PubMed Central

    Ledgerwood, Alison; Shrout, Patrick E.

    2016-01-01

Social psychologists place high importance on understanding mechanisms, and frequently employ mediation analyses to shed light on the process underlying an effect. Such analyses can be conducted using observed variables (e.g., a typical regression approach) or latent variables (e.g., an SEM approach), and choosing between these methods can be a more complex and consequential decision than researchers often realize. The present paper adds to the literature on mediation by examining the relative tradeoff between accuracy and precision in latent versus observed variable modeling. Whereas past work has shown that latent variable models tend to produce more accurate estimates, we demonstrate that observed variable models tend to produce more precise estimates, and examine this relative tradeoff both theoretically and empirically in a typical three-variable mediation model across varying levels of effect size and reliability. We discuss implications for social psychologists seeking to uncover mediating variables, and recommend practical approaches for maximizing both accuracy and precision in mediation analyses. PMID:21806305

  7. Automated Gravimetric Calibration to Optimize the Accuracy and Precision of TECAN Freedom EVO Liquid Handler

    PubMed Central

    Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique

    2016-01-01

    High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems is dependent on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid class pipetting parameters for each solution was to split the process in three steps: (1) screening of predefined liquid class, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The run of appropriate pipetting scripts, data acquisition, and reports until the creation of a new liquid class in EVOware was fully automated. The calibration and confirmation of the robotic system was simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719

  8. Accuracy, precision, usability, and cost of free chlorine residual testing methods.

    PubMed

    Murray, Anna; Lantagne, Daniele

    2015-03-01

    Chlorine is the most widely used disinfectant worldwide, partially because residual protection is maintained after treatment. This residual is measured using colorimetric test kits varying in accuracy, precision, training required, and cost. Seven commercially available colorimeters, color wheel and test tube comparator kits, pool test kits, and test strips were evaluated for use in low-resource settings by: (1) measuring in quintuplicate 11 samples from 0.0 to 4.0 mg/L free chlorine residual in laboratory and natural light settings to determine accuracy and precision; (2) conducting volunteer testing where participants used and evaluated each test kit; and (3) comparing costs. Laboratory accuracy ranged from 5.1% to 40.5% measurement error, with colorimeters the most accurate and test strip methods the least. Variation between laboratory and natural light readings occurred with one test strip method. Volunteer participants found test strip methods easiest and color wheel methods most difficult, and were most confident in the colorimeter and least confident in test strip methods. Costs range from 3.50 to 444 USD per 100 tests. Application of a decision matrix found colorimeters and test tube comparator kits were most appropriate for use in low-resource settings; it is recommended users apply the decision matrix themselves, as the appropriate kit might vary by context.
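Accuracy figures like the quoted 5.1% to 40.5% measurement error can be computed from quintuplicate readings against a reference standard. A minimal sketch with illustrative readings, not the study's data:

```python
import numpy as np

def percent_error(readings, reference):
    """Accuracy as mean absolute percent deviation from a reference value."""
    readings = np.asarray(readings, dtype=float)
    return 100.0 * np.mean(np.abs(readings - reference)) / reference

# Illustrative quintuplicate free chlorine readings (mg/L) at a 2.0 mg/L reference
colorimeter = [1.95, 2.02, 1.98, 2.05, 1.97]
test_strip  = [1.50, 2.50, 1.80, 2.60, 1.40]
print(round(percent_error(colorimeter, 2.0), 1),
      round(percent_error(test_strip, 2.0), 1))
```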

  9. Accuracy and precision of stream reach water surface slopes estimated in the field and from maps

    USGS Publications Warehouse

    Isaak, D.J.; Hubert, W.A.; Krueger, K.L.

    1999-01-01

    The accuracy and precision of five tools used to measure stream water surface slope (WSS) were evaluated. Water surface slopes estimated in the field with a clinometer or from topographic maps used in conjunction with a map wheel or geographic information system (GIS) were significantly higher than WSS estimated in the field with a surveying level (biases of 34, 41, and 53%, respectively). Accuracy of WSS estimates obtained with an Abney level did not differ from surveying level estimates, but conclusions regarding the accuracy of Abney levels and clinometers were weakened by intratool variability. The surveying level estimated WSS most precisely (coefficient of variation [CV] = 0.26%), followed by the GIS (CV = 1.87%), map wheel (CV = 6.18%), Abney level (CV = 13.68%), and clinometer (CV = 21.57%). Estimates of WSS measured in the field with an Abney level and estimated for the same reaches with a GIS used in conjunction with 1:24,000-scale topographic maps were significantly correlated (r = 0.86), but there was a tendency for the GIS to overestimate WSS. Detailed accounts of the methods used to measure WSS and recommendations regarding the measurement of WSS are provided.
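The precision ranking above rests on the coefficient of variation. A minimal sketch with illustrative repeat measurements (not the study's data):

```python
import numpy as np

def cv_percent(values):
    """Precision as the coefficient of variation: sample standard deviation
    expressed as a percentage of the mean."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# Illustrative repeat water-surface-slope estimates (%) for one reach
surveying_level = [1.01, 1.00, 1.01, 1.00]
clinometer      = [1.20, 0.80, 1.40, 0.70]
print(round(cv_percent(surveying_level), 2), round(cv_percent(clinometer), 2))
```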

  10. Accuracy and precision of protein-ligand interaction kinetics determined from chemical shift titrations.

    PubMed

    Markin, Craig J; Spyracopoulos, Leo

    2012-12-01

    NMR-monitored chemical shift titrations for the study of weak protein-ligand interactions represent a rich source of information regarding thermodynamic parameters such as dissociation constants (K_D) in the micro- to millimolar range, populations for the free and ligand-bound states, and the kinetics of interconversion between states, which are typically within the fast exchange regime on the NMR timescale. We recently developed two chemical shift titration methods wherein co-variation of the total protein and ligand concentrations gives increased precision for the K_D value of a 1:1 protein-ligand interaction (Markin and Spyracopoulos in J Biomol NMR 53:125-138, 2012). In this study, we demonstrate that classical line shape analysis, applied to a single set of 1H-15N 2D HSQC NMR spectra acquired using these precise protein-ligand chemical shift titration methods, produces accurate and precise kinetic parameters such as the off-rate (k_off). For experimentally determined kinetics in the fast exchange regime on the NMR timescale, k_off ~ 3,000 s^-1 in this work, the accuracy of classical line shape analysis was determined to be better than 5% by conducting quantum mechanical NMR simulations of the chemical shift titration methods with the magnetic resonance toolkit GAMMA. Using Monte Carlo simulations, the experimental precision for k_off from line shape analysis of NMR spectra was determined to be 13%, in agreement with the theoretical precision of 12% from line shape analysis of the GAMMA simulations in the presence of noise and protein concentration errors. In addition, GAMMA simulations were employed to demonstrate that line shape analysis has the potential to provide reasonably accurate and precise k_off values over a wide range, from 100 to 15,000 s^-1. The validity of line shape analysis for k_off values approaching intermediate exchange (~100 s^-1) may be facilitated by more accurate K_D measurements
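GAMMA performs full quantum-mechanical line shape simulations, which is beyond a short sketch; the Monte Carlo precision idea itself, however, can be illustrated with a toy exponential-rate fit under noise (all values illustrative, not the paper's line shape model):

```python
import numpy as np

# Synthetic decays with 2% multiplicative noise; the spread of fitted rates
# across repeats gives a Monte Carlo precision (CV) for the rate estimate.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1e-3, 50)   # s
k_true = 3000.0                  # s^-1, the fast-exchange regime of the study

def fit_rate(signal):
    """Log-linear least-squares estimate of an exponential decay rate."""
    return -np.polyfit(t, np.log(signal), 1)[0]

fits = np.array([
    fit_rate(np.exp(-k_true * t) * (1.0 + 0.02 * rng.normal(size=t.size)))
    for _ in range(500)
])
cv = 100.0 * fits.std(ddof=1) / fits.mean()
print(round(float(fits.mean())), round(float(cv), 2))
```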

  12. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    PubMed Central

    Asadian, Simin; Khatony, Alireza; Moradi, Gholamreza; Abdi, Alireza; Rezaei, Mansour

    2016-01-01

    Introduction An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating proper diagnostic and therapeutic actions; therefore, the aim of this study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement relative to central nasopharyngeal measurement. Methods In this prospective observational study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients’ body temperatures were measured by four peripheral methods (oral, axillary, tympanic, and forehead) along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed with paired t-tests, kappa coefficients, and receiver operating characteristic curves using the Statistical Package for the Social Sciences, version 19. Results There was a significant correlation between each of the peripheral methods and the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of the right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-tests demonstrated acceptable precision for the forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membrane, oral (P=1.00), and axillary (P=1.00) methods. The sensitivity and specificity of both the left and right tympanic membranes were higher than those of the other methods. Conclusion The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. The tympanic method (right and left) is recommended for assessing body temperature in intensive care units because of its high accuracy and acceptable precision. PMID:27621673

  13. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, the complex, adaptive, nonlinear organizations responsible for monitoring the success of ecorestoration projects tend to minimize disorder, and QA/QC is often perceived as an activity that creates disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although assessing the precision and accuracy of ecorestoration field data is conceptually the same as for laboratory data, the manner in which these data quality attributes are assessed differs. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess the precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data are essential for evaluating the success of an ecorestoration project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  14. Mapping stream habitats with a global positioning system: Accuracy, precision, and comparison with traditional methods

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.

    2006-01-01

    We tested the precision and accuracy of the Trimble GeoXT global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ between the two methods of data collection (GPS and tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with the GPS receiver. There were also no differences in the precision of habitat dimensions mapped using continuous versus position-fix-averaged GPS data collection methods. Despite some disadvantages to using GPS in stream habitat studies, measuring stream habitats with a GPS yielded spatially referenced data that allowed assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and offer logistical advantages over traditional methods of measurement. © 2006 Springer Science+Business Media

  15. Accuracy, precision, and method detection limits of quantitative PCR for airborne bacteria and fungi.

    PubMed

    Hospodsky, Denina; Yamamoto, Naomichi; Peccia, Jordan

    2010-11-01

    Real-time quantitative PCR (qPCR) for rapid and specific enumeration of microbial agents is finding increased use in aerosol science. The goal of this study was to determine qPCR accuracy, precision, and method detection limits (MDLs) within the context of indoor and ambient aerosol samples. Escherichia coli and Bacillus atrophaeus vegetative bacterial cells and Aspergillus fumigatus fungal spores loaded onto aerosol filters were considered. Efficiencies associated with recovery of DNA from aerosol filters were low, and excluding these efficiencies in quantitative analysis led to underestimating the true aerosol concentration by 10 to 24 times. Precision near detection limits ranged from a 28% to 79% coefficient of variation (COV) for the three test organisms, and the majority of this variation was due to instrument repeatability. Depending on the organism and sampling filter material, precision results suggest that qPCR is useful for determining dissimilarity between two samples only if the true differences are greater than 1.3 to 3.2 times (95% confidence level at n = 7 replicates). For MDLs, qPCR was able to produce a positive response with 99% confidence from the DNA of five B. atrophaeus cells and less than one A. fumigatus spore. Overall MDL values that included sample processing efficiencies ranged from 2,000 to 3,000 B. atrophaeus cells per filter and 10 to 25 A. fumigatus spores per filter. Applying the concepts of accuracy, precision, and MDL to qPCR aerosol measurements demonstrates that sample processing efficiencies must be accounted for in order to accurately estimate bioaerosol exposure, provides guidance on the necessary statistical rigor required to understand significant differences among separate aerosol samples, and prevents undetected (i.e., nonquantifiable) values for true aerosol concentrations that may be significant.
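The correction the authors call for is a division by the sample-processing efficiencies. A minimal sketch with illustrative efficiency values (the 10- to 24-fold underestimates in the abstract correspond to efficiency products of roughly 0.04 to 0.1):

```python
def corrected_count(measured_copies, extraction_efficiency, recovery_efficiency):
    """Back-correct a qPCR result for DNA-extraction and filter-recovery losses,
    the sample-processing efficiencies the study says must be accounted for."""
    return measured_copies / (extraction_efficiency * recovery_efficiency)

# Illustrative efficiencies: 5% recovery from the filter, 80% DNA extraction.
measured = 1.0e4  # copies detected by qPCR
true_est = corrected_count(measured, extraction_efficiency=0.80, recovery_efficiency=0.05)
print(round(true_est / measured, 1))  # 25.0-fold underestimate if efficiencies are ignored
```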

  16. Accuracy and Reproducibility in Quantification of Plasma Protein Concentrations by Mass Spectrometry without the Use of Isotopic Standards

    PubMed Central

    Kramer, Gertjan; Woolerton, Yvonne; van Straalen, Jan P.; Vissers, Johannes P. C.; Dekker, Nick; Langridge, James I.; Beynon, Robert J.; Speijer, Dave; Sturk, Auguste; Aerts, Johannes M. F. G.

    2015-01-01

    Background Quantitative proteomic analysis with mass spectrometry holds great promise for simultaneously quantifying proteins in various biosamples, such as human plasma. Thus far, studies addressing the reproducible measurement of endogenous protein concentrations in human plasma have focussed on targeted analyses employing isotopically labelled standards. Non-targeted proteomics, on the other hand, has been less employed to this end, even though it has been instrumental in discovery proteomics, generating large datasets in multiple fields of research. Results Using a non-targeted mass spectrometric assay (LCMSE), we quantified abundant plasma proteins (in the 43 mg/mL to 40 µg/mL range) in human blood plasma specimens from 30 healthy volunteers and one blood serum sample (ProteomeXchange: PXD000347). Quantitative results were obtained by label-free mass spectrometry using a single internal standard to estimate protein concentrations. This approach resulted in quantitative results for 59 proteins (cut off ≥11 samples quantified) of which 41 proteins were quantified in all 31 samples and 23 of these with an inter-assay variability of ≤ 20%. Results for 7 apolipoproteins were compared with those obtained using isotope-labelled standards, while 12 proteins were compared to routine immunoassays. Comparison of quantitative data obtained by LCMSE and immunoassays showed good to excellent correlations in relative protein abundance (r = 0.72–0.96) and comparable median concentrations for 8 out of 12 proteins tested. Plasma concentrations of 56 proteins determined by LCMSE were of similar accuracy as those reported by targeted studies and 7 apolipoproteins quantified by isotope-labelled standards, when compared to reference concentrations from literature. Conclusions This study shows that LCMSE offers good quantification of relative abundance as well as reasonable estimations of concentrations of abundant plasma proteins. PMID:26474480

  17. To address accuracy and precision using methods from analytical chemistry and computational physics.

    PubMed

    Kozmutza, Cornelia; Picó, Yolanda

    2009-04-01

    In this work, pesticides were determined by liquid chromatography-mass spectrometry (LC-MS). The occurrence of imidacloprid in 343 samples of oranges, tangerines, date plums, and watermelons from the Valencian Community (Spain) was investigated. Nine additional pesticides were chosen because they are recommended for orchard treatment together with imidacloprid. Mulliken population analysis was applied to characterize the charge distribution in imidacloprid. Partitioned energy terms and virial ratios were calculated for selected molecules entering into interaction. A new technique based on comparing the decomposed total energy terms at various configurations is demonstrated, and the interaction ability could be established correctly in the case studied. An attempt is also made to address accuracy and precision, quantities that are well known in experimental measurements. When a precise theoretical description is achieved for the contributing monomers and for the interacting complex, some properties of the complex can be predicted to quite good accuracy. Based on simple hypothetical considerations, we estimate the impact of applying computations on reducing the amount of analytical work.

  18. Accuracy and Precision in Measurements of Biomass Oxidative Ratio and Carbon Oxidation State

    NASA Astrophysics Data System (ADS)

    Gallagher, M. E.; Masiello, C. A.; Randerson, J. T.; Chadwick, O. A.; Robertson, G. P.

    2007-12-01

    Ecosystem oxidative ratio (OR) is a critical parameter in the apportionment of anthropogenic CO2 between the terrestrial biosphere and ocean carbon reservoirs. OR is the ratio of O2 to CO2 in gas exchange fluxes between the terrestrial biosphere and atmosphere. Ecosystem OR is linearly related to biomass carbon oxidation state (Cox), a fundamental property of the earth system describing the bonding environment of carbon in molecules. Cox can range from -4 to +4 (CH4 to CO2). Variations in both Cox and OR are driven by photosynthesis, respiration, and decomposition. We are developing several techniques to accurately measure variations in ecosystem Cox and OR; these include elemental analysis, bomb calorimetry, and 13C nuclear magnetic resonance spectroscopy. A previous study, comparing the accuracy and precision of elemental analysis versus bomb calorimetry for pure chemicals, showed that elemental analysis-based measurements are more accurate, while calorimetry-based measurements yield more precise data. However, the limited biochemical range of natural samples makes it possible that calorimetry may ultimately prove most accurate, as well as most cost-effective. Here we examine more closely the accuracy of Cox and OR values generated by calorimetry on a large set of natural biomass samples collected from the Kellogg Biological Station-Long Term Ecological Research (KBS-LTER) site in Michigan.

  19. A simple device for high-precision head image registration: Preliminary performance and accuracy tests

    SciTech Connect

    Pallotta, Stefania

    2007-05-15

    The purpose of this paper is to present a new device for multimodal head study registration and to examine its performance in preliminary tests. The device consists of a system of eight markers fixed to mobile carbon pipes and bars which can be easily mounted on the patient's head using the ear canals and the nasal bridge. Four graduated scales fixed to the rigid support allow examiners to find the same device position on the patient's head during different acquisitions. The markers can be filled with appropriate substances for visualisation in computed tomography (CT), magnetic resonance, single photon emission computer tomography (SPECT) and positron emission tomography images. The device's rigidity and its position reproducibility were measured in 15 repeated CT acquisitions of the Alderson Rando anthropomorphic phantom and in two SPECT studies of a patient. The proposed system displays good rigidity and reproducibility characteristics. A relocation accuracy of less than 1.5 mm was found in more than 90% of the results. The registration parameters obtained using such a device were compared to those obtained using fiducial markers fixed on phantom and patient heads, resulting in differences of less than 1 deg. and 1 mm for rotation and translation parameters, respectively. Residual differences between fiducial marker coordinates in reference and in registered studies were less than 1 mm in more than 90% of the results, proving that the device performed as accurately as noninvasive stereotactic devices. Finally, an example of multimodal employment of the proposed device is reported.

  20. Precision and accuracy of spectrophotometric pH measurements at environmental conditions in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Hammer, Karoline; Schneider, Bernd; Kuliński, Karol; Schulz-Bull, Detlef E.

    2014-06-01

    The increasing uptake of anthropogenic CO2 by the oceans has raised interest in precise and accurate pH measurement in order to assess the impact on the marine CO2 system. Spectrophotometric pH measurements were refined during the last decade, yielding a precision and accuracy that cannot be achieved with the conventional potentiometric method. Until now, however, the method had been tested only in oceanic systems with relatively stable, high salinity and a small pH range. This paper describes the first application of such a pH measurement system under the conditions of the Baltic Sea, which is characterized by wide salinity and pH ranges. The performance of the spectrophotometric system at pH values as low as 7.0 (“total” scale) and salinities between 0 and 35 was examined using TRIS-buffer solutions, certified reference materials, and consistency tests against measurements of other parameters of the marine CO2 system. Using m-cresol purple as the indicator dye and a spectrophotometric measurement system designed at Scripps Institution of Oceanography (B. Carter, A. Dickson), a precision better than ±0.001 and an accuracy between ±0.01 and ±0.02 were achieved within the observed pH and salinity ranges of the Baltic Sea. The influence of the indicator dye on the pH of the sample was determined theoretically and is presented as a pH correction term for the different alkalinity regimes of the Baltic Sea. Given these encouraging tests, the ease of operation, and the fact that the measurements refer to the internationally accepted “total” pH scale, the spectrophotometric method is recommended for pH monitoring and trend detection in the Baltic Sea as well.
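The underlying m-cresol purple calculation follows the Clayton & Byrne (1993) form. The absorptivity-ratio constants below are the commonly cited 25 °C seawater values; Baltic work at low salinity would need the salinity-dependent characterization the paper describes, so treat this strictly as a sketch:

```python
import math

# Commonly cited Clayton & Byrne (1993) m-cresol purple absorptivity ratios
# (25 °C seawater values; verify before use at other temperatures/salinities)
E1, E2, E3 = 0.00691, 2.2220, 0.1331

def ph_total_mcp(a578, a434, pk2):
    """Spectrophotometric pH on the 'total' scale from m-cresol purple
    absorbances at 578 and 434 nm, given the indicator pK2 appropriate to the
    sample's temperature and salinity."""
    r = a578 / a434
    return pk2 + math.log10((r - E1) / (E2 - r * E3))

print(round(ph_total_mcp(1.0, 1.0, 8.00), 3))
```

A larger absorbance ratio R = A578/A434 (more of the basic indicator form) maps monotonically to a higher pH, which is the property the precision of the method rests on.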

  1. Improvement in precision, accuracy, and efficiency in standardizing the characterization of granular materials

    SciTech Connect

    Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.

    2013-01-01

    Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented to provide a revised methodology with the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. The result is better, faster, cheaper characterization of multiphase systems. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal, and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. The accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed higher than those of present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future design.
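The paper's "novel form" of the Ergun equation is not reproduced here, but the standard Ergun equation it builds on, and its use to locate minimum fluidization, can be sketched (all property values illustrative):

```python
import numpy as np

def ergun_dp_per_l(u, eps, dp, mu=1.8e-5, rho=1.2):
    """Ergun packed-bed pressure gradient (Pa/m) at superficial velocity u (m/s),
    void fraction eps, and particle diameter dp (m); defaults are ambient air."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * dp ** 2)
    inertial = 1.75 * rho * (1.0 - eps) * u ** 2 / (eps ** 3 * dp)
    return viscous + inertial

# Minimum fluidization: the Ergun pressure gradient balances the buoyant bed
# weight per unit height, (1 - eps) * (rho_p - rho) * g. Values are illustrative.
rho_p, g, eps, dp = 2500.0, 9.81, 0.45, 200e-6   # a Geldart-B-like sand
weight = (1.0 - eps) * (rho_p - 1.2) * g
u = np.linspace(1e-4, 0.2, 20000)
u_mf = u[np.argmin(np.abs(ergun_dp_per_l(u, eps, dp) - weight))]
print(round(1000 * u_mf, 1), "mm/s")
```

Conversely, fitting measured pressure drops against this equation is how a bed's void fraction and particle density can be extracted, which is the use the abstract describes.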

  2. Hepatic perfusion in a tumor model using DCE-CT: an accuracy and precision study

    NASA Astrophysics Data System (ADS)

    Stewart, Errol E.; Chen, Xiaogang; Hadway, Jennifer; Lee, Ting-Yim

    2008-08-01

    In the current study we investigate the accuracy and precision of hepatic perfusion measurements based on the Johnson and Wilson model with the adiabatic approximation. VX2 carcinoma cells were implanted into the livers of New Zealand white rabbits. Simultaneous dynamic contrast-enhanced computed tomography (DCE-CT) and radiolabeled microsphere studies were performed under steady-state normo-, hyper- and hypo-capnia. The hepatic arterial blood flows (HABF) obtained using both techniques were compared with ANOVA. The precision was assessed by the coefficient of variation (CV). Under normo-capnia the microsphere HABF were 51.9 ± 4.2, 40.7 ± 4.9 and 99.7 ± 6.0 ml min^-1 (100 g)^-1 while DCE-CT HABF were 50.0 ± 5.7, 37.1 ± 4.5 and 99.8 ± 6.8 ml min^-1 (100 g)^-1 in normal tissue, tumor core and rim, respectively. There were no significant differences between HABF measurements obtained with both techniques (P > 0.05). Furthermore, a strong correlation was observed between HABF values from both techniques: slope of 0.92 ± 0.05, intercept of 4.62 ± 2.69 ml min^-1 (100 g)^-1 and R^2 = 0.81 ± 0.05 (P < 0.05). The Bland-Altman plot comparing DCE-CT and microsphere HABF measurements gives a mean difference of -0.13 ml min^-1 (100 g)^-1, which is not significantly different from zero. DCE-CT HABF is precise, with CV of 5.7, 24.9 and 1.4% in the normal tissue, tumor core and rim, respectively. Non-invasive measurement of HABF with DCE-CT is accurate and precise. DCE-CT can be an important extension of CT to assess hepatic function besides morphology in liver diseases.
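The Bland-Altman comparison used above reduces to the mean paired difference and its 1.96-sigma limits of agreement. A minimal sketch with illustrative paired values (not the study's full dataset):

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement for paired measurements."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Illustrative paired HABF values, ml min^-1 (100 g)^-1 (DCE-CT vs. microspheres)
dce_ct      = [50.0, 37.1, 99.8, 52.3, 41.0]
microsphere = [51.9, 40.7, 99.7, 50.8, 39.5]
bias, lo, hi = bland_altman(dce_ct, microsphere)
print(round(bias, 2))  # -0.48
```

Agreement is claimed when the bias is indistinguishable from zero and the limits of agreement are narrow enough for the application, which is the criterion the abstract applies.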

  3. Accuracy and precision of integumental linear dimensions in a three-dimensional facial imaging system

    PubMed Central

    Kim, Soo-Hwan; Jung, Woo-Young; Seo, Yu-Jin; Kim, Kyung-A; Park, Ki-Ho

    2015-01-01

    Objective A recently developed facial scanning method uses three-dimensional (3D) surface imaging with a light-emitting diode. Such scanning enables surface data to be captured in high-resolution color and at relatively fast speeds. The purpose of this study was to evaluate the accuracy and precision of 3D images obtained using the Morpheus 3D® scanner (Morpheus Co., Seoul, Korea). Methods The sample comprised 30 subjects aged 24-34 years (mean 29.0 ± 2.5 years). To test the correlation between direct and 3D image measurements, 21 landmarks were labeled on the face of each subject. Sixteen direct measurements were obtained twice using digital calipers; the same measurements were then made on two sets of 3D facial images. The mean values of measurements obtained from both methods were compared. To investigate the precision, a comparison was made between two sets of measurements taken with each method. Results When comparing the variables from both methods, five of the 16 possible anthropometric variables were found to be significantly different. However, in 12 of the 16 cases, the mean difference was under 1 mm. The average value of the differences for all variables was 0.75 mm. Precision was high in both methods, with error magnitudes under 0.5 mm. Conclusions 3D scanning images have high levels of precision and fairly good congruence with traditional anthropometry methods, with mean differences of less than 1 mm. 3D surface imaging using the Morpheus 3D® scanner is therefore a clinically acceptable method of recording facial integumental data. PMID:26023538

  4. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the time convergence it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. 
As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant, resulting in an increase in position accuracy (mostly in the less favorable East direction) and a large reduction of the convergence time.
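The convergence criterion used above (time until the solution reaches geodetic-level, 10 cm accuracy, and stays there) can be sketched as a scan over a position-error time series. A hedged illustration; the error series is synthetic, not processed MGEX data:

```python
# Illustrative convergence-time check: the first epoch after which the
# horizontal position error stays below the 10 cm geodetic-level threshold.
# The error series is synthetic, not RTKLIB output.

def convergence_epoch(errors_m, threshold=0.10):
    """Index of the first epoch from which all subsequent errors stay
    below the threshold, or None if the series never converges."""
    idx = None
    for i, e in enumerate(errors_m):
        if e < threshold:
            if idx is None:
                idx = i
        else:
            idx = None    # error rose back above the threshold: reset
    return idx

errors = [1.2, 0.6, 0.3, 0.12, 0.09, 0.11, 0.08, 0.07, 0.06]  # m, per epoch
epoch = convergence_epoch(errors)   # the brief excursion at epoch 5 resets it
```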

  5. Slight pressure imbalances can affect accuracy and precision of dual inlet-based clumped isotope analysis.

    PubMed

    Fiebig, Jens; Hofmann, Sven; Löffler, Niklas; Lüdecke, Tina; Methner, Katharina; Wacker, Ulrike

    2016-01-01

It is well known that a subtle nonlinearity can occur during clumped isotope analysis of CO2 that - if left unaddressed - limits accuracy. The nonlinearity is induced by a negative background on the m/z 47 ion Faraday cup, whose magnitude is correlated with the intensity of the m/z 44 ion beam. The origin of the negative background remains unclear, but is possibly due to secondary electrons. Usually, CO2 gases of distinct bulk isotopic compositions are equilibrated at 1000 °C and measured along with the samples in order to be able to correct for this effect. Alternatively, measured m/z 47 beam intensities can be corrected for the contribution of secondary electrons after monitoring how the negative background on m/z 47 evolves with the intensity of the m/z 44 ion beam. The latter correction procedure seems to work well if the m/z 44 cup exhibits a wider slit width than the m/z 47 cup. Here we show that the negative m/z 47 background affects the precision of dual inlet-based clumped isotope measurements of CO2 unless raw m/z 47 intensities are directly corrected for the contribution of secondary electrons. Moreover, inaccurate results can be obtained even if the heated gas approach is used to correct for the observed nonlinearity. The impact of the negative background on accuracy and precision arises from small imbalances in m/z 44 ion beam intensities between reference and sample CO2 measurements. It becomes more significant as the relative contribution of secondary electrons to the m/z 47 signal increases and as the flux rate of CO2 into the ion source is raised. These problems can be overcome by correcting the measured m/z 47 ion beam intensities of sample and reference gas for the contributions deriving from secondary electrons after scaling these contributions to the intensities of the corresponding m/z 49 ion beams. Accuracy and precision of this correction are demonstrated by clumped isotope analysis of three internal carbonate standards. The
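The correction principle described (removing the negative secondary-electron background on m/z 47 after scaling it to the m/z 49 beam intensity) can be sketched as below. The linear scale factor and all intensities are illustrative assumptions; the abstract states the principle, not this exact functional form:

```python
# Illustrative secondary-electron correction: the negative m/z 47 background
# is scaled from the m/z 49 beam intensity and subtracted from both sample
# and reference measurements. Scale factor and intensities are assumptions.

def correct_m47(i47_measured, i49_measured, scale_47_49):
    """Remove the (negative) secondary-electron contribution from m/z 47."""
    background_47 = scale_47_49 * i49_measured   # negative quantity
    return i47_measured - background_47

i47_corrected = correct_m47(i47_measured=4.98e-3,
                            i49_measured=2.0e-4,
                            scale_47_49=-0.1)    # hypothetical scale factor
```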

  7. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

The characterization of the ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24-h data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the hardware delay bias of the receiver, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are larger, while the STDs (standard deviations) are better than 0.11 m. When satellite differencing is used, the hardware delay bias is canceled. The interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the cm level.
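The RMS-versus-STD distinction drawn in this abstract follows from a constant bias (the receiver hardware delay) inflating the RMS but not the STD. A small synthetic illustration; the 0.5 m bias and noise values are assumptions:

```python
# Synthetic illustration: a constant hardware-delay bias inflates the RMS
# of the ionosphere-delay errors but leaves the STD unaffected.

def rms(xs):
    """Root mean square about zero (includes any constant bias)."""
    return (sum(x * x for x in xs) / len(xs)) ** 0.5

def std(xs):
    """Standard deviation about the mean (bias-free scatter)."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

noise = (-0.05, 0.02, 0.04, -0.01)       # m, zero-mean estimation noise
errors = [0.5 + n for n in noise]        # 0.5 m hardware-delay bias added

scatter = std(errors)   # ~0.034 m: well under the 0.11 m STD bound above
total = rms(errors)     # ~0.50 m: dominated by the uncanceled bias
```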

  8. Precision and accuracy testing of FMCW ladar-based length metrology.

    PubMed

    Mateo, Ana Baselga; Barber, Zeb W

    2015-07-01

The calibration and traceability of high-resolution frequency modulated continuous wave (FMCW) ladar sources is a requirement for their use in length and volume metrology. We report the calibration of FMCW ladar length measurement systems by use of spectroscopy of the molecular frequency references HCN (C-band) or CO (L-band) to calibrate the chirp rate of the FMCW sources. Propagating the stated uncertainties from the molecular calibrations provided by NIST and the measurement errors provides an estimated uncertainty of a few ppm for the FMCW system. As a test of this calibration, a displacement measurement interferometer with a laser wavelength close to that of our FMCW system was built to make comparisons of the relative precision and accuracy. The comparisons performed show <10 ppm agreement, which was within the combined estimated uncertainties of the FMCW system and interferometer. PMID:26193146
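Why the chirp-rate calibration sets the length accuracy follows from the basic FMCW range relation R = c·f_beat/(2κ): a fractional error in the chirp rate κ maps one-to-one into a fractional range error. A sketch with illustrative numbers, not the paper's values:

```python
# Illustrative FMCW range computation: R = c * f_beat / (2 * kappa).
# kappa (chirp rate) and the 1.5 m target range are assumed values; the
# point is that a ppm-level chirp-rate error gives a ppm-level range error.

C = 299_792_458.0  # speed of light in vacuum, m/s

def fmcw_range(f_beat_hz, chirp_rate_hz_per_s):
    """Target range from the measured beat frequency and calibrated chirp rate."""
    return C * f_beat_hz / (2.0 * chirp_rate_hz_per_s)

kappa = 1.0e14                         # Hz/s, illustrative chirp rate
f_beat = 2.0 * kappa * 1.5 / C         # ideal beat note for a 1.5 m target

r_true = fmcw_range(f_beat, kappa)                  # 1.5 m
r_biased = fmcw_range(f_beat, kappa * (1 + 5e-6))   # 5 ppm chirp-rate error
rel_error = abs(r_biased - r_true) / r_true         # ~5e-6, i.e. 5 ppm
```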

  9. Accuracy improvement of protrusion angle of carbon nanotube tips by precision multiaxis nanomanipulator

    SciTech Connect

    Young Song, Won; Young Jung, Ki; O, Beom-Hoan; Park, Byong Chon

    2005-02-01

In order to manufacture a carbon nanotube (CNT) tip in which the attachment angle and position of the CNT were precisely adjusted, a nanomanipulator was installed inside a scanning electron microscope (SEM). A CNT tip, an atomic force microscopy (AFM) probe to which a nanotube is attached, is known to be the most appropriate probe for measuring high-aspect-ratio features. The developed nanomanipulator has two sets of modules with three degrees of freedom of rectilinear motion and one of rotational motion, at an accuracy of tens of nanometers, enabling the manufacture of more accurate CNT tips. The present study developed a CNT tip with an attachment-angle error of less than 10° through three-dimensional manipulation of a multiwalled carbon nanotube and an AFM probe inside the SEM.

  11. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    SciTech Connect

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In conclusion, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.

  12. Accuracy and precision of estimating age of gray wolves by tooth wear

    USGS Publications Warehouse

    Gipson, P.S.; Ballard, W.B.; Nowak, R.M.; Mech, L.D.

    2000-01-01

We evaluated the accuracy and precision of tooth wear for aging gray wolves (Canis lupus) from Alaska, Minnesota, and Ontario based on 47 known-age or known-minimum-age skulls. Estimates of age using tooth wear and a commercial cementum annuli-aging service were useful for wolves up to 14 years old. The precision of estimates from cementum annuli was greater than that of estimates from tooth wear, but tooth wear estimates are more applicable in the field. We tended to overestimate age by 1-2 years and occasionally by 3 or 4 years. The commercial service aged young wolves with cementum annuli to within ± 1 year of actual age, but underestimated ages of wolves ≥ 9 years old by 1-3 years. No differences were detected in tooth wear patterns for wild wolves from Alaska, Minnesota, and Ontario, nor between captive and wild wolves. Tooth wear was not appropriate for aging wolves with an underbite that prevented normal wear, or with severely broken and missing teeth.

  13. Transfer accuracy and precision scoring in planar bone cutting validated with ex vivo data.

    PubMed

    Milano, Federico Edgardo; Ritacco, Lucas Eduardo; Farfalli, Germán Luis; Bahamonde, Luis Alberto; Aponte-Tinao, Luis Alberto; Risk, Marcelo

    2015-05-01

The use of interactive surgical scenarios for virtual preoperative planning of osteotomies has increased in the last 5 years. As reported by several authors, this technology has been used in tumor resection osteotomies, knee osteotomies, and spine surgery with good results. A digital three-dimensional preoperative plan makes it possible to quantitatively evaluate the transfer process from the virtual plan to the anatomy of the patient. We introduce an exact definition of the accuracy and precision of this transfer process for planar bone cutting. We present a method to compute these properties from ex vivo data. We also propose a clinical score to assess the goodness of a cut. A computer simulation is used to characterize the definitions and the data generated by the measurement method. The definitions and method are evaluated on 17 ex vivo planar cuts of tumor resection osteotomies. The results show that the proposed method and definitions are highly correlated with a previous definition of accuracy based on ISO 1101. The score is also evaluated by showing that it distinguishes among different transfer techniques based on its distribution location and shape. The introduced definitions produce acceptable results in cases where the ISO-based definition produces counterintuitive results.
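One common way to make accuracy and precision of a planar cut concrete (not necessarily the authors' exact definition) is via signed point-to-plane distances from the achieved cut surface to the planned plane:

```python
# Illustrative accuracy/precision for a planar cut via signed point-to-plane
# distances from sampled points on the achieved cut to the planned plane.
# Plane, normal, and points are made-up values, not ex vivo data.

def signed_distance(point, plane_point, unit_normal):
    """Signed distance from point to the plane through plane_point with
    the given unit normal."""
    return sum(n * (p - q) for p, q, n in zip(point, plane_point, unit_normal))

plane_point = (0.0, 0.0, 0.0)
normal = (0.0, 0.0, 1.0)               # planned cutting plane: z = 0
cut_points = [(1.0, 0.0, 0.4), (0.0, 1.0, 0.6), (1.0, 1.0, 0.5)]  # mm

d = [signed_distance(p, plane_point, normal) for p in cut_points]
accuracy = sum(d) / len(d)                                         # mean offset, mm
precision = (sum((x - accuracy) ** 2 for x in d) / len(d)) ** 0.5  # spread, mm
```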

  14. Accuracy and precision of gait events derived from motion capture in horses during walk and trot.

    PubMed

    Boye, Jenny Katrine; Thomsen, Maj Halling; Pfau, Thilo; Olsen, Emil

    2014-03-21

This study aimed to create an evidence base for detection of stance-phase timings from motion capture in horses. The objective was to compare the accuracy (bias) and precision (SD) of five published algorithms for the detection of hoof-on and hoof-off using force plates as the reference standard. Six horses were walked and trotted over eight force plates surrounded by a synchronised 12-camera infrared motion capture system. The five algorithms (A-E) were based on: (A) horizontal velocity of the hoof; (B) fetlock angle and horizontal hoof velocity; (C) horizontal displacement of the hoof relative to the centre of mass; (D) horizontal velocity of the hoof relative to the centre of mass; and (E) vertical acceleration of the hoof. A total of 240 stance phases in walk and 240 stance phases in trot were included in the assessment. Method D provided the most accurate and precise results in walk for stance phase duration, with a bias of 4.1% for front limbs and 4.8% for hind limbs. For trot we derived a combination of method A for hoof-on and method E for hoof-off, resulting in a bias of -6.2% of stance in the front limbs, and method B for the hind limbs, with a bias of 3.8% of stance phase duration. We conclude that motion capture yields accurate and precise detection of gait events for horses walking and trotting over ground, and the results emphasise a need for different algorithms for front limbs versus hind limbs in trot.
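A velocity-threshold event detector in the spirit of algorithm A can be sketched as follows; the threshold value and speed trace are illustrative assumptions, not the published algorithm's parameters:

```python
# Illustrative velocity-threshold gait-event detection: hoof-on when the
# horizontal hoof speed first drops below a threshold, hoof-off when it
# rises back above it. Threshold and trace are assumed values.

def detect_stance(speed, threshold=0.2):
    """Return (hoof_on, hoof_off) sample indices, or (None, None)."""
    hoof_on = next((i for i, v in enumerate(speed) if v < threshold), None)
    if hoof_on is None:
        return None, None
    hoof_off = next((i for i in range(hoof_on, len(speed))
                     if speed[i] >= threshold), None)
    return hoof_on, hoof_off

speed = [1.5, 0.9, 0.4, 0.1, 0.05, 0.04, 0.1, 0.6, 1.2]   # m/s, per sample
on, off = detect_stance(speed)    # stance spans samples 3..6; 7 is swing
```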

  15. Gaining Precision and Accuracy on Microprobe Trace Element Analysis with the Multipoint Background Method

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.; Williams, M. L.; Jercinovic, M. J.; Donovan, J. J.

    2014-12-01

Electron microprobe trace element analysis is a significant challenge, but can provide critical data when high spatial resolution is required. Due to the low peak intensity, the accuracy and precision of such analyses rely critically on background measurements, and on the accuracy of any pertinent peak interference corrections. A linear regression between two points selected at appropriate off-peak positions is a classical approach for background characterization in microprobe analysis. However, this approach disallows an accurate assessment of background curvature (usually exponential). Moreover, if present, background interferences can dramatically affect the results if underestimated or ignored. The acquisition of a quantitative WDS scan over the spectral region of interest is still a valuable option for determining the background intensity and curvature from a fitted regression of background portions of the scan, but this technique retains an element of subjectivity, as the analyst has to select areas in the scan which appear to represent background. We present here a new method, "Multi-Point Background" (MPB), that allows acquiring up to 24 off-peak background measurements from wavelength positions around the peaks. This method aims to improve the accuracy, precision, and objectivity of trace element analysis. The overall efficiency is improved because no systematic WDS scan needs to be acquired in order to check for the presence of possible background interferences. Moreover, the method is less subjective because "true" backgrounds are selected by the statistical exclusion of erroneous background measurements, reducing the need for analyst intervention. This idea originated from efforts to refine EPMA monazite U-Th-Pb dating, where it was recognised that background errors (peak interference or background curvature) could result in errors of several tens of millions of years in the calculated age. Results obtained on a CAMECA SX-100 "UltraChron" using monazite
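The core MPB idea of statistically excluding erroneous background points before interpolating under the peak can be sketched with a straight-line fit and one rejection pass. The real method also models background curvature and WDS specifics; the data and the 2-sigma cut below are assumptions:

```python
# Illustrative multi-point background: least-squares line through many
# off-peak points, one pass of outlier rejection (> k * sigma), then
# interpolation of the background at the peak position.

def linear_fit(xs, ys):
    """Ordinary least squares; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def background_at(x0, xs, ys, k=2.0):
    """Background at x0 after excluding points deviating > k * sigma."""
    a, b = linear_fit(xs, ys)
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    sigma = (sum(r * r for r in resid) / len(resid)) ** 0.5
    kept = [(x, y) for x, y, r in zip(xs, ys, resid) if abs(r) <= k * sigma]
    kept_x, kept_y = zip(*kept)
    a, b = linear_fit(kept_x, kept_y)
    return a + b * x0

positions = [1.0, 2.0, 3.0, 4.0, 6.0, 7.0, 8.0, 9.0]       # off-peak positions
intensities = [10.0, 9.0, 8.0, 7.0, 5.0, 4.0, 30.0, 2.0]   # one interfered point

bg = background_at(5.0, positions, intensities)  # interpolated under the peak
```

The interfered point (30.0) is rejected in the exclusion pass, so the interpolated background lands on the clean trend.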

  16. Systematic accuracy and precision analysis of video motion capturing systems--exemplified on the Vicon-460 system.

    PubMed

    Windolf, Markus; Götzen, Nils; Morlock, Michael

    2008-08-28

With rising demand for highly accurate acquisition of small motions, the use of video-based motion capture is becoming more and more popular. However, the performance of these systems strongly depends on a variety of influencing factors. A method was developed in order to systematically assess the accuracy and precision of motion capturing systems with regard to influential system parameters. A calibration and measurement robot was designed to perform a repeatable dynamic calibration and to determine the resultant system accuracy and precision in a control volume investigating small motion magnitudes (180 × 180 × 150 mm³). The procedure was exemplified on the Vicon-460 system. The following parameters were analyzed: camera setup, calibration volume, marker size, and lens filter application. Equipped with four cameras, the Vicon-460 system provided an overall accuracy of 63 ± 5 µm and overall precision (noise level) of 15 µm for the most favorable parameter setting. Arbitrary changes in camera arrangement revealed variations in mean accuracy between 76 and 129 µm. The noise level normal to the cameras' projection plane was found to be higher than in the other coordinate directions. Measurements including regions unaffected by the dynamic calibration reflected considerably lower accuracy (221 ± 79 µm). Larger marker diameters led to higher accuracy and precision. Accuracy dropped significantly when using an optical lens filter. This study revealed significant influence of the system environment on the performance of video-based motion capturing systems. With careful configuration, optical motion capturing provides a powerful measuring opportunity for the majority of biomechanical applications.

  17. Improving accuracy and precision in biological applications of fluorescence lifetime imaging microscopy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Wei

    The quantitative understanding of cellular and molecular responses in living cells is important for many reasons, including identifying potential molecular targets for treatments of diseases like cancer. Fluorescence lifetime imaging microscopy (FLIM) can quantitatively measure these responses in living cells by producing spatially resolved images of fluorophore lifetime, and has advantages over intensity-based measurements. However, in live-cell microscopy applications using high-intensity light sources such as lasers, maintaining biological viability remains critical. Although high-speed, time-gated FLIM significantly reduces light delivered to live cells, making measurements at low light levels remains a challenge affecting quantitative FLIM results. We can significantly improve both accuracy and precision in gated FLIM applications. We use fluorescence resonance energy transfer (FRET) with fluorescent proteins to detect molecular interactions in living cells: the use of FLIM, better fluorophores, and temperature/CO2 controls can improve live-cell FRET results with higher consistency, better statistics, and less non-specific FRET (for negative control comparisons, p-value = 0.93 (physiological) vs. 9.43E-05 (non-physiological)). Several lifetime determination methods are investigated to optimize gating schemes. We demonstrate a reduction in relative standard deviation (RSD) from 52.57% to 18.93% with optimized gating in an example under typical experimental conditions. We develop two novel total variation (TV) image denoising algorithms, FWTV ( f-weighted TV) and UWTV (u-weighted TV), that can achieve significant improvements for real imaging systems. With live-cell images, they improve the precision of local lifetime determination without significantly altering the global mean lifetime values (<5% lifetime changes). Finally, by combining optimal gating and TV denoising, even low-light excitation can achieve precision better than that obtained in high
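Gated lifetime determination of the kind whose gating schemes are optimized above is often illustrated with two-gate rapid lifetime determination (RLD), where tau = dt / ln(D0/D1) for a mono-exponential decay. A sketch with synthetic, noise-free values:

```python
# Illustrative two-gate rapid lifetime determination (RLD) for a
# mono-exponential decay: tau = dt / ln(D0 / D1), with D0 and D1 the
# counts in two equal-width gates separated by dt. Values are synthetic.

import math

def rld_lifetime(d0, d1, dt):
    """Lifetime from two gated intensity measurements of an exponential decay."""
    return dt / math.log(d0 / d1)

tau_true = 2.5                        # ns, assumed true lifetime
dt = 1.0                              # ns, gate separation
d0 = 1000.0                           # counts in the first gate
d1 = d0 * math.exp(-dt / tau_true)    # ideal, noise-free second gate

tau = rld_lifetime(d0, d1, dt)        # recovers the true lifetime (no noise)
```

With photon noise added, the precision of tau depends strongly on the gate placement, which is the trade-off the optimization above addresses.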

  18. Parallaxes and Proper Motions of QSOs: A Test of Astrometric Precision and Accuracy

    NASA Astrophysics Data System (ADS)

    Harris, Hugh C.; Dahn, Conard C.; Zacharias, Norbert; Canzian, Blaise; Guetter, Harry H.; Levine, Stephen E.; Luginbuhl, Christian B.; Monet, Alice K. B.; Monet, David G.; Pier, Jeffrey R.; Stone, Ronald C.; Subasavage, John P.; Tilleman, Trudy; Walker, Richard L.; Johnston, Kenneth J.

    2016-11-01

    Optical astrometry of 12 fields containing quasi-stellar objects (QSOs) is presented. The targets are radio sources in the International Celestial Reference Frame with accurate radio positions that also have optical counterparts. The data are used to test several quantities: the internal precision of the relative optical astrometry, the relative parallaxes and proper motions, the procedures to correct from relative to absolute parallax and proper motion, the accuracy of the absolute parallaxes and proper motions, and the stability of the optical photocenters for these optically variable QSOs. For these 12 fields, the mean error in absolute parallax is 0.38 mas and the mean error in each coordinate of absolute proper motion is 1.1 mas yr‑1. The results yield a mean absolute parallax of ‑0.03 ± 0.11 mas. For 11 targets, we find no significant systematic motions of the photocenters at the level of 1–2 mas over the 10 years of this study; for one BL Lac object, we find a possible motion of 4 mas correlated with its brightness.

  19. Precision (Repeatability and Reproducibility) and Agreement of Corneal Power Measurements Obtained by Topcon KR-1W and iTrace

    PubMed Central

    Hua, Yanjun; Xu, Zequan; Qiu, Wei; Wu, Qiang

    2016-01-01

Purpose To evaluate the repeatability and reproducibility of corneal power measurements obtained by Topcon KR-1W and iTrace, and assess the agreement with measurements obtained by Allegro Topolyzer and IOLMaster. Methods The right eyes of 100 normal subjects were prospectively scanned 3 times using all 4 devices. Another observer performed an additional 3 consecutive scans using the Topcon KR-1W and iTrace in the same session. About one week later, the first observer repeated the measurements using the Topcon KR-1W and iTrace. The steep keratometry (Ks), flat keratometry (Kf), mean keratometry (Km), J0 and J45 were analyzed. Repeatability and reproducibility of measurements were evaluated by the within-subject standard deviation (Sw), coefficient of variation (CoV), test-retest repeatability (2.77Sw), and intraclass correlation coefficient (ICC). Agreement between devices was assessed using Bland-Altman analysis and 95% limits of agreement (LoA). Results Intraobserver repeatability and interobserver and intersession reproducibility of the Ks, Kf and Km showed a CoV of no more than 0.5%, a 2.77Sw of 0.70 D or less, and an ICC of no less than 0.99. However, J0 and J45 showed poor intraobserver repeatability and interobserver and intersession reproducibility (all ICCs not greater than 0.446). Statistically significant differences existed between Topcon KR-1W and IOLMaster, Topcon KR-1W and iTrace, Topcon KR-1W and Topolyzer, iTrace and Topolyzer, iTrace and IOLMaster for Ks, Kf and Km measurements (all P < 0.05). The mean differences between Topcon KR-1W, iTrace, and the other 2 devices were small. The 95% LoA were approximately 1.0 D to 1.5 D for all measurements. Conclusions The Ks, Kf and Km obtained by Topcon KR-1W and iTrace showed excellent intraobserver repeatability and interobserver and intersession reproducibility in normal eyes. The agreement between Topcon KR-1W and Topolyzer, Topcon KR-1W and IOLMaster, iTrace and Topolyzer, iTrace and IOLMaster
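The repeatability statistics used in this abstract (Sw, 2.77Sw, CoV) can be computed directly from repeated readings; the keratometry values below are synthetic, not study data:

```python
# Illustrative repeatability statistics: within-subject SD (Sw), test-retest
# repeatability (2.77 * Sw), and CoV (Sw over the grand mean).
# Three repeated keratometry readings (diopters) per subject, synthetic.

def within_subject_sd(repeats_per_subject):
    """Sw: square root of the mean within-subject variance."""
    variances = []
    for reps in repeats_per_subject:
        m = sum(reps) / len(reps)
        variances.append(sum((r - m) ** 2 for r in reps) / (len(reps) - 1))
    return (sum(variances) / len(variances)) ** 0.5

data = [[43.1, 43.2, 43.1],
        [44.0, 44.1, 44.0],
        [42.9, 42.8, 42.9]]

sw = within_subject_sd(data)                 # ~0.058 D
repeatability = 2.77 * sw                    # ~0.16 D, cf. the 0.70 D bound
grand_mean = sum(sum(r) for r in data) / sum(len(r) for r in data)
cov_percent = 100.0 * sw / grand_mean        # ~0.13%, cf. the 0.5% bound
```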

  20. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Études Spatiales (CNES), the French Space Agency, to make precise radar altimeter measurements of the ocean surface. After the remarkably successful 13 years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits will be discussed. The latest orbits with 1.5 cm RMS radial accuracy represent a significant improvement over the 2.0-cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  1. Evaluation of Accuracy in Kinematic GPS Analyses Using a Precision Roving Antenna Platform

    NASA Astrophysics Data System (ADS)

    Miura, S.; Sweeney, A.; Fujimoto, H.; Osaki, H.; Kawai, E.; Ichikawa, R.; Kondo, T.; Osada, Y.; Chadwell, C. D.

    2002-12-01

Most tectonic plate boundaries and seismogenic zones of interplate earthquakes exist beneath the ocean, and our knowledge of interplate coupling and of the generation processes of those earthquakes remains limited. Seafloor geodesy will consequently play a very important role in improving our understanding of the physical processes near plate boundaries. Seafloor positioning using a GPS/Acoustic technique is one potential method to detect displacement occurring at the ocean bottom. The accuracy of the technique depends on two parts: acoustic ranging in seawater, and kinematic GPS (KGPS) analysis. The accuracy of KGPS has been evaluated in the following ways: 1) Static test: First, we carried out an experiment to confirm the capability of the KGPS analysis using GIPSY/OASIS-II for a long baseline of about 310 km. We used two GPS stations on land, one as a reference station in Sendai, and the other in Tokyo as a rover, whose coordinates can vary from epoch to epoch. This baseline length is required for our project because the farthest seafloor transponder array is 280 km east of the nearest coastal GPS station. The 1 cm stability of the KGPS solution was achieved in the horizontal components of the 310-km baseline over the course of one day. The vertical component showed fluctuation, probably due to parameters unmodeled in the analysis such as multipath and/or tropospheric delay. 2) Sea surface experiment: During cruise KT01-11 of the R/V Tansei-maru, Ocean Research Institute (ORI), University of Tokyo, around the Japan Trench in late July 2001, we deployed three precision acoustic transponders on both the Pacific plate (280 km from the coast, depth around 5450 m) and the landward slope (110 km from the coast, depth around 1600 m). We used a surface buoy with 3 GPS antennas, a motion sensor, a hydrophone, and a computer for data acquisition and control to make combined GPS/Acoustic observations. The buoy was towed about 80 m away from the R/V to reduce the impact of ship

  2. A comprehensive assessment of RNA-seq accuracy, reproducibility and information content by the Sequencing Quality Control consortium

    PubMed Central

    2014-01-01

We present primary results from the Sequencing Quality Control (SEQC) project, coordinated by the United States Food and Drug Administration. Examining Illumina HiSeq, Life Technologies SOLiD and Roche 454 platforms at multiple laboratory sites using reference RNA samples with built-in controls, we assess RNA sequencing (RNA-seq) performance for junction discovery and differential expression profiling and compare it to microarray and quantitative PCR (qPCR) data using complementary metrics. At all sequencing depths, we discover unannotated exon-exon junctions, with >80% validated by qPCR. We find that measurements of relative expression are accurate and reproducible across sites and platforms if specific filters are used. In contrast, RNA-seq and microarrays do not provide accurate absolute measurements, and gene-specific biases are observed for all examined platforms, including qPCR. Measurement performance depends on the platform and data analysis pipeline, and variation is large for transcript-level profiling. The complete SEQC data sets, comprising >100 billion reads (10 Tb), provide unique resources for evaluating RNA-seq analyses for clinical and regulatory settings. PMID:25150838

  4. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823
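The Bland-Altman analysis used in this study reduces to the mean difference between methods ± 1.96 times the SD of the differences. A sketch; the paired measurements (mm) are illustrative, not study data:

```python
# Illustrative Bland-Altman 95% limits of agreement: mean difference
# between two methods +/- 1.96 * SD of the differences.

def bland_altman(a, b):
    """Return (mean difference, lower LoA, upper LoA) for paired data."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    sd_d = (sum((x - mean_d) ** 2 for x in d) / (n - 1)) ** 0.5
    return mean_d, mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d

plaster = [35.2, 40.1, 28.7, 33.3, 41.0]   # plaster-model measurements, mm
put_rp  = [35.4, 40.3, 28.6, 33.6, 41.1]   # PUT (subtractive RP) measurements

bias, loa_low, loa_high = bland_altman(plaster, put_rp)
```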

  5. Sensitivity Analysis for Characterizing the Accuracy and Precision of JEM/SMILES Mesospheric O3

    NASA Astrophysics Data System (ADS)

    Esmaeili Mahani, M.; Baron, P.; Kasai, Y.; Murata, I.; Kasaba, Y.

    2011-12-01

The main purpose of this study is to evaluate the Superconducting sub-Millimeter Limb Emission Sounder (SMILES) measurements of mesospheric ozone, O3. As the first step, the error due to the impact of Mesospheric Temperature Inversions (MTIs) on ozone retrieval has been determined. The impacts of other parameters, such as pressure variability and solar events, on mesospheric O3 will also be investigated. Ozone is known to be important because the stratospheric O3 layer protects life on Earth by absorbing harmful UV radiation. In the mesosphere, however, O3 chemistry can be studied in relative isolation, free of heterogeneous chemistry and dynamical variations, because of the short lifetime of O3 in this region. Mesospheric ozone is produced by the photo-dissociation of O2 and the subsequent reaction of O with O2. Diurnal and semi-diurnal variations of mesospheric ozone are associated with variations in solar activity. The amplitude of the diurnal variation increases from a few percent at an altitude of 50 km to about 80 percent at 70 km. Despite the apparent simplicity of this situation, significant disagreements exist between predictions from existing models and observations, which need to be resolved. SMILES is a highly sensitive radiometer, with precision ranging from a few percent to several tens of percent from the upper troposphere to the mesosphere. SMILES was developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT) and is mounted on the Japanese Experiment Module (JEM) of the International Space Station (ISS). SMILES successfully measured the vertical distributions and diurnal variations of various atmospheric species in the latitude range 38°S to 65°N from October 2009 to April 2010. 
A sensitivity analysis is being conducted to investigate the expected precision and accuracy of the mesospheric O3 profiles (from 50 to 90 km height) due to the impact of Mesospheric Temperature

  6. Diffusion-weighted MRI of breast lesions: a prospective clinical investigation of the quantitative imaging biomarker characteristics of reproducibility, repeatability, and diagnostic accuracy.

    PubMed

    Spick, Claudio; Bickel, Hubert; Pinker, Katja; Bernathova, Maria; Kapetas, Panagiotis; Woitek, Ramona; Clauser, Paola; Polanec, Stephan H; Rudas, Margaretha; Bartsch, Rupert; Helbich, Thomas H; Baltzer, Pascal A

    2016-10-01

Diffusion-weighted MRI (DWI) provides insights into tissue microstructure by visualization and quantification of water diffusivity. Quantitative evaluation of the apparent diffusion coefficient (ADC) obtained from DWI has been proven helpful for differentiating between malignant and benign breast lesions, for cancer subtyping in breast cancer patients, and for prediction of response to neoadjuvant chemotherapy. However, to further establish DWI of breast lesions it is important to evaluate the quantitative imaging biomarker (QIB) characteristics of reproducibility, repeatability, and diagnostic accuracy. In this intra-individual prospective clinical study, 40 consecutive patients with suspicious findings, scheduled for biopsy, underwent an identical 3T MRI protocol of the breast on two consecutive days (>24 h apart). Mean ADC of target lesions was assessed (two independent readers) in four separate sessions. Reproducibility, repeatability, and diagnostic accuracy between examinations (E1, E2), readers (R1, R2), and measurements (M1, M2) were assessed with intraclass correlation coefficients (ICCs), coefficients of variation (CVs), Bland-Altman plots, and receiver operating characteristic (ROC) analysis with calculation of the area under the ROC curve (AUC). The standard of reference was either histopathology (n = 38) or imaging follow-up of up to 24 months (n = 2). Eighty breast MRI examinations (median E1-E2 interval, 2 ± 1.7 days, 95% confidence interval (CI) 1-2 days, range 1-11 days) in 40 patients (mean age 56, standard deviation (SD) ±14) were evaluated. In 55 target lesions (mean size 25.2 ± 20.8 (SD) mm, range 6-106 mm), mean ADC values were significantly (P < 0.0001) higher in benign (1.38 × 10⁻³ mm²/s, 95% CI 1.27-1.49) compared with malignant (0.86 × 10⁻³ mm²/s, 95% CI 0.81-0.91) lesions. Reproducibility and repeatability showed high agreement for repeated examinations, readers, and measurements (all ICCs >0.9, CVs 3
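A repeatability coefficient of variation of the kind reported here can be computed from paired session measurements. This is a generic sketch of the standard within-subject CV, not the study's analysis code, and the ADC values are hypothetical:

```python
import numpy as np

def repeatability_cv(m1, m2):
    """Within-subject coefficient of variation (%) from paired repeated
    measurements (e.g. two scan sessions per lesion)."""
    m1, m2 = np.asarray(m1, dtype=float), np.asarray(m2, dtype=float)
    # within-subject SD pooled from paired differences
    sd_within = np.sqrt(((m1 - m2) ** 2 / 2.0).mean())
    return 100.0 * sd_within / ((m1 + m2) / 2.0).mean()

# hypothetical ADC values (x10^-3 mm^2/s) for four lesions, two sessions
session1 = [1.38, 0.86, 1.20, 0.95]
session2 = [1.40, 0.84, 1.18, 0.97]
cv_percent = repeatability_cv(session1, session2)
```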

  7. High-precision, high-accuracy ultralong-range swept-source optical coherence tomography using vertical cavity surface emitting laser light source.

    PubMed

    Grulkowski, Ireneusz; Liu, Jonathan J; Potsaid, Benjamin; Jayaraman, Vijaysekhar; Jiang, James; Fujimoto, James G; Cable, Alex E

    2013-03-01

    We demonstrate ultralong-range swept-source optical coherence tomography (OCT) imaging using vertical cavity surface emitting laser technology. The ability to adjust laser parameters and high-speed acquisition enables imaging ranges from a few centimeters up to meters using the same instrument. We discuss the challenges of long-range OCT imaging. In vivo human-eye imaging and optical component characterization are presented. The precision and accuracy of OCT-based measurements are assessed and are important for ocular biometry and reproducible intraocular distance measurement before cataract surgery. Additionally, meter-range measurement of fiber length and multicentimeter-range imaging are reported. 3D visualization supports a class of industrial imaging applications of OCT.

  8. Accuracy and precision of total mixed rations fed on commercial dairy farms.

    PubMed

    Sova, A D; LeBlanc, S J; McBride, B W; DeVries, T J

    2014-01-01

Despite the significant time and effort spent formulating total mixed rations (TMR), it is evident that the ration delivered by the producer and that consumed by the cow may not accurately reflect that originally formulated. The objectives of this study were to (1) determine how TMR fed agrees with or differs from TMR formulation (accuracy), (2) determine daily variability in physical and chemical characteristics of TMR delivered (precision), and (3) investigate the relationship between daily variability in ration characteristics and group-average measures of productivity [dry matter intake (DMI), milk yield, milk components, efficiency, and feed sorting] on commercial dairy farms. Twenty-two commercial freestall herds were visited for 7 consecutive days in both summer and winter months. Fresh and refusal feed samples were collected daily to assess particle size distribution, dry matter, and chemical composition. Milk test data, including yield, fat, and protein, were collected from a coinciding Dairy Herd Improvement test. Multivariable mixed-effect regression models were used to analyze associations between productivity measures and daily ration variability, measured as coefficient of variation (CV) over 7 d. The average TMR [crude protein = 16.5%, net energy for lactation (NEL) = 1.7 Mcal/kg, nonfiber carbohydrates = 41.3%, total digestible nutrients = 73.3%, neutral detergent fiber = 31.3%, acid detergent fiber = 20.5%, Ca = 0.92%, P = 0.42%, Mg = 0.35%, K = 1.45%, Na = 0.41%] delivered exceeded TMR formulation for NEL (+0.05 Mcal/kg), nonfiber carbohydrates (+1.2%), acid detergent fiber (+0.7%), Ca (+0.08%), P (+0.02%), Mg (+0.02%), and K (+0.04%) and underfed crude protein (-0.4%), neutral detergent fiber (-0.6%), and Na (-0.1%). Dietary measures with high day-to-day CV were average feed refusal rate (CV = 74%), percent long particles (CV = 16%), percent medium particles (CV = 7.7%), percent short particles (CV = 6.1%), percent fine particles (CV = 13%), Ca (CV = 7
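The day-to-day precision metric used throughout this record is the coefficient of variation over a 7-day series. A minimal sketch, with a hypothetical week of "percent long particles" readings rather than the study's data:

```python
import numpy as np

def daily_cv(values):
    """Coefficient of variation (%) of a series of daily ration measurements."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# hypothetical % long particles over 7 consecutive days
long_particles = [8.1, 9.4, 7.6, 10.2, 8.8, 7.9, 9.1]
cv = daily_cv(long_particles)
```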

  9. Using magnetic susceptibility to facilitate more rapid, reproducible and precise delineation of hydric soils in the midwestern USA

    USGS Publications Warehouse

    Grimley, D.A.; Arruda, N.K.; Bramstedt, M.W.

    2004-01-01

Standard field indicators, currently used for hydric soil delineations [USDA-NRCS, 1998. Field indicators of hydric soils in the United States, Version 4.0. In: G.W. Hurt et al. (Ed.), United States Department of Agriculture-NRCS, Fort Worth, TX], are useful, but in some cases, they can be subjective, difficult to recognize, or time consuming to assess. Magnetic susceptibility (MS) measurements, acquired rapidly in the field with a portable meter, have great potential to help soil scientists delineate and map areas of hydric soils more precisely and objectively. At five sites in Illinois (from 5 to 15 ha in area) with contrasting soil types and glacial histories, the MS values of surface soils were measured along transects, and afterwards mapped and contoured. The MS values were found to be consistently higher in well-drained soils and lower in hydric soils, reflecting anaerobic deterioration of both detrital magnetite and soil-formed ferrimagnetics. At each site, volumetric MS values were statistically compared to field indicators to determine a critical MS value for hydric soil delineation. Such critical values range between 22 × 10⁻⁵ and 33 × 10⁻⁵ SI in silty loessal or alluvial soils in Illinois, but are as high as 61 × 10⁻⁵ SI at a site with fine sandy soil. A higher magnetite content and slower dissolution rate in sandy soils may explain the difference. Among sites with silty parent material, the lowest critical value (22 × 10⁻⁵ SI) occurs in soil with low pH (4.5-5.5) since acidic conditions are less favorable to ferrimagnetic mineral neoformation and enhance magnetite dissolution. Because of their sensitivity to parent material properties and soil pH, critical MS values must be determined on a site-specific basis. The MS of studied soil samples (0-5 cm depth) is mainly controlled by neoformed ultrafine ferrimagnetics and detrital magnetite concentrations, with a minor contribution from anthropogenic fly ash. 
Neoformed ferrimagnetics are present in all samples
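The delineation rule described in this record reduces to comparing each transect reading against a site-specific critical MS value. The helper and the readings below are hypothetical illustrations of that rule:

```python
def classify_hydric(ms_values, critical_ms):
    """Flag transect points as hydric where volumetric magnetic susceptibility
    (in 10^-5 SI units) falls at or below the site-specific critical value."""
    return [ms <= critical_ms for ms in ms_values]

# hypothetical readings at a silty, low-pH site (critical value 22 x 10^-5 SI)
flags = classify_hydric([15.0, 20.5, 22.0, 27.8, 40.1], 22.0)
```

The key caveat from the abstract carries into any such script: the critical value must be recalibrated per site against standard field indicators.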

  10. Accuracy and precisions of water quality parameters retrieved from particle swarm optimisation in a sub-tropical lake

    NASA Astrophysics Data System (ADS)

    Campbell, Glenn; Phinn, Stuart R.

    2009-09-01

Optical remote sensing has been used to map and monitor water quality parameters such as the concentrations of hydrosols (chlorophyll and other pigments, total suspended material, and coloured dissolved organic matter). In the inversion/optimisation approach, a forward model is used to simulate the water reflectance spectra from a set of parameters, and the set that gives the closest match is selected as the solution. The accuracy of the hydrosol retrieval is dependent on an efficient search of the solution space and the reliability of the similarity measure. In this paper, particle swarm optimisation (PSO) was used to search the solution space, and seven similarity measures were trialled. The accuracy and precision of this method depend on the inherent noise in the spectral bands of the sensor being employed, as well as the radiometric corrections applied to images to calculate the subsurface reflectance. Using the Hydrolight® radiative transfer model and typical hydrosol concentrations from Lake Wivenhoe, Australia, MERIS reflectance spectra were simulated. The accuracy and precision of hydrosol concentrations derived from each similarity measure were evaluated after errors associated with the air-water interface correction, atmospheric correction and the IOP measurement were modelled and applied to the simulated reflectance spectra. The use of band-specific empirically estimated values for the anisotropy value in the forward model improved the accuracy of hydrosol retrieval. The results of this study will be used to improve an algorithm for the remote sensing of water quality for freshwater impoundments.
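The search step of the inversion/optimisation approach is standard PSO. A minimal, generic sketch follows, not the authors' implementation; a toy two-parameter quadratic objective stands in for the spectral-matching similarity measure, and all parameter values are assumptions:

```python
import random

def pso(f, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimise f over box bounds with a basic particle swarm."""
    dim = len(bounds)
    rnd = random.Random(0)  # seeded for reproducibility
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                # inertia + cognitive pull toward pbest + social pull toward gbest
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy "spectral mismatch" objective with known optimum (chl, tsm) = (3.0, 12.0)
obj = lambda p: (p[0] - 3.0) ** 2 + (p[1] - 12.0) ** 2
best, best_val = pso(obj, [(0.0, 10.0), (0.0, 50.0)])
```

In the paper's setting, `obj` would instead run the forward reflectance model and return one of the seven trialled similarity measures against the observed spectrum.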

  11. Nano-accuracy measurements and the surface profiler by use of Monolithic Hollow Penta-Prism for precision mirror testing

    NASA Astrophysics Data System (ADS)

    Qian, Shinan; Wayne, Lewis; Idir, Mourad

    2014-09-01

We developed a Monolithic Hollow Penta-Prism Long Trace Profiler-NOM (MHPP-LTP-NOM) to attain nano-accuracy in testing plane and near-plane mirrors. A newly developed Monolithic Hollow Penta-Prism (MHPP), combining the advantages of the PPLTP and the ELCOMAT autocollimator of the Nano-Optic-Measuring Machine (NOM), is used to enhance the accuracy and stability of our measurements. Our precise system-alignment method, using a newly developed CCD position-monitor system (PMS), assured significant thermal stability and, together with our optimized noise-reduction analysis method, ensured nano-accuracy measurements. Herein we report our test results; all errors are about 60 nrad rms or less in tests of plane and near-plane mirrors.

  12. Optimizing the accuracy and precision of the single-pulse Laue technique for synchrotron photo-crystallography

    PubMed Central

    Kamiński, Radosław; Graber, Timothy; Benedict, Jason B.; Henning, Robert; Chen, Yu-Sheng; Scheins, Stephan; Messerschmidt, Marc; Coppens, Philip

    2010-01-01

    The accuracy that can be achieved in single-pulse pump-probe Laue experiments is discussed. It is shown that with careful tuning of the experimental conditions a reproducibility of the intensity ratios of equivalent intensities obtained in different measurements of 3–4% can be achieved. The single-pulse experiments maximize the time resolution that can be achieved and, unlike stroboscopic techniques in which the pump-probe cycle is rapidly repeated, minimize the temperature increase due to the laser exposure of the sample. PMID:20567080

  13. Basic investigations on the performance of a normoxic polymer gel with tetrakis-hydroxy-methyl-phosphonium chloride as an oxygen scavenger: Reproducibility, accuracy, stability, and dose rate dependence

    SciTech Connect

    Bayreder, Christian; Georg, Dietmar; Moser, Ewald; Berg, Andreas

    2006-07-15

Magnetic resonance (MR)-based polymer gel dosimetry using normoxic polymer gels represents a new dosimetric method specially suited for high-resolution three-dimensional dosimetric problems. The aim of this study was to investigate the dose response with regard to stability, accuracy, reproducibility, and dose rate dependence. Tetrakis-hydroxy-methyl-phosphonium chloride (THPC) is used as an oxygen scavenger, and methacrylic acid as a monomer. Accuracy, reproducibility, and dose resolution were determined for MR protocols at low spatial resolution (typical for clinical scanners) and for medium- and microimaging-resolution protocols at three different dose levels. The dose-response stability and preirradiation-induced variations in R2, related to the time interval between preparation and irradiation of the polymer gel, were investigated. Postirradiation stability of the polymer gel was also considered. These experiments were performed using a ⁶⁰Co beam (E = 1.2 MV) in a water phantom. Moreover, we investigated the dose rate dependence in the low, medium, and saturation dose regions of the normoxic polymer gel using a linear accelerator at a photon energy of 25 MV. MR scanning was performed on a 3 T whole body scanner (MEDSPEC 30/80, BRUKER BIOSPIN, Ettlingen, Germany) using several coils and different gradient systems adapted to the spatial resolution investigated. For T2-parameter-selective imaging and determination of the relaxation rate R2 = 1/T2, a multiple spin echo sequence with 20 equidistant echoes was used. With regard to preirradiation-induced variations, R2 increases significantly with increasing time interval between polymer gel preparation and irradiation. Only a slight increase in R2 can be observed when varying the postirradiation time alone. The dose reproducibility at voxel volumes of about 1.4×1.4×2 mm³ is better than 2%. The accuracy strongly depends on the calibration curve. 
THPC represents a very effective oxygen scavenger in

  14. A high-precision Jacob's staff with improved spatial accuracy and laser sighting capability

    NASA Astrophysics Data System (ADS)

    Patacci, Marco

    2016-04-01

    A new Jacob's staff design incorporating a 3D positioning stage and a laser sighting stage is described. The first combines a compass and a circular spirit level on a movable bracket and the second introduces a laser able to slide vertically and rotate on a plane parallel to bedding. The new design allows greater precision in stratigraphic thickness measurement while restricting the cost and maintaining speed of measurement to levels similar to those of a traditional Jacob's staff. Greater precision is achieved as a result of: a) improved 3D positioning of the rod through the use of the integrated compass and spirit level holder; b) more accurate sighting of geological surfaces by tracing with height adjustable rotatable laser; c) reduced error when shifting the trace of the log laterally (i.e. away from the dip direction) within the trace of the laser plane, and d) improved measurement of bedding dip and direction necessary to orientate the Jacob's staff, using the rotatable laser. The new laser holder design can also be used to verify parallelism of a geological surface with structural dip by creating a visual planar datum in the field and thus allowing determination of surfaces which cut the bedding at an angle (e.g., clinoforms, levees, erosion surfaces, amalgamation surfaces, etc.). Stratigraphic thickness measurements and estimates of measurement uncertainty are valuable to many applications of sedimentology and stratigraphy at different scales (e.g., bed statistics, reconstruction of palaeotopographies, depositional processes at bed scale, architectural element analysis), especially when a quantitative approach is applied to the analysis of the data; the ability to collect larger data sets with improved precision will increase the quality of such studies.
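Thickness measurement with a Jacob's staff ultimately reduces to simple trigonometry between the rod orientation and bedding. As a hypothetical illustration (the standard vertical-rod relation, not a formula taken from this paper):

```python
import math

def thickness_vertical_rod(rod_length_m, dip_deg):
    """Stratigraphic thickness intercepted by a vertically held rod crossing
    beds dipping at dip_deg: t = L * cos(dip)."""
    return rod_length_m * math.cos(math.radians(dip_deg))

# a 1.5 m vertical rod crossing beds dipping 20 degrees
t = thickness_vertical_rod(1.5, 20.0)
```

The design improvements described above (integrated compass/spirit level, rotatable laser) reduce the error in the dip and orientation inputs to exactly this kind of calculation.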

  15. Note: electronic circuit for two-way time transfer via a single coaxial cable with picosecond accuracy and precision.

    PubMed

    Prochazka, Ivan; Kodet, Jan; Panek, Petr

    2012-11-01

We have designed, constructed, and tested the overall performance of an electronic circuit for two-way time transfer between two timing devices over modest distances, with sub-picosecond precision and a systematic error of a few picoseconds. The circuit design enables time tagging of pulses of interest in parallel with comparison of the time scales of the two timing devices. The key timing parameters of the circuit are: the temperature coefficient of the delay is below 100 fs/K; the timing stability (time deviation) is better than 8 fs for averaging times from minutes to hours; the time transfer precision is sub-picosecond; and the time transfer accuracy is a few picoseconds.
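Two-way time transfer cancels the (assumed symmetric) path delay by exchanging timestamped signals in both directions. A minimal sketch of the classic offset computation, with hypothetical timestamps in nanoseconds:

```python
def two_way_offset(t1, t2, t3, t4):
    """Clock offset of B relative to A from a symmetric two-way exchange:
    A sends at t1 (A clock), B receives at t2 (B clock),
    B sends at t3 (B clock), A receives at t4 (A clock).
    Assumes equal propagation delay in both directions."""
    return ((t2 - t1) - (t4 - t3)) / 2.0

# hypothetical exchange: true offset 5 ns, one-way delay 100 ns
offset_ns = two_way_offset(0.0, 105.0, 200.0, 295.0)
```

Any asymmetry between the two path delays appears directly as a systematic error in the recovered offset, which is why the circuit's delay stability (fs/K) matters.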

  16. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study

    PubMed Central

Brodén, Cyrus; Olivecrona, Henrik; Maguire, Gerald Q.; Noz, Marilyn E.; Zeleznik, Michael P.; Sköldenberg, Olof

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting. PMID:27478832

  18. A Time Projection Chamber for High Accuracy and Precision Fission Cross-Section Measurements

    SciTech Connect

    T. Hill; K. Jewell; M. Heffner; D. Carter; M. Cunningham; V. Riot; J. Ruz; S. Sangiorgio; B. Seilhan; L. Snyder; D. M. Asner; S. Stave; G. Tatishvili; L. Wood; R. G. Baker; J. L. Klay; R. Kudo; S. Barrett; J. King; M. Leonard; W. Loveland; L. Yao; C. Brune; S. Grimes; N. Kornilov; T. N. Massey; J. Bundgaard; D. L. Duke; U. Greife; U. Hager; E. Burgett; J. Deaven; V. Kleinrath; C. McGrath; B. Wendt; N. Hertel; D. Isenhower; N. Pickle; H. Qu; S. Sharma; R. T. Thornton; D. Tovwell; R. S. Towell; S.

    2014-09-01

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC, inducing fission in the target. The 4π acceptance for fission fragments and complete charged particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This paper provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  19. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine.

    PubMed

    Castaneda, Christian; Nalley, Kip; Mannion, Ciaran; Bhattacharyya, Pritish; Blake, Patrick; Pecora, Andrew; Goy, Andre; Suh, K Stephen

    2015-01-01

    As research laboratories and clinics collaborate to achieve precision medicine, both communities are required to understand mandated electronic health/medical record (EHR/EMR) initiatives that will be fully implemented in all clinics in the United States by 2015. Stakeholders will need to evaluate current record keeping practices and optimize and standardize methodologies to capture nearly all information in digital format. Collaborative efforts from academic and industry sectors are crucial to achieving higher efficacy in patient care while minimizing costs. Currently existing digitized data and information are present in multiple formats and are largely unstructured. In the absence of a universally accepted management system, departments and institutions continue to generate silos of information. As a result, invaluable and newly discovered knowledge is difficult to access. To accelerate biomedical research and reduce healthcare costs, clinical and bioinformatics systems must employ common data elements to create structured annotation forms enabling laboratories and clinics to capture sharable data in real time. Conversion of these datasets to knowable information should be a routine institutionalized process. New scientific knowledge and clinical discoveries can be shared via integrated knowledge environments defined by flexible data models and extensive use of standards, ontologies, vocabularies, and thesauri. In the clinical setting, aggregated knowledge must be displayed in user-friendly formats so that physicians, non-technical laboratory personnel, nurses, data/research coordinators, and end-users can enter data, access information, and understand the output. The effort to connect astronomical numbers of data points, including '-omics'-based molecular data, individual genome sequences, experimental data, patient clinical phenotypes, and follow-up data is a monumental task. 
Roadblocks to this vision of integration and interoperability include ethical, legal

  20. Precise and Continuous Time and Frequency Synchronisation at the 5×10-19 Accuracy Level

    PubMed Central

    Wang, B.; Gao, C.; Chen, W. L.; Miao, J.; Zhu, X.; Bai, Y.; Zhang, J. W.; Feng, Y. Y.; Li, T. C.; Wang, L. J.

    2012-01-01

    The synchronisation of time and frequency between remote locations is crucial for many important applications. Conventional time and frequency dissemination often makes use of satellite links. Recently, the communication fibre network has become an attractive option for long-distance time and frequency dissemination. Here, we demonstrate accurate frequency transfer and time synchronisation via an 80 km fibre link between Tsinghua University (THU) and the National Institute of Metrology of China (NIM). Using a 9.1 GHz microwave modulation and a timing signal carried by two continuous-wave lasers and transferred across the same 80 km urban fibre link, frequency transfer stability at the level of 5×10−19/day was achieved. Time synchronisation at the 50 ps precision level was also demonstrated. The system is reliable and has operated continuously for several months. We further discuss the feasibility of using such frequency and time transfer over 1000 km and its applications to long-baseline radio astronomy. PMID:22870385
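Frequency-transfer stability figures such as 5×10⁻¹⁹/day are conventionally characterised with Allan statistics. A minimal non-overlapping Allan deviation sketch follows (a generic analysis, assumed rather than taken from the authors' processing chain), checked on a synthetic alternating series:

```python
import numpy as np

def allan_deviation(freq_fractional, tau_samples):
    """Non-overlapping Allan deviation of fractional-frequency data,
    averaged in consecutive groups of tau_samples points."""
    y = np.asarray(freq_fractional, dtype=float)
    n = len(y) // tau_samples
    # average within each tau-long group, then difference adjacent groups
    means = y[: n * tau_samples].reshape(n, tau_samples).mean(axis=1)
    d = np.diff(means)
    return float(np.sqrt(0.5 * (d ** 2).mean()))

# synthetic alternating series: sqrt(2) at tau = 1 sample, 0 at tau = 2
y = [1.0, -1.0] * 8
adev1 = allan_deviation(y, 1)
adev2 = allan_deviation(y, 2)
```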

  1. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

Video-based eye tracking relies on locating pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts up to several degrees in visual angle because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of the video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16 s) and large (∼4 min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task. PMID:25578924
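"Regressing out" pupil size, as described here, amounts to removing the component of the gaze signal linearly predicted by pupil size. A minimal least-squares sketch, verified on synthetic data where the gaze shift is entirely pupil-driven:

```python
import numpy as np

def regress_out(gaze, pupil):
    """Return gaze positions with the component linearly predicted by
    pupil size (slope + intercept, least squares) removed."""
    X = np.column_stack([np.ones_like(pupil), pupil])
    beta, *_ = np.linalg.lstsq(X, gaze, rcond=None)
    return gaze - X @ beta

# synthetic check: gaze artifact entirely driven by pupil size
pupil = np.array([3.0, 4.0, 5.0, 6.0])
gaze = 2.0 * pupil + 1.0
residual = regress_out(gaze, pupil)
```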

  3. Accuracy and precision of cone beam computed tomography in periodontal defects measurement (systematic review).

    PubMed

    Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny

    2016-01-01

A systematic review of the literature was made to assess the extent of accuracy of cone beam computed tomography (CBCT) as a tool for measurement of alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and criticized. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their average CBCT measurement error ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Under the limitation of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation, whether over- or underestimation. However, we should emphasize that the evidence for this data is not strong. PMID:27563194

  4. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment.

    PubMed

    Vu, An T; Phillips, Jeffrey S; Kay, Kendrick; Phillips, Matthew E; Johnson, Matthew R; Shinkareva, Svetlana V; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2016-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast-to-noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms. PMID:27686111

  8. Accuracy and Reproducibility of HER2 Status in Breast Cancer Using Immunohistochemistry: A Quality Control Study in Tuscany Evaluating the Impact of Updated 2013 ASCO/CAP Recommendations.

    PubMed

    Bianchi, S; Caini, S; Paglierani, M; Saieva, C; Vezzosi, V; Baroni, G; Simoni, A; Palli, D

    2015-04-01

    The correct identification of HER2-positive cases is a key point in providing the most appropriate therapy to breast cancer (BC) patients. We aimed at investigating the reproducibility and accuracy of HER2 expression by immunohistochemistry (IHC) in a selected series of 35 invasive BC cases across the pathological anatomy laboratories in Tuscany, Italy. Unstained sections of each BC case were sent to 12 participating laboratories. Pathologists were required to score according to the Food and Drug Administration (FDA) four-tier scoring system (0, 1+, 2+, 3+). Sixteen cases were HER2 non-amplified and 19 were amplified on fluorescence in situ hybridization. Among 192 readings of the 16 HER2 non-amplified samples, 153 (79.7%) were coded as 0 or 1+, 39 (20.3%) were 2+, and none was 3+ (false positive rate 0%). Among 228 readings of the 19 HER2 amplified samples, 56 (24.6%) were scored 0 or 1+, 79 (34.6%) were 2+, and 93 (40.8%) were 3+. The average sensitivity was 75.4%, ranging between 47% and 100%, and the overall false negative rate was 24.6%. Participation of pathological anatomy laboratories performing HER2 testing by IHC in external quality assurance programs should be made mandatory, as the system is able to identify laboratories with suboptimal performance that may need technical advice. Updated 2013 ASCO/CAP recommendations should be adopted, as widening of the IHC 2+ "equivocal" category would improve the overall accuracy of HER2 testing: more cases would be classified in this category and, consequently, tested with an in situ hybridisation method. PMID:25367072
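The per-reading rates this record reports can be sketched as below. Treating 2+/3+ readings of amplified samples as detections and 3+ readings of non-amplified samples as false positives is an illustrative simplification of the study's metrics, and the function name is hypothetical:

```python
def reading_rates(amplified_scores, non_amplified_scores):
    """Per-reading sensitivity and false-positive rate from FDA four-tier
    IHC scores (0, 1+, 2+, 3+), encoded here as integers 0-3."""
    sens = sum(s >= 2 for s in amplified_scores) / len(amplified_scores)
    fpr = sum(s == 3 for s in non_amplified_scores) / len(non_amplified_scores)
    return sens, fpr
```

With the counts quoted above (79 + 93 of 228 amplified readings scored 2+/3+, and no 3+ among 192 non-amplified readings), this reproduces the reported 75.4% sensitivity and 0% false-positive rate.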

  9. Optic nerve head analyser and Heidelberg retina tomograph: accuracy and reproducibility of topographic measurements in a model eye and in volunteers.

    PubMed Central

    Janknecht, P; Funk, J

    1994-01-01

    The accuracy and reproducibility of the optic nerve head analyser (ONHA) and the Heidelberg retina tomograph (HRT) were compared and the performance of the HRT in measuring fundus elevations was evaluated. The coefficient of variation of three repeated measurements in a model eye and in volunteers and the relative error in a model eye was calculated. With ONHA measurements the pooled coefficient of variation in volunteers was 9.3% in measuring cup areas and 8.4% in measuring the cup volume. In a model eye the pooled coefficient of variation was 7.6% for the parameter 'cup area' and 9.9% for the parameter 'cup volume'. The pooled relative error in the model eye was 6.6% for the parameter 'cup area' and 5.1% for the parameter 'cup volume'. With HRT measurements in volunteers the pooled coefficient of variation of both the parameters 'volume below contour' and 'volume below surface' was 6.9%. In the model eye the pooled coefficient of variation was 2.4% for the 'volume below contour' and 4.1% for the parameter 'volume below surface'. The pooled relative error in the model eye was 11.3% for the 'volume below contour' and 11% for the 'volume below surface'. The pooled relative error in measuring retinal elevations in the model eye was 3.8%. The coefficient of variation was 3.5%. The accuracies of the HRT and ONHA were similar. However, as the ONHA 'cup volume' is unreliable in patients because of the design of the ONHA whereas the HRT volume parameters are reliable it seems reasonable to assume that the HRT is superior to the ONHA. Only the HRT is capable of quantifying retinal elevations. PMID:7803352
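The two repeatability metrics this record relies on can be sketched in a few lines; the function names are illustrative, not taken from the study:

```python
import statistics

def coefficient_of_variation(values):
    """Precision of repeated measurements: SD divided by the mean, in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def relative_error(measured, true_value):
    """Accuracy against a known value (e.g. in a model eye), in percent."""
    return 100.0 * abs(measured - true_value) / true_value
```

The coefficient of variation captures scatter across repeats (reproducibility), while relative error needs a ground truth, which is why the model eye is used for accuracy.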

  10. A Method of Determining Accuracy and Precision for Dosimeter Systems Using Accreditation Data

    SciTech Connect

    Rick Cummings and John Flood

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively.
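The Model II one-way ANOVA approach described above can be sketched with the classic expected-mean-squares estimator. This is an illustrative simplification assuming equal group sizes, not the authors' implementation:

```python
import statistics

def anova_variance_components(groups):
    """Model II one-way ANOVA on dose-response ratios grouped by test session.

    groups: list of equal-sized lists of (reported dose / delivered dose) ratios.
    Returns (within-group variance, between-group variance component).
    """
    n = len(groups[0])                       # replicates per group
    means = [statistics.mean(g) for g in groups]
    grand = statistics.mean(means)
    ms_within = statistics.mean([statistics.variance(g) for g in groups])
    ms_between = n * sum((m - grand) ** 2 for m in means) / (len(groups) - 1)
    var_between = max((ms_between - ms_within) / n, 0.0)  # clamp at zero
    return ms_within, var_between
```

The total variance (within plus between components) is what an expanded uncertainty at the 95% confidence level would be built from.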

  12. Accuracy and Precision of Equine Gait Event Detection during Walking with Limb and Trunk Mounted Inertial Sensors

    PubMed Central

    Olsen, Emil; Andersen, Pia Haubro; Pfau, Thilo

    2012-01-01

    The increased variations of temporal gait events when pathology is present are good candidate features for objective diagnostic tests. We hypothesised that the gait events hoof-on/off and stance can be detected accurately and precisely using features from trunk and distal limb-mounted Inertial Measurement Units (IMUs). Four IMUs were mounted on the distal limb and five IMUs were attached to the skin over the dorsal spinous processes at the withers, fourth lumbar vertebrae and sacrum as well as left and right tuber coxae. IMU data were synchronised to a force plate array and a motion capture system. Accuracy (bias) and precision (SD of bias) was calculated to compare force plate and IMU timings for gait events. Data were collected from seven horses. One hundred and twenty three (123) front limb steps were analysed; hoof-on was detected with a bias (SD) of −7 (23) ms, hoof-off with 0.7 (37) ms and front limb stance with −0.02 (37) ms. A total of 119 hind limb steps were analysed; hoof-on was found with a bias (SD) of −4 (25) ms, hoof-off with 6 (21) ms and hind limb stance with 0.2 (28) ms. IMUs mounted on the distal limbs and sacrum can detect gait events accurately and precisely. PMID:22969392
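The accuracy/precision convention this record uses (bias and SD of bias between paired event timings) can be sketched as follows; the function name is illustrative:

```python
import statistics

def timing_agreement(imu_times_ms, forceplate_times_ms):
    """Accuracy (mean bias) and precision (SD of bias) of IMU-detected gait
    events against force-plate reference timings, paired per step."""
    diffs = [i - f for i, f in zip(imu_times_ms, forceplate_times_ms)]
    return statistics.mean(diffs), statistics.stdev(diffs)
```

A bias near zero with a small SD means the IMU events are both accurate and precise relative to the force plate.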

  13. Accuracy and Reproducibility of Right Ventricular Quantification in Patients with Pressure and Volume Overload Using Single-Beat Three-Dimensional Echocardiography

    PubMed Central

    Knight, Daniel S.; Grasso, Agata E.; Quail, Michael A.; Muthurangu, Vivek; Taylor, Andrew M.; Toumpanakis, Christos; Caplin, Martyn E.; Coghlan, J. Gerry; Davar, Joseph

    2015-01-01

    Background: The right ventricle is a complex structure that is challenging to quantify by two-dimensional (2D) echocardiography. Unlike disk summation three-dimensional (3D) echocardiography (3DE), single-beat 3DE can acquire large volumes at high volume rates in one cardiac cycle, avoiding stitching artifacts or long breath-holds. The aim of this study was to assess the accuracy and test-retest reproducibility of single-beat 3DE for quantifying right ventricular (RV) volumes in adult populations of acquired RV pressure or volume overload, namely, pulmonary hypertension (PH) and carcinoid heart disease, respectively. Three-dimensional and 2D echocardiographic indices were also compared for identifying RV dysfunction in PH. Methods: A prospective cross-sectional study was performed in 100 individuals who underwent 2D echocardiography, 3DE, and cardiac magnetic resonance imaging: 49 patients with PH, 20 with carcinoid heart disease, 11 with metastatic carcinoid tumors without cardiac involvement, and 20 healthy volunteers. Two operators performed test-retest acquisition and postprocessing for inter- and intraobserver reproducibility in 20 subjects. Results: RV single-beat 3DE was attainable in 96% of cases, with mean volume rates of 32 to 45 volumes/sec. Bland-Altman analysis of all subjects (presented as mean bias ± 95% limits of agreement) revealed good agreement for end-diastolic volume (−2.3 ± 27.4 mL) and end-systolic volume (5.2 ± 19.0 mL) measured by 3DE and cardiac magnetic resonance imaging, with a tendency to underestimate stroke volume (−7.5 ± 23.6 mL) and ejection fraction (−4.6 ± 13.8%) by 3DE. Subgroup analysis demonstrated a greater bias for volumetric underestimation, particularly in healthy volunteers (end-diastolic volume, −11.9 ± 18.0 mL; stroke volume, −11.2 ± 20.2 mL). Receiver operating characteristic curve analysis showed that 3DE-derived ejection fraction was significantly superior to 2D echocardiographic
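The Bland-Altman analysis used in this record can be sketched as follows; the function name is illustrative, and the 1.96·SD limits are the standard convention, not the study's code:

```python
import statistics

def bland_altman(method_a, method_b):
    """Bland-Altman agreement between paired measurements (e.g. 3DE vs CMR
    volumes): mean bias and 95% limits of agreement (bias ± 1.96 SD)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A bias near zero indicates accuracy; narrow limits of agreement indicate that the two methods can be used interchangeably.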

  14. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    PubMed

    Wells, Emma; Wolfe, Marlene K; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference–12.4% error compared to the reference method), then DPD dilution methods (2.4–19% error), then test strips (5.2–48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5–11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14–37 for test strips and $33–609 for titration. Given the
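The accuracy metric this record reports (percent error of a measured chlorine concentration against the reference) can be sketched in one line; the function name is illustrative:

```python
def percent_error(measured_pct, target_pct):
    """Accuracy of a chlorine test method, as percent error relative to the
    reference concentration (e.g. a measured 0.46% against a 0.5% target)."""
    return 100.0 * abs(measured_pct - target_pct) / target_pct
```

Running this over quintuplicate measurements of each solution, then averaging, yields a per-method error figure comparable to the ranges quoted above.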

  17. SU-E-J-03: Characterization of the Precision and Accuracy of a New, Preclinical, MRI-Guided Focused Ultrasound System for Image-Guided Interventions in Small-Bore, High-Field Magnets

    SciTech Connect

    Ellens, N; Farahani, K

    2015-06-15

    Purpose: MRI-guided focused ultrasound (MRgFUS) has many potential and realized applications including controlled heating and localized drug delivery. The development of many of these applications requires extensive preclinical work, much of it in small animal models. The goal of this study is to characterize the spatial targeting accuracy and reproducibility of a preclinical high field MRgFUS system for thermal ablation and drug delivery applications. Methods: The RK300 (FUS Instruments, Toronto, Canada) is a motorized, 2-axis FUS positioning system suitable for small bore (72 mm), high-field MRI systems. The accuracy of the system was assessed in three ways. First, the precision of the system was assessed by sonicating regular grids of 5 mm squares on polystyrene plates and comparing the resulting focal dimples to the intended pattern, thereby assessing the reproducibility and precision of the motion control alone. Second, the targeting accuracy was assessed by imaging a polystyrene plate with randomly drilled holes and replicating the hole pattern by sonicating the observed hole locations on intact polystyrene plates and comparing the results. Third, the practically realizable accuracy and precision were assessed by comparing the locations of transcranial, FUS-induced blood-brain-barrier disruption (BBBD) (observed through Gadolinium enhancement) to the intended targets in a retrospective analysis of animals sonicated for other experiments. Results: The evenly-spaced grids indicated that the precision was 0.11 ± 0.05 mm. When image-guidance was included by targeting random locations, the accuracy was 0.5 ± 0.2 mm. The effective accuracy in the four rodent brains assessed was 0.8 ± 0.6 mm. In all cases, the error appeared normally distributed (p<0.05) in both orthogonal axes, though the left/right error was systematically greater than the superior/inferior error. Conclusions: The targeting accuracy of this device is sub-millimeter, suitable for many
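The targeting-error comparison described above reduces to a Euclidean distance between intended and achieved focus locations in the positioning plane; the function name and 2-D formulation are illustrative assumptions:

```python
import math

def targeting_error(intended_xy, achieved_xy):
    """Euclidean distance (mm) between the intended sonication target and the
    achieved focus location, as (x, y) coordinates in the positioning plane."""
    dx = achieved_xy[0] - intended_xy[0]
    dy = achieved_xy[1] - intended_xy[1]
    return math.hypot(dx, dy)
```

Reporting the mean and SD of this distance over many targets gives figures of the form quoted in the Results (e.g. 0.5 ± 0.2 mm).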

  18. Accuracy and precision of minimally-invasive cardiac output monitoring in children: a systematic review and meta-analysis.

    PubMed

    Suehiro, Koichi; Joosten, Alexandre; Murphy, Linda Suk-Ling; Desebbe, Olivier; Alexander, Brenton; Kim, Sang-Hyun; Cannesson, Maxime

    2016-10-01

    Several minimally-invasive technologies are available for cardiac output (CO) measurement in children, but the accuracy and precision of these devices have not yet been evaluated in a systematic review and meta-analysis. We conducted a comprehensive search of the medical literature in PubMed, Cochrane Library of Clinical Trials, Scopus, and Web of Science from its inception to June 2014 assessing the accuracy and precision of all minimally-invasive CO monitoring systems used in children when compared with CO monitoring reference methods. Pooled mean bias, standard deviation, and mean percentage error of included studies were calculated using a random-effects model. The inter-study heterogeneity was also assessed using the I² statistic. A total of 20 studies (624 patients) were included. The overall random-effects pooled bias and mean percentage error were 0.13 ± 0.44 l min⁻¹ and 29.1%, respectively. Significant inter-study heterogeneity was detected (P < 0.0001, I² = 98.3%). In the sub-analysis by device, electrical cardiometry showed the smallest bias (−0.03 l min⁻¹) and lowest percentage error (23.6%). Significant residual heterogeneity remained after conducting sensitivity and subgroup analyses based on the various study characteristics. By meta-regression analysis, we found no independent effects of study characteristics on weighted mean difference between reference and tested methods. Although the pooled bias was small, the mean pooled percentage error was in the gray zone of clinical applicability. In the sub-group analysis, electrical cardiometry was the device that provided the most accurate measurement. However, a high heterogeneity between studies was found, likely due to a wide range of study characteristics. PMID:26315477
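A random-effects pooling of per-study biases, as used in this meta-analysis, can be sketched with the DerSimonian-Laird estimator. The function name is illustrative, and this is the textbook estimator rather than the authors' exact pipeline:

```python
def dersimonian_laird(biases, variances):
    """Random-effects pooled mean bias (DerSimonian-Laird).

    biases: per-study mean bias; variances: per-study variance of that bias.
    Returns (pooled bias, between-study variance tau^2).
    """
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * b for wi, b in zip(w, biases)) / sum(w)
    q = sum(wi * (b - fixed) ** 2 for wi, b in zip(w, biases))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max((q - (len(biases) - 1)) / c, 0.0)           # clamp at zero
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * b for wi, b in zip(w_star, biases)) / sum(w_star)
    return pooled, tau2
```

When Q exceeds its degrees of freedom (as the reported I² = 98.3% implies here), tau² is positive and the random-effects weights down-weight precise but discordant studies.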

  19. Precision and accuracy of manual water-level measurements taken in the Yucca Mountain area, Nye County, Nevada, 1988-90

    USGS Publications Warehouse

    Boucher, M.S.

    1994-01-01

    Water-level measurements have been made in deep boreholes in the Yucca Mountain area, Nye County, Nevada, since 1983 in support of the U.S. Department of Energy's Yucca Mountain Project, which is an evaluation of the area to determine its suitability as a potential storage area for high-level nuclear waste. Water-level measurements were taken either manually, using water-level measuring equipment such as steel tapes, or continuously, using automated data recorders and pressure transducers. This report presents precision range and accuracy data established for manual water-level measurements taken in the Yucca Mountain area, 1988-90. Precision and accuracy ranges were determined for all phases of the water-level measuring process, and overall accuracy ranges are presented. Precision ranges were determined for three steel tapes using a total of 462 data points. Mean precision ranges of these three tapes ranged from 0.014 foot to 0.026 foot. A mean precision range of 0.093 foot was calculated for the multiconductor cable, using 72 data points. Mean accuracy values were calculated on the basis of calibrations of the steel tapes and the multiconductor cable against a reference steel tape. The mean accuracy values of the steel tapes ranged from 0.053 foot, based on three data points, to 0.078 foot, based on six data points. The mean accuracy of the multiconductor cable was 0.15 foot, based on six data points. Overall accuracy of the water-level measurements was calculated by taking the square root of the sum of the squares of the individual accuracy values. Overall accuracy was calculated to be 0.36 foot for water-level measurements taken with steel tapes, without accounting for the inaccuracy of borehole deviations from vertical. An overall accuracy of 0.36 foot for measurements made with steel tapes is considered satisfactory for this project.
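The root-sum-of-squares combination described above can be sketched in one line. The component values in the test are hypothetical, not the report's actual inputs:

```python
import math

def overall_accuracy(component_errors_ft):
    """Combine independent accuracy components (in feet) by the root sum of
    squares, as done for the overall water-level measurement accuracy."""
    return math.sqrt(sum(e * e for e in component_errors_ft))
```

This combination rule is valid when the component errors are independent, which is the usual assumption behind quadrature sums of uncertainty.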

  20. An evaluation of the accuracy and precision of a stand-alone submersible continuous ruminal pH measurement system.

    PubMed

    Penner, G B; Beauchemin, K A; Mutsvangwa, T

    2006-06-01

    The objectives of this study were 1) to develop and evaluate the accuracy and precision of a new stand-alone submersible continuous ruminal pH measurement system called the Lethbridge Research Centre ruminal pH measurement system (LRCpH; Experiment 1); 2) to establish the accuracy and precision of a well-documented, previously used continuous indwelling ruminal pH system (CIpH) to ensure that the new system (LRCpH) was as accurate and precise as the previous system (CIpH; Experiment 2); and 3) to determine the required frequency for pH electrode standardization by comparing baseline millivolt readings of pH electrodes in pH buffers 4 and 7 after 0, 24, 48, and 72 h of ruminal incubation (Experiment 3). In Experiment 1, 6 pregnant Holstein heifers, 3 lactating, primiparous Holstein cows, and 2 Black Angus heifers were used. All experimental animals were fitted with permanent ruminal cannulas. In Experiment 2, the 3 cannulated, lactating, primiparous Holstein cows were used. In both experiments, ruminal pH was determined continuously using indwelling pH electrodes. Mean pH values were then compared with ruminal pH values obtained from spot samples of ruminal fluid (MANpH) taken at the same time. A correlation coefficient accounting for repeated measures was calculated, and the results were used to calculate the concordance correlation to examine the relationships between the LRCpH-derived values and MANpH, and the CIpH-derived values and MANpH. In Experiment 3, the 6 pregnant Holstein heifers were used along with 6 new submersible pH electrodes. In Experiments 1 and 2, the comparison of the LRCpH output (1- and 5-min averages) to MANpH had higher correlation coefficients after accounting for repeated measures (0.98 and 0.97 for 1- and 5-min averages, respectively) and concordance correlation coefficients (0.96 and 0.97 for 1- and 5-min averages, respectively) than the comparison of CIpH to MANpH (0.88 and 0.87, correlation coefficient and concordance
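The concordance correlation used in this record can be sketched with Lin's estimator over paired measurements; the function name and the population-moment formulation are illustrative assumptions:

```python
import statistics

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired series,
    e.g. continuous indwelling pH averages vs manual spot-sample pH."""
    n = len(x)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike Pearson correlation, the CCC penalizes systematic offsets between the two methods, which is why it is preferred for method-agreement questions like LRCpH vs MANpH.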

  1. Single-frequency receivers as master permanent stations in GNSS networks: precision and accuracy of the positioning in mixed networks

    NASA Astrophysics Data System (ADS)

    Dabove, Paolo; Manzino, Ambrogio Maria

    2015-04-01

    The use of GPS/GNSS instruments is common practice worldwide, at both commercial and academic research levels. Over the last ten years, Continuous Operating Reference Station (CORS) networks have been established in order to extend precise positioning to more than 15 km from the master station. In this context, the Geomatics Research Group of DIATI at the Politecnico di Torino has carried out several experiments to evaluate the precision achievable with different GNSS receivers (geodetic and mass-market) and antennas when a CORS network is used. This work builds on that research, focusing in particular on the usefulness of single-frequency permanent stations for densifying the existing CORS networks, especially for monitoring purposes. Two different types of CORS network are available in Italy today: the so-called "regional network" and the "national network", with mean inter-station distances of about 25-30 and 50-70 km, respectively. These distances are adequate for many applications (e.g. mobile mapping) if geodetic instruments are used, but become limiting if mass-market instruments are used or if the inter-station distance between master and rover increases. In this context, some innovative GNSS networks were developed and tested, analyzing rover positioning performance in terms of quality, accuracy and reliability in both real-time and post-processing approaches. Single-frequency GNSS receivers impose some limits, in particular a restricted baseline length and the need to fix the phase ambiguity correctly both for the network and for the rover. These factors play a crucial role in reaching a positioning with a good level of accuracy (centimetric or better) in a short time and with high reliability. The goal of this work is to investigate the

  2. Standardization of Operator-Dependent Variables Affecting Precision and Accuracy of the Disk Diffusion Method for Antibiotic Susceptibility Testing.

    PubMed

    Hombach, Michael; Maurer, Florian P; Pfiffner, Tamara; Böttger, Erik C; Furrer, Reinhard

    2015-12-01

    Parameters like zone reading, inoculum density, and plate streaking influence the precision and accuracy of disk diffusion antibiotic susceptibility testing (AST). While improved reading precision has been demonstrated using automated imaging systems, standardization of the inoculum and of plate streaking have not been systematically investigated yet. This study analyzed whether photometrically controlled inoculum preparation and/or automated inoculation could further improve the standardization of disk diffusion. Suspensions of Escherichia coli ATCC 25922 and Staphylococcus aureus ATCC 29213 of 0.5 McFarland standard were prepared by 10 operators using both visual comparison to turbidity standards and a Densichek photometer (bioMérieux), and the resulting CFU counts were determined. Furthermore, eight experienced operators each inoculated 10 Mueller-Hinton agar plates using a single 0.5 McFarland standard bacterial suspension of E. coli ATCC 25922 using regular cotton swabs, dry flocked swabs (Copan, Brescia, Italy), or an automated streaking device (BD-Kiestra, Drachten, Netherlands). The mean CFU counts obtained from 0.5 McFarland standard E. coli ATCC 25922 suspensions were significantly different for suspensions prepared by eye and by Densichek (P < 0.001). Preparation by eye resulted in counts that were closer to the CLSI/EUCAST target of 10(8) CFU/ml than those resulting from Densichek preparation. No significant differences in the standard deviations of the CFU counts were observed. The interoperator differences in standard deviations when dry flocked swabs were used decreased significantly compared to the differences when regular cotton swabs were used, whereas the mean of the standard deviations of all operators together was not significantly altered. In contrast, automated streaking significantly reduced both interoperator differences, i.e., the individual standard deviations, compared to the standard deviations for the manual method, and the mean of

  4. Reproducibility blues.

    PubMed

    Pulverer, Bernd

    2015-11-12

    Research findings advance science only if they are significant, reliable and reproducible. Scientists and journals must publish robust data in a way that renders it optimally reproducible. Reproducibility has to be incentivized and supported by the research infrastructure but without dampening innovation. PMID:26538323

  6. Precision and accuracy in the quantitative analysis of biological samples by accelerator mass spectrometry: application in microdose absolute bioavailability studies.

    PubMed

    Gao, Lan; Li, Jing; Kasserra, Claudia; Song, Qi; Arjomand, Ali; Hesk, David; Chowdhury, Swapan K

    2011-07-15

    Determination of the pharmacokinetics and absolute bioavailability of an experimental compound, SCH 900518, following an 89.7 nCi (100 μg) intravenous (iv) dose of (14)C-SCH 900518 given 2 h after oral administration of 200 mg of nonradiolabeled SCH 900518 to six healthy male subjects is described. The plasma concentration of SCH 900518 was measured using a validated LC-MS/MS system, and accelerator mass spectrometry (AMS) was used for quantitative plasma (14)C-SCH 900518 concentration determination. Calibration standards and quality controls were included in every batch of sample analysis by AMS to ensure acceptable assay quality. Plasma (14)C-SCH 900518 concentrations were derived from the regression function established from the calibration standards, rather than directly from isotopic ratios from AMS measurement. The precision and accuracy of quality controls and calibration standards met the requirements of bioanalytical guidance (U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Center for Veterinary Medicine. Guidance for Industry: Bioanalytical Method Validation (ucm070107), May 2001. http://www.fda.gov/downloads/Drugs/GuidanceCompilanceRegulatoryInformation/Guidances/ucm070107.pdf ). The AMS measurement had a linear response range from 0.0159 to 9.07 dpm/mL for plasma (14)C-SCH 900518 concentrations. The CV and accuracy were 3.4-8.5% and 94-108% (82-119% for the lower limit of quantitation (LLOQ)), respectively, with a correlation coefficient of 0.9998. The absolute bioavailability was calculated from the dose-normalized area under the curve of the iv and oral doses after the plasma concentrations were plotted against the sampling time post oral dose. The mean absolute bioavailability of SCH 900518 was 40.8% (range 16.8-60.6%). 
The typical accuracy and standard deviation in AMS quantitative analysis of drugs from human plasma samples have been reported for the first time, and the impact of these
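    Deriving concentrations from a regression over calibration standards, rather than directly from instrument ratios, is the usual back-calculation pattern. A minimal ordinary-least-squares sketch with invented standard values (not the validated assay itself):

```python
from statistics import fmean

def fit_line(x, y):
    """Ordinary least-squares fit y = intercept + slope * x."""
    mx, my = fmean(x), fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

def back_calculate(response, intercept, slope):
    """Concentration of an unknown sample from its instrument response."""
    return (response - intercept) / slope

# hypothetical standards: nominal concentrations vs. instrument responses
intercept, slope = fit_line([0, 1, 2, 4], [0.1, 0.6, 1.1, 2.1])
unknown_conc = back_calculate(1.1, intercept, slope)
```

    Quality-control samples run in the same batch are then back-calculated the same way and checked against their nominal values to establish the batch's precision and accuracy.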

  7. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-05-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3 % (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14 % from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks in addition to long-term trend analysis. Such retrievals complement the high accuracy and precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON) with time series dating back 15 yr or so before TCCON operations began. MIR-GBM v1.0 uses HITRAN 2000 (including the 2001 update release) and 3 spectral micro windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the ratio of root-mean-square spectral residuals and information content (<0.15 %). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from National Center for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are HDO/H2O-CH4 interference errors (seasonal bias up to ≈4 %). Therefore, interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative for all NDACC sites (Wollongong maximum = 44.9 mm, Garmisch mean = 14.9 mm

  8. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-09-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3% (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14% from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks in addition to long-term trend analysis. Such retrievals complement the high accuracy and precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON) with time series dating back 15 years or so before TCCON operations began. MIR-GBM v1.0 uses HITRAN 2000 (including the 2001 update release) and 3 spectral micro windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the goodness of fit (χ2 < 1) as well as for the ratio of root-mean-square spectral noise and information content (<0.15%). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from National Center for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are systematic HDO/H2O-CH4 interference errors leading to a seasonal bias up to ≈5%. Therefore, interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative for all NDACC
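    The first-order Tikhonov constraint described above takes the standard regularized least-squares form. As a generic sketch (K, y, and α here are the conventional symbols for the forward-model Jacobian, the measurement vector, and the regularization strength; they are not named in the abstract):

```latex
\hat{x} = \arg\min_{x}\; \lVert y - Kx \rVert^{2} + \alpha \lVert L_{1} x \rVert^{2},
\qquad (L_{1} x)_{i} = x_{i+1} - x_{i},
\qquad \hat{x} = \left( K^{\mathsf{T}} K + \alpha L_{1}^{\mathsf{T}} L_{1} \right)^{-1} K^{\mathsf{T}} y .
```

    Here x is the state vector in per cent of volume mixing ratio, L1 is the first-difference operator that penalizes profile roughness, and, per the abstract, α is tuned to minimize diurnal variation without damping the seasonal cycle.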

  9. Evaluation of the influence of cardiac motion on the accuracy and reproducibility of longitudinal measurements and the corresponding image quality in optical frequency domain imaging: an ex vivo investigation of the optimal pullback speed.

    PubMed

    Koyama, Kohei; Yoneyama, Kihei; Mitarai, Takanobu; Kuwata, Shingo; Kongoji, Ken; Harada, Tomoo; Akashi, Yoshihiro J

    2015-08-01

    Longitudinal measurement using intravascular ultrasound is limited because the motorized pullback device assumes no cardiac motion. A newly developed intracoronary imaging modality, optical frequency domain imaging (OFDI), has higher resolution and an increased auto-pullback speed with presumably lesser susceptibility to cardiac motion artifacts during pullback for longitudinal measurement; however, it has not been fully investigated. We aimed to clarify the influence of cardiac motion on the accuracy and reproducibility of longitudinal measurements obtained using OFDI and to determine the optimal pullback speed. This ex vivo study included 31 stents deployed in the mid left anterior descending artery under phantom heartbeat and coronary flow simulation. Longitudinal stent lengths were measured twice using OFDI at three pullback speeds. Differences in stent lengths between OFDI and microscopy and between two repetitive pullbacks were assessed to determine accuracy and reproducibility. Furthermore, three-dimensional (3D) reconstruction was used for evaluating image quality. With regard to differences in stent length between OFDI and microscopy, the intraclass correlation coefficient values were 0.985, 0.994, and 0.995 at 10, 20, and 40 mm/s, respectively. With regard to reproducibility, the values were 0.995, 0.996, and 0.996 at 10, 20, and 40 mm/s, respectively. 3D reconstruction showed a superior image quality at 10 and 20 mm/s compared with that at 40 mm/s. OFDI demonstrated high accuracy and reproducibility for longitudinal stent measurements. Moreover, its accuracy and reproducibility were remarkable at a higher pullback speed. A 20-mm/s pullback speed may be optimal for clinical and research purposes.
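    The intraclass correlation coefficients quoted above summarize agreement between repeated pullbacks per stent. The paper does not state which ICC form was used, so this is a sketch of the simplest one-way random-effects variant, ICC(1,1), on invented repeated measurements:

```python
def icc_oneway(data):
    """One-way random-effects ICC(1,1) for n targets each measured k times.

    data: list of lists; data[i] holds the k repeated measurements of target i.
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    means = [sum(row) / k for row in data]
    # between-target and within-target mean squares from one-way ANOVA
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for row, m in zip(data, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

    Perfectly repeatable measurements give an ICC of 1; values near the study's 0.985-0.996 indicate that almost all variance comes from true differences between stents rather than measurement noise.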

  10. Improved Accuracy and Precision in LA-ICP-MS U-Th/Pb Dating of Zircon through the Reduction of Crystallinity Related Bias

    NASA Astrophysics Data System (ADS)

    Matthews, W.; McDonald, A.; Hamilton, B.; Guest, B.

    2015-12-01

    The accuracy of zircon U-Th/Pb ages generated by LA-ICP-MS is limited by systematic bias resulting from differences in crystallinity of the primary reference and that of the unknowns being analyzed. In general, the use of a highly crystalline primary reference will tend to bias analyses of materials of lesser crystallinity toward older ages. When dating igneous rocks, bias can be minimized by matching the crystallinity of the primary reference to that of the unknowns. However, the crystallinity of the unknowns is often not well constrained prior to ablation, as it is a function of U and Th concentration, crystallization age, and thermal history. Likewise, selecting an appropriate primary reference is impossible when dating detrital rocks where zircons with differing ages, protoliths, and thermal histories are analyzed in the same session. We investigate the causes of systematic bias using Raman spectroscopy and measurements of the ablated pit geometry. The crystallinity of five zircon reference materials with ages between 28.2 Ma and 2674 Ma was estimated using Raman spectroscopy. Zircon references varied from being highly crystalline to highly metamict, with individual reference materials plotting as distinct clusters in peak wavelength versus Full-Width Half-Maximum (FWHM) space. A strong positive correlation (R2=0.69) was found between the FWHM for the band at ~1000 cm-1 in the Raman spectrum of the zircon and its ablation rate, suggesting the degree of crystallinity is a primary control on ablation rate in zircons. A moderate positive correlation (R2=0.37) was found between ablation rate and the difference between the age determined by LA-ICP-MS and the accepted ID-TIMS age (ΔAge). We use the measured, intra-sessional relationship between ablation rate and ΔAge of secondary references to reduce systematic bias. Rapid, high-precision measurement of ablated pit geometries using an optical profilometer and custom MatLab algorithm facilitates the implementation
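    The R² values quoted above (0.69 for FWHM vs. ablation rate, 0.37 for ablation rate vs. ΔAge) are coefficients of determination for simple linear fits, i.e. squared Pearson correlations. A minimal sketch with illustrative data only:

```python
from statistics import fmean

def r_squared(x, y):
    """Coefficient of determination of a simple linear fit (squared Pearson r)."""
    mx, my = fmean(x), fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)
```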

  11. Accuracy, precision and response time of consumer fork, remote digital probe and disposable indicator thermometers for cooked ground beef patties and chicken breasts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nine different commercially available instant-read consumer thermometers (forks, remotes, digital probe and disposable color change indicators) were tested for accuracy and precision compared to a calibrated thermocouple in 80 percent and 90 percent lean ground beef patties, and boneless and bone-in...

  12. An Examination of the Precision and Technical Accuracy of the First Wave of Group-Randomized Trials Funded by the Institute of Education Sciences

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Raudenbush, Stephen W.

    2009-01-01

    This article examines the power analyses for the first wave of group-randomized trials funded by the Institute of Education Sciences. Specifically, it assesses the precision and technical accuracy of the studies. The authors identified the appropriate experimental design and estimated the minimum detectable standardized effect size (MDES) for each…

  13. Reproducible Science

    PubMed Central

    Casadevall, Arturo; Fang, Ferric C.

    2010-01-01

    The reproducibility of an experimental result is a fundamental assumption in science. Yet, results that are merely confirmatory of previous findings are given low priority and can be difficult to publish. Furthermore, the complex and chaotic nature of biological systems imposes limitations on the replicability of scientific experiments. This essay explores the importance and limits of reproducibility in scientific manuscripts. PMID:20876290

  14. Deformable Image Registration for Adaptive Radiation Therapy of Head and Neck Cancer: Accuracy and Precision in the Presence of Tumor Changes

    SciTech Connect

    Mencarelli, Angelo; Kranen, Simon Robert van; Hamming-Vrieze, Olga; Beek, Suzanne van; Nico Rasch, Coenraad Robert; Herk, Marcel van; Sonke, Jan-Jakob

    2014-11-01

    Purpose: To compare deformable image registration (DIR) accuracy and precision for normal and tumor tissues in head and neck cancer patients during the course of radiation therapy (RT). Methods and Materials: Thirteen patients with oropharyngeal tumors, who underwent submucosal implantation of small gold markers (average 6, range 4-10) around the tumor and were treated with RT were retrospectively selected. Two observers identified 15 anatomical features (landmarks) representative of normal tissues in the planning computed tomography (pCT) scan and in weekly cone beam CTs (CBCTs). Gold markers were digitally removed after semiautomatic identification in pCTs and CBCTs. Subsequently, landmarks and gold markers on pCT were propagated to CBCTs, using a b-spline-based DIR and, for comparison, rigid registration (RR). To account for observer variability, the pair-wise difference analysis of variance method was applied. DIR accuracy (systematic error) and precision (random error) for landmarks and gold markers were quantified. Time trend of the precisions for RR and DIR over the weekly CBCTs were evaluated. Results: DIR accuracies were submillimeter and similar for normal and tumor tissue. DIR precision (1 SD) on the other hand was significantly different (P<.01), with 2.2 mm vector length in normal tissue versus 3.3 mm in tumor tissue. No significant time trend in DIR precision was found for normal tissue, whereas in tumor, DIR precision was significantly (P<.009) degraded during the course of treatment by 0.21 mm/week. Conclusions: DIR for tumor registration proved to be less precise than that for normal tissues due to limited contrast and complex non-elastic tumor response. Caution should therefore be exercised when applying DIR for tumor changes in adaptive procedures.

  15. Digital angiographic impulse response analysis of regional myocardial perfusion: linearity, reproducibility, accuracy, and comparison with conventional indicator dilution curve parameters in phantom and canine models.

    PubMed

    Eigler, N L; Pfaff, J M; Zeiher, A; Whiting, J S; Forrester, J S

    1989-05-01

    The system mean transit time (Tsys) of the impulse response function describing contrast material transit through the coronary circulation was determined from serial digital angiographic images. The linearity, reproducibility, and relations with regional myocardial perfusion and conventional time-density curve parameters, time to peak concentration (TPC), and exponential washout rate (k) were assessed in a dynamic flow x-ray phantom (n = 46) and in six open-chest dogs (n = 102) while coronary flow was altered by stenosis and/or hyperemic stimuli. In the phantom studies, the inverse of the system mean transit time (Tsys-1) closely predicted flow/volume (r = 0.99, slope = 0.99). In dogs, Tsys-1 was independent of the shape of the contrast bolus injection (single or double-peaked), class of contrast agent (ionic or nonionic), the type of hyperemic stimulus (dipyridamole, dipyridamole plus norepinephrine, transient total occlusion, or ionic contrast media), and was highly reproducible between adjacent myocardial regions served by the same artery (r = 0.98 +/- 0.01). There was a strong correlation between Tsys-1 and regional coronary flow for stenotic and/or hyperemic vessels (r = 0.94, distribution volume = 14.9 ml/100 g) over a wide range (0-514 ml/min/100 g). Tsys-1 performed better than conventional time-density curve parameters TPC-1 and k for predicting phantom flow/volume ratios and regional myocardial blood flow in the dog. These data suggest that both digital coronary angiography and coronary contrast transit can be modeled as linear systems and that impulse response analysis may provide accurate and reproducible estimates of regional myocardial blood flow.
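    The system mean transit time is the first moment of the impulse response. A discrete sketch over a sampled time-density curve (the deconvolution of the injection bolus from the measured curves is not reproduced here):

```python
def mean_transit_time(times, density):
    """First moment of a sampled time-density (impulse response) curve:
    T = sum(t * h) / sum(h), a discrete form of ∫t·h(t)dt / ∫h(t)dt."""
    area = sum(density)
    return sum(t * h for t, h in zip(times, density)) / area
```

    The flow/volume ratio is then estimated as 1/T, matching the Tsys⁻¹ vs. flow/volume relation found in the phantom studies.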

  16. International normalised ratio (INR) measured on the CoaguChek S and XS compared with the laboratory for determination of precision and accuracy.

    PubMed

    Christensen, Thomas D; Larsen, Torben B; Jensen, Claus; Maegaard, Marianne; Sørensen, Benny

    2009-03-01

    Oral anticoagulation therapy is monitored by the use of the international normalised ratio (INR). Patients performing self-management estimate INR using a coagulometer, but studies have been partly flawed regarding the estimated precision and accuracy. The objective was to estimate the imprecision and accuracy of two different coagulometers (CoaguChek S and XS). Twenty-four patients treated with coumarin were prospectively followed for six weeks. INRs were analyzed weekly in duplicate on both coagulometers and compared with results from the hospital laboratory. Statistical analysis included a Bland-Altman plot, 95% limits of agreement, the coefficient of variation (CV), and an analysis of variance using a mixed-effect model. Comparing 141 duplicate measurements (a total of 564 measurements) of INR, we found that the CoaguChek S and CoaguChek XS had a precision (CV) of 3.4% and 2.3%, respectively. Regarding analytical accuracy, the INR measurements tended to be lower on the coagulometers, and regarding diagnostic accuracy the CoaguChek S and CoaguChek XS deviated more than 15% from the laboratory measurements in 40% and 43% of the measurements, respectively. In conclusion, the precision of the coagulometers was found to be good, but only the CoaguChek XS had a precision within the predefined limit of 3%. Regarding analytical accuracy, the INR measurements tended to be lower on the coagulometers compared to the laboratory. A large proportion of the coagulometer measurements deviated more than 15% from the laboratory measurements. Whether this will have a clinical impact awaits further studies.
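    The precision (CV) and agreement statistics used in this study follow standard definitions. A minimal sketch with invented paired INR readings, not the trial data:

```python
from statistics import fmean, stdev

def coefficient_of_variation(values):
    """CV in percent: sample SD relative to the mean."""
    return 100.0 * stdev(values) / fmean(values)

def bland_altman_limits(a, b):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    diffs = [x - y for x, y in zip(a, b)]
    bias, sd = fmean(diffs), stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

    The Bland-Altman limits describe the range within which 95% of coagulometer-vs-laboratory differences are expected to fall, which is the relevant quantity for judging whether device bias matters clinically.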

  17. Towards the GEOSAT Follow-On Precise Orbit Determination Goals of High Accuracy and Near-Real-Time Processing

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Zelensky, Nikita P.; Chinn, Douglas S.; Beckley, Brian D.; Lillibridge, John L.

    2006-01-01

    The US Navy's GEOSAT Follow-On spacecraft (GFO) primary mission objective is to map the oceans using a radar altimeter. Satellite laser ranging data, especially in combination with altimeter crossover data, offer the only means of determining high-quality precise orbits. Two tuned gravity models, PGS7727 and PGS7777b, were created at NASA GSFC for GFO that reduce the predicted radial orbit error through degree 70 to 13.7 and 10.0 mm, respectively. A macromodel was developed to model the nonconservative forces, and the SLR spacecraft measurement offset was adjusted to remove a mean bias. Using these improved models, satellite laser ranging data, altimeter crossover data, and Doppler data are used to compute daily medium-precision orbits with a latency of less than 24 hours. Final precise orbits are also computed using these tracking data and exported with a latency of three to four weeks to NOAA for use on the GFO Geophysical Data Records (GDRs). The estimated orbit precision of the daily orbits is between 10 and 20 cm, whereas the precise orbits have a precision of 5 cm.

  18. The precision and accuracy of iterative and non-iterative methods of photopeak integration in activation analysis, with particular reference to the analysis of multiplets

    USGS Publications Warehouse

    Baedecker, P.A.

    1977-01-01

    The relative precisions obtainable using two digital methods and three iterative least-squares fitting procedures of photopeak integration have been compared empirically using 12 replicate counts of a test sample with 14 photopeaks of varying intensity. The accuracy with which the various iterative fitting methods could analyse synthetic doublets has also been evaluated and compared with a simple non-iterative approach. © 1977 Akadémiai Kiadó.

  19. Investigation of 3D glenohumeral displacements from 3D reconstruction using biplane X-ray images: Accuracy and reproducibility of the technique and preliminary analysis in rotator cuff tear patients.

    PubMed

    Zhang, Cheng; Skalli, Wafa; Lagacé, Pierre-Yves; Billuart, Fabien; Ohl, Xavier; Cresson, Thierry; Bureau, Nathalie J; Rouleau, Dominique M; Roy, André; Tétreault, Patrice; Sauret, Christophe; de Guise, Jacques A; Hagemeister, Nicola

    2016-08-01

    Rotator cuff (RC) tears may be associated with increased glenohumeral instability; however, this instability is difficult to quantify using currently available diagnostic tools. Recently, a three-dimensional (3D) reconstruction and registration method for the scapula and humeral head, based on sequences of low-dose biplane X-ray images, has been proposed for glenohumeral displacement assessment. This research aimed to evaluate the accuracy and reproducibility of this technique and to investigate its potential with a preliminary application comparing RC tear patients and asymptomatic volunteers. Accuracy was assessed using CT scan model registration on biplane X-ray images for five cadaveric shoulder specimens and showed differences ranging from 0.6 to 1.4 mm depending on the direction of interest. Intra- and interobserver reproducibility was assessed by having two operators repeat the reconstruction of five subjects three times, yielding 95% confidence intervals ranging from ±1.8 to ±3.6 mm. The intraclass correlation coefficient varied between 0.84 and 0.98. Comparison between RC tear patients and asymptomatic volunteers showed differences in glenohumeral displacements, especially in the superoinferior direction when the shoulder was abducted at 20° and 45°. This study thus assessed the accuracy of the low-dose 3D biplane X-ray reconstruction technique for glenohumeral displacement assessment and showed its potential in biomechanical and clinical research.

  20. Precision of high-resolution multibeam echo sounding coupled with high-accuracy positioning in a shallow water coastal environment

    NASA Astrophysics Data System (ADS)

    Ernstsen, Verner B.; Noormets, Riko; Hebbeln, Dierk; Bartholomä, Alex; Flemming, Burg W.

    2006-09-01

    Over 4 years, repetitive bathymetric measurements of a shipwreck in the Grådyb tidal inlet channel in the Danish Wadden Sea were carried out using a state-of-the-art high-resolution multibeam echosounder (MBES) coupled with a real-time long range kinematic (LRK™) global positioning system. Seven measurements during a single survey in 2003 (n=7) revealed a horizontal and vertical precision of the MBES system of ±20 and ±2 cm, respectively, at a 95% confidence level. By contrast, four annual surveys from 2002 to 2005 (n=4) yielded a horizontal and vertical precision (at 95% confidence level) of only ±30 and ±8 cm, respectively. This difference in precision can be explained by three main factors: (1) the dismounting of the system between the annual surveys, (2) rougher sea conditions during the survey in 2004 and (3) the limited number of annual surveys. In general, the precision achieved here did not correspond to the full potential of the MBES system, as this could certainly have been improved by an increase in coverage density (soundings/m²), achievable by reducing the survey speed of the vessel. Nevertheless, precision was higher than that reported to date for earlier offshore test surveys using comparable equipment.

  1. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method: We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results: Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion: Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  2. Accuracy and reproducibility of patient-specific hemodynamic models of stented intracranial aneurysms: report on the Virtual Intracranial Stenting Challenge 2011.

    PubMed

    Cito, S; Geers, A J; Arroyo, M P; Palero, V R; Pallarés, J; Vernet, A; Blasco, J; San Román, L; Fu, W; Qiao, A; Janiga, G; Miura, Y; Ohta, M; Mendina, M; Usera, G; Frangi, A F

    2015-01-01

    Validation studies are prerequisites for computational fluid dynamics (CFD) simulations to be accepted as part of clinical decision-making. This paper reports on the 2011 edition of the Virtual Intracranial Stenting Challenge. The challenge aimed to assess the reproducibility with which research groups can simulate the velocity field in an intracranial aneurysm, both untreated and treated with five different configurations of high-porosity stents. Particle imaging velocimetry (PIV) measurements were obtained to validate the untreated velocity field. Six participating groups, using three CFD solvers between them, were provided with surface meshes of the vascular geometry and the deployed stent geometries, and flow rate boundary conditions for all inlets and outlets. As output, they were invited to submit an abstract to the 8th International Interdisciplinary Cerebrovascular Symposium 2011 (ICS'11), outlining their methods and giving their interpretation of the performance of each stent configuration. After the challenge, all CFD solutions were collected and analyzed. To quantitatively analyze the data, we calculated the root-mean-square error (RMSE) over uniformly distributed nodes on a plane slicing the main flow jet along its axis and normalized it by the maximum velocity on the slice of the untreated case (NRMSE). Good agreement was found between CFD and PIV, with an NRMSE of 7.28%. Excellent agreement was found between CFD solutions, both untreated and treated. The maximum difference between any two groups (along a line perpendicular to the main flow jet) was 4.0 mm/s, i.e. 4.1% of the maximum velocity of the untreated case, and the average NRMSE was 0.47% (range 0.28–1.03%). In conclusion, given geometry and flow rates, research groups can accurately simulate the velocity field inside an intracranial aneurysm, as assessed by comparison with in vitro measurements, and find excellent agreement on the hemodynamic effect of different stent configurations.
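The NRMSE metric described above is simply the RMSE over the sampled nodes divided by the peak velocity of the untreated case; a sketch with hypothetical velocity samples (the function and data names are illustrative, not from the challenge code):

```python
import numpy as np

def nrmse_percent(v_cfd, v_ref, v_max_untreated):
    """Root-mean-square error over matched sample points, normalized by
    the maximum velocity of the untreated case and expressed in percent."""
    diff = np.asarray(v_cfd, float) - np.asarray(v_ref, float)
    rmse = np.sqrt(np.mean(diff ** 2))
    return 100.0 * rmse / v_max_untreated

# Hypothetical CFD and PIV velocity samples (mm/s) on the slicing plane
cfd = [80.0, 95.0, 60.0]
piv = [82.0, 90.0, 63.0]
print(nrmse_percent(cfd, piv, 98.0))
```

Normalizing by one shared reference velocity makes the treated and untreated comparisons directly commensurable, which is why a single untreated maximum is used for all stent configurations.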

  3. Assessment of the Precision and Reproducibility of Ventricular Volume, Function and Mass Measurements with Ferumoxytol-Enhanced 4D Flow MRI

    PubMed Central

    Hanneman, Kate; Kino, Aya; Cheng, Joseph Y; Alley, Marcus T; Vasanawala, Shreyas S

    2016-01-01

    Purpose: To compare the precision and inter-observer agreement of ventricular volume, function and mass quantification by three-dimensional time-resolved (4D) flow MRI relative to cine steady-state free precession (SSFP). Materials and Methods: With research board approval, informed consent, and HIPAA compliance, 22 consecutive patients with congenital heart disease (CHD) (10 males, 6.4±4.8 years) referred for 3T ferumoxytol-enhanced cardiac MRI were prospectively recruited. Complete ventricular coverage with standard 2D short-axis cine SSFP and whole-chest coverage with axial 4D flow were obtained. Two blinded radiologists independently segmented images for left ventricular (LV) and right ventricular (RV) myocardium at end systole (ES) and end diastole (ED). Statistical analysis included linear regression, ANOVA, Bland-Altman (BA) analysis, and intra-class correlation (ICC). Results: Significant positive correlations were found between 4D flow and SSFP for ventricular volumes (r = 0.808–0.972, p<0.001), ejection fraction (EF) (r = 0.900–0.928, p<0.001), and mass (r = 0.884–0.934, p<0.001). BA relative limits of agreement for both ventricles were between −52% and 34% for volumes, −29% and 27% for EF, and −41% and 48% for mass, with wider limits of agreement for the RV compared to the LV. There was no significant difference between techniques with respect to mean square difference of ED-ES mass for either LV (F=2.05, p=0.159) or RV (F=0.625, p=0.434). Inter-observer agreement was moderate to good with both 4D flow (ICC 0.523–0.993) and SSFP (ICC 0.619–0.982), with overlapping confidence intervals. Conclusion: Quantification of ventricular volume, function and mass can be accomplished with 4D flow MRI with precision and inter-observer agreement comparable to that of cine SSFP. PMID:26871420

  4. Particle-induced x-ray emission (PIXE) analysis of biological materials: Precision, accuracy and application to cancer tissues

    NASA Astrophysics Data System (ADS)

    Maenhaut, W.; De Reu, L.; Van Rinsvelt, H. A.; Cafmeyer, J.; Van Espen, P.

    1980-01-01

    An autopsy kidney, a human serum sample and NBS bovine liver were analyzed by both PIXE and instrumental neutron activation analysis (INAA). Several target preparation procedures were investigated. The reproducibility of the PIXE analysis, as determined by analyzing up to 20 targets from the same material, was of the order of 10% or better. For most elements a good agreement was obtained between PIXE and INAA, indicating that PIXE can yield data which are accurate to within 10%. The PIXE technique was also applied to cancerous and normal tissue sections of the same organ of patients, showing renal cell and other types of carcinoma. Substantial differences were often observed between the trace element concentration patterns of the cancerous and normal sections.

  5. Elusive reproducibility.

    PubMed

    Gori, Gio Batta

    2014-08-01

    Reproducibility remains a mirage for many biomedical studies because inherent experimental uncertainties generate idiosyncratic outcomes. The authentication and error rates of primary empirical data are often elusive, while multifactorial confounders beset experimental setups. Substantive methodological remedies are difficult to conceive, signifying that many biomedical studies yield more or less plausible results, depending on the attending uncertainties. Real life applications of those results remain problematic, with important exceptions for counterfactual field validations of strong experimental signals, notably for some vaccines and drugs, and for certain safety and occupational measures. It is argued that industrial, commercial and public policies and regulations could not ethically rely on unreliable biomedical results; rather, they should be rationally grounded on transparent cost-benefit tradeoffs. PMID:24882687

  7. Progress integrating ID-TIMS U-Pb geochronology with accessory mineral geochemistry: towards better accuracy and higher precision time

    NASA Astrophysics Data System (ADS)

    Schoene, B.; Samperton, K. M.; Crowley, J. L.; Cottle, J. M.

    2012-12-01

    It is increasingly common that hand samples of plutonic and volcanic rocks contain zircon with dates that span between zero and >100 ka. This recognition comes from the increased application of U-series geochronology on young volcanic rocks and the increased precision to better than 0.1% on single zircons by the U-Pb ID-TIMS method. It has thus become more difficult to interpret such complicated datasets in terms of ashbed eruption or magma emplacement, which are critical constraints for geochronologic applications ranging from biotic evolution and the stratigraphic record to magmatic and metamorphic processes in orogenic belts. It is important, therefore, to develop methods that aid in interpreting which minerals, if any, date the targeted process. One promising tactic is to better integrate accessory mineral geochemistry with high-precision ID-TIMS U-Pb geochronology. These dual constraints can 1) identify cogenetic populations of minerals, and 2) record magmatic or metamorphic fluid evolution through time. Goal (1) has been widely sought with in situ geochronology and geochemical analysis but is limited by low-precision dates. Recent work has attempted to bridge this gap by retrieving the typically discarded elution from ion exchange chemistry that precedes ID-TIMS U-Pb geochronology and analyzing it by ICP-MS (U-Pb TIMS-TEA). The result integrates geochemistry and high-precision geochronology from the exact same volume of material. The limitation of this method is the relatively coarse spatial resolution compared to in situ techniques, and thus averages potentially complicated trace element profiles through single minerals or mineral fragments. In continued work, we test the effect of this on zircon by beginning with CL imaging to reveal internal zonation and growth histories. This is followed by in situ LA-ICPMS trace element transects of imaged grains to reveal internal geochemical zonation. The same grains are then removed from grain-mount, fragmented, and

  8. Reproducibility in a multiprocessor system

    DOEpatents

    Bellofatto, Ralph A; Chen, Dong; Coteus, Paul W; Eisley, Noel A; Gara, Alan; Gooding, Thomas M; Haring, Rudolf A; Heidelberger, Philip; Kopcsay, Gerard V; Liebsch, Thomas A; Ohmacht, Martin; Reed, Don D; Senger, Robert M; Steinmacher-Burow, Burkhard; Sugawara, Yutaka

    2013-11-26

    Fixing a problem is usually greatly aided if the problem is reproducible. To ensure reproducibility of a multiprocessor system, the following aspects are proposed: a deterministic system start state, a single system clock, phase alignment of clocks in the system, system-wide synchronization events, reproducible execution of system components, deterministic chip interfaces, zero-impact communication with the system, precise stop of the system and a scan of the system state.

  9. Clock accuracy and precision evolve as a consequence of selection for adult emergence in a narrow window of time in fruit flies Drosophila melanogaster.

    PubMed

    Kannan, Nisha N; Vaze, Koustubh M; Sharma, Vijay Kumar

    2012-10-15

    Although circadian clocks are believed to have evolved under the action of periodic selection pressures (selection on phasing) present in the geophysical environment, there is very little rigorous and systematic empirical evidence to support this. In the present study, we examined the effect of selection for adult emergence in a narrow window of time on the circadian rhythms of fruit flies Drosophila melanogaster. Selection was imposed in every generation by choosing flies that emerged during a 1 h window of time close to the emergence peak of baseline/control flies under 12 h:12 h light:dark cycles. To study the effect of selection on circadian clocks we estimated several quantifiable features that reflect inter- and intra-individual variance in adult emergence and locomotor activity rhythms. The results showed that with increasing generations, incidence of adult emergence and activity of adult flies during the 1 h selection window increased gradually in the selected populations. Flies from the selected populations were more homogenous in their clock period, were more coherent in their phase of entrainment, and displayed enhanced accuracy and precision in their emergence and activity rhythms compared with controls. These results thus suggest that circadian clocks in D. melanogaster evolve enhanced accuracy and precision when subjected to selection for emergence in a narrow window of time.

  10. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.

    2004-01-01

    Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at http://croc.gsfc.nasa.gov/shadoz. In an analysis of ozonesonde imprecision within the SHADOZ dataset [Thompson et al., JGR, 108, 8238, 2003], we pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecision and accuracy in the SHADOZ dataset are examined here in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSIE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing and instrument type (manufacturer).

  11. TanDEM-X IDEM precision and accuracy assessment based on a large assembly of differential GNSS measurements in Kruger National Park, South Africa

    NASA Astrophysics Data System (ADS)

    Baade, J.; Schmullius, C.

    2016-09-01

    High resolution Digital Elevation Models (DEM) represent fundamental data for a wide range of Earth surface process studies. Over the past years, the German TanDEM-X mission acquired data for a new, truly global Digital Elevation Model with unprecedented geometric resolution, precision and accuracy. First TanDEM Intermediate Digital Elevation Models (i.e. IDEM) with a geometric resolution from 0.4 to 3 arcsec were made available for scientific purposes in November 2014. This includes four 1° × 1° tiles covering the Kruger National Park in South Africa. Here, we document the results of a local scale IDEM height accuracy validation exercise utilizing over 10,000 RTK-GNSS-based ground survey points from fourteen sites characterized by mainly pristine Savanna vegetation. The vertical precision of the ground checkpoints is 0.02 m (1σ). Selected precursor data sets (SRTMGL1, SRTM41, ASTER-GDEM2) are included in the analysis to facilitate the comparison. Although IDEM represents an intermediate product on the way to the new global TanDEM-X DEM, expected to be released in late 2016, it allows a first insight into the properties of the forthcoming product. Remarkably, the TanDEM-X tiles include a number of auxiliary files providing detailed information pertinent to a user-based quality assessment. We present examples for the utilization of this information in the framework of a local scale study, including the identification of height readings contaminated by water. Furthermore, this study provides evidence for the high precision and accuracy of IDEM height readings and the sensitivity to canopy cover. For open terrain, the 0.4 arcsec resolution edition (IDEM04) yields an average bias of 0.20 ± 0.05 m (95% confidence interval, CI95), an RMSE = 1.03 m and an absolute vertical height error (LE90) of 1.5 [1.4, 1.7] m (CI95). The corresponding values for the lower resolution IDEM editions are about the same and provide evidence for the high quality of the IDEM products
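The accuracy figures quoted (bias, RMSE, LE90) follow from the DEM-minus-GNSS height differences at the checkpoints; a sketch under the usual definitions, not the authors' processing chain:

```python
import numpy as np

def dem_accuracy(dem_heights, gnss_heights):
    """Vertical accuracy metrics against GNSS checkpoints: mean bias,
    RMSE, and LE90 (the 90th percentile of the absolute height error)."""
    d = np.asarray(dem_heights, float) - np.asarray(gnss_heights, float)
    return {
        "bias": float(d.mean()),
        "rmse": float(np.sqrt((d ** 2).mean())),
        "le90": float(np.percentile(np.abs(d), 90)),
    }
```

LE90 is reported alongside RMSE because it is robust to a few large outliers (e.g. water-contaminated readings), whereas RMSE is dominated by them.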

  12. The science of and advanced technology for cost-effective manufacture of high precision engineering products. Volume 4. Thermal effects on the accuracy of numerically controlled machine tool

    NASA Astrophysics Data System (ADS)

    Venugopal, R.; Barash, M. M.; Liu, C. R.

    1985-10-01

    Thermal effects on the accuracy of numerically controlled machine tools are especially important in the context of unmanned manufacture or under conditions of precision metal cutting. Removal of the operator from direct control of the metal cutting process has created problems in terms of maintaining accuracy. The objective of this research is to study thermal effects on the accuracy of numerically controlled machine tools. The initial part of the report is concerned with the analysis of a hypothetical machine, whose thermal characteristics are studied. Numerical methods for evaluating the errors exhibited by the slides of the machine are proposed, and the possibility of predicting thermally induced errors by the use of regression equations is investigated. A method for computing the workspace error is also presented. The final part is concerned with the actual measurement of errors on a modern CNC machining center, with thermal influences on the errors as the main objective of the experimental work. Thermal influences on the errors of machine tools are predictable, and techniques for determining thermal effects on machine tools at the design stage are also presented.

  13. SU-E-J-147: Monte Carlo Study of the Precision and Accuracy of Proton CT Reconstructed Relative Stopping Power Maps

    SciTech Connect

    Dedes, G; Asano, Y; Parodi, K; Arbor, N; Dauvergne, D; Testa, E; Letang, J; Rit, S

    2015-06-15

    Purpose: The quantification of the intrinsic performance of proton computed tomography (pCT) as a modality for treatment planning in proton therapy. The performance of an ideal pCT scanner is studied as a function of various parameters. Methods: Using GATE/Geant4, we simulated an ideal pCT scanner and scans of several cylindrical phantoms with various tissue equivalent inserts of different sizes. Insert materials were selected in order to be of clinical relevance. Tomographic images were reconstructed using a filtered backprojection algorithm taking into account the scattering of protons in the phantom. To quantify the performance of the ideal pCT scanner, we study the precision and the accuracy with respect to the theoretical relative stopping power ratio (RSP) values for different beam energies, imaging doses, insert sizes and detector positions. The planning range uncertainty resulting from the reconstructed RSP is also assessed by comparison with the range of the protons in the analytically simulated phantoms. Results: The results indicate that pCT can intrinsically achieve RSP resolution below 1%, for most examined tissues at beam energies below 300 MeV and for imaging doses around 1 mGy. An RSP map accuracy of better than 0.5% is observed for most tissue types within the studied dose range (0.2–1.5 mGy). Finally, the uncertainty in the proton range due to the accuracy of the reconstructed RSP map is well below 1%. Conclusion: This work explores the intrinsic performance of pCT as an imaging modality for proton treatment planning. The obtained results show that under ideal conditions, 3D RSP maps can be reconstructed with an accuracy better than 1%. Hence, pCT is a promising candidate for reducing the range uncertainties introduced by the use of X-ray CT along with a semiempirical calibration to RSP. Supported by the DFG Cluster of Excellence Munich-Centre for Advanced Photonics (MAP)

  14. Measuring the bias, precision, accuracy, and validity of self-reported height and weight in assessing overweight and obesity status among adolescents using a surveillance system

    PubMed Central

    2015-01-01

    Background: Evidence regarding bias, precision, and accuracy in adolescent self-reported height and weight across demographic subpopulations is lacking. The bias, precision, and accuracy of adolescent self-reported height and weight across subpopulations were examined using a large, diverse and representative sample of adolescents. A second objective was to develop correction equations for self-reported height and weight to provide more accurate estimates of body mass index (BMI) and weight status. Methods: A total of 24,221 students from 8th and 11th grade in Texas participated in the School Physical Activity and Nutrition (SPAN) surveillance system in years 2000–2002 and 2004–2005. To assess bias, the differences between the self-reported and objective measures for height and weight were estimated. To assess precision and accuracy, Lin's concordance correlation coefficient was used. BMI was estimated for self-reported and objective measures. The prevalence of students' weight status was estimated using self-reported and objective measures; absolute (bias) and relative error (relative bias) were assessed subsequently. Correction equations for sex and race/ethnicity subpopulations were developed to estimate objective measures of height, weight and BMI from self-reported measures using weighted linear regression. Sensitivity, specificity and positive predictive values of weight status classification using self-reported measures and correction equations were assessed by sex and grade. Results: Students in 8th and 11th grade overestimated their height from 0.68 cm (White girls) to 2.02 cm (African-American boys), and underestimated their weight from 0.4 kg (Hispanic girls) to 0.98 kg (African-American girls). The differences in self-reported versus objectively-measured height and weight resulted in underestimation of BMI ranging from -0.23 kg/m² (White boys) to -0.7 kg/m² (African-American girls). The sensitivity of self-reported measures to classify weight
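Lin's concordance correlation coefficient used here measures agreement rather than mere correlation: it penalizes both location and scale differences between self-reported and objectively measured values. A minimal sketch of the standard formula:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired
    measurements x (e.g. self-reported) and y (e.g. objective):
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()       # population covariance
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)
```

Unlike Pearson's r, a systematic offset (every student over-reporting height by 2 cm) lowers the CCC even when the two measures are perfectly correlated.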

  15. An experimental analysis of accuracy and precision of a high-speed strain-gage system based on the direct-resistance method

    NASA Astrophysics Data System (ADS)

    Cappa, P.; del Prete, Z.

    1992-03-01

    An experimental study on the relative merits of measuring strain-gage resistance directly with a high-speed digital acquisition system, rather than using a conventional Wheatstone bridge, is carried out. Strain gages with nominal resistances of 120 Ω and 1 kΩ were simulated with precision resistors, and the output signals were acquired over periods of 48 and 144 hours; furthermore, the effects of statistical filtering on metrological performance were evaluated. The results show that the statistical filtering yields a considerable improvement in the quality of the strain-gage-resistance readings. On the other hand, such a procedure inevitably causes a loss of performance with regard to the acquisition rate, and therefore to the dynamic data-collecting capabilities. In any case, the intrinsic resolution of the 12-bit A/D converter utilized in the present experimental analysis limits measurement accuracy to the range of hundreds of μm/m.

  16. In situ sulfur isotope analysis of sulfide minerals by SIMS: Precision and accuracy, with application to thermometry of ~3.5Ga Pilbara cherts

    USGS Publications Warehouse

    Kozdon, R.; Kita, N.T.; Huberty, J.M.; Fournelle, J.H.; Johnson, C.A.; Valley, J.W.

    2010-01-01

    Secondary ion mass spectrometry (SIMS) measurement of sulfur isotope ratios is a potentially powerful technique for in situ studies in many areas of Earth and planetary science. Tests were performed to evaluate the accuracy and precision of sulfur isotope analysis by SIMS in a set of seven well-characterized, isotopically homogeneous natural sulfide standards. The spot-to-spot and grain-to-grain precision for δ34S is ±0.3‰ for chalcopyrite and pyrrhotite, and ±0.2‰ for pyrite (2SD) using a 1.6 nA primary beam that was focused to 10 µm diameter with a Gaussian-beam density distribution. Likewise, multiple δ34S measurements within single grains of sphalerite are within ±0.3‰. However, between individual sphalerite grains, δ34S varies by up to 3.4‰ and the grain-to-grain precision is poor (±1.7‰, n = 20). Measured values of δ34S correspond with analysis pit microstructures, ranging from smooth surfaces for grains with high δ34S values to pronounced ripples and terraces in analysis pits from grains featuring low δ34S values. Electron backscatter diffraction (EBSD) shows that individual sphalerite grains are single crystals, whereas crystal orientation varies from grain to grain. The 3.4‰ variation in measured δ34S between individual grains of sphalerite is attributed to changes in instrumental bias caused by different crystal orientations with respect to the incident primary Cs+ beam. High δ34S values in sphalerite correlate with orientations in which the Cs+ beam is parallel to the set of directions from [111] to [110], which are preferred directions for channeling and focusing in diamond-centered cubic crystals. Crystal orientation effects on instrumental bias were further detected in galena. However, as a result of the perfect cleavage along {100}, crushed chips of galena are typically cube-shaped and likely to be preferentially oriented; thus crystal orientation effects on instrumental bias may be obscured. Tests were made to improve the analytical

  17. Improving Precision and Accuracy of Isotope Ratios from Short Transient Laser Ablation-Multicollector-Inductively Coupled Plasma Mass Spectrometry Signals: Application to Micrometer-Size Uranium Particles.

    PubMed

    Claverie, Fanny; Hubert, Amélie; Berail, Sylvain; Donard, Ariane; Pointurier, Fabien; Pécheyran, Christophe

    2016-04-19

    The isotope drift encountered on short transient signals measured by multicollector inductively coupled plasma mass spectrometry (MC-ICPMS) is related to differences in detector time responses. Faraday to Faraday and Faraday to ion counter time lags were determined and corrected using VBA data processing based on the synchronization of the isotope signals. The coefficient of determination of the linear fit between the two isotopes was selected as the best criterion to obtain accurate detector time lags. The procedure was applied to the analysis by laser ablation-MC-ICPMS of micrometer-sized uranium particles (1–3.5 μm). Linear regression slope (LRS) (one isotope plotted over the other), point-by-point, and integration methods were tested to calculate the (235)U/(238)U and (234)U/(238)U ratios. Relative internal precisions of 0.86 to 1.7% and 1.2 to 2.4% were obtained for (235)U/(238)U and (234)U/(238)U, respectively, using LRS calculation, time lag, and mass bias corrections. A relative external precision of 2.1% was obtained for (235)U/(238)U ratios with good accuracy (relative difference with respect to the reference value below 1%). PMID:27031645
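The LRS method described above regresses one isotope's transient signal against the other's, so the ratio is estimated from the fitted slope rather than from point-by-point division, which is unstable at the low-intensity edges of a short transient. A sketch with a synthetic transient (the intensity values are invented; 0.00725 is approximately the natural 235U/238U abundance ratio):

```python
import numpy as np

def ratio_lrs(minor, major):
    """Linear-regression-slope (LRS) ratio estimate: fit the
    minor-isotope intensity against the major-isotope intensity over
    the transient; the fitted slope is the isotope ratio."""
    slope, _intercept = np.polyfit(np.asarray(major, float),
                                   np.asarray(minor, float), 1)
    return slope

# Synthetic ablation peak (counts/s) with a known 235U/238U ratio
major = np.array([0.0, 2e5, 8e5, 1e6, 6e5, 1e5, 0.0])
minor = 0.00725 * major
print(ratio_lrs(minor, major))
```

In practice the detector time-lag correction must be applied before the fit; an uncorrected lag decorrelates the two signals and degrades the coefficient of determination used as the alignment criterion.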

  18. An in-depth evaluation of accuracy and precision in Hg isotopic analysis via pneumatic nebulization and cold vapor generation multi-collector ICP-mass spectrometry.

    PubMed

    Rua-Ibarz, Ana; Bolea-Fernandez, Eduardo; Vanhaecke, Frank

    2016-01-01

    Mercury (Hg) isotopic analysis via multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) can provide relevant biogeochemical information by revealing sources, pathways, and sinks of this highly toxic metal. In this work, the capabilities and limitations of two different sample introduction systems, based on pneumatic nebulization (PN) and cold vapor generation (CVG), respectively, were evaluated in the context of Hg isotopic analysis via MC-ICP-MS. The effect of (i) instrument settings and acquisition parameters, (ii) concentration of the analyte element (Hg) and the internal standard (Tl), used for mass discrimination correction purposes, and (iii) different mass bias correction approaches on the accuracy and precision of Hg isotope ratio results was evaluated. The extent and stability of mass bias were assessed in a long-term study (18 months, n = 250), demonstrating a precision ≤0.006% relative standard deviation (RSD). CVG-MC-ICP-MS showed an approximately 20-fold enhancement in Hg signal intensity compared with PN-MC-ICP-MS. For CVG-MC-ICP-MS, the mass bias induced by instrumental mass discrimination was accurately corrected for by using either external correction in a sample-standard bracketing approach (SSB) or double correction, consisting of the use of Tl as internal standard in a revised version of the Russell law (Baxter approach), followed by SSB. Concomitant matrix elements did not affect CVG-ICP-MS results. Neither with PN nor with CVG was any evidence for mass-independent discrimination effects in the instrument observed within the experimental precision obtained. CVG-MC-ICP-MS was finally used for Hg isotopic analysis of reference materials (RMs) of relevant environmental origin. The isotopic composition of Hg in RMs of marine biological origin testified to mass-independent fractionation affecting the odd-numbered Hg isotopes. While older RMs were used for validation purposes, novel Hg isotopic data are provided for the

  19. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    NASA Astrophysics Data System (ADS)

    He, Bin; Frey, Eric C.

    2010-06-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were

  20. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods.

    PubMed

    He, Bin; Frey, Eric C

    2010-06-21

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is the accuracy of, and variability in, the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed (111)In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were
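
    The three VOI misdefinition modes described above (random perturbation, erosion, dilation) can be sketched as a toy voxel-grid routine. This is an illustrative reconstruction, not the authors' implementation; the function name, the 26-neighbour candidate set, and the centroid-based notion of "outward" are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_control_points(points, bias="none", rng=rng):
    """Randomly move each VOI control point to a nearest or next-nearest
    voxel, optionally with a directional bias.

    points : (N, 3) voxel coordinates of the VOI surface
    bias   : "none" -> random perturbation, "in" -> erosion,
             "out" -> dilation (surface pushed along its outward direction)
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    # candidate offsets: the 26 nearest/next-nearest voxel neighbours
    offsets = np.array([(i, j, k)
                        for i in (-1, 0, 1)
                        for j in (-1, 0, 1)
                        for k in (-1, 0, 1)
                        if (i, j, k) != (0, 0, 0)])
    moved = []
    for p in points:
        step = offsets[rng.integers(len(offsets))]
        if bias != "none":
            outward = p - centroid
            # flip the step if it points the wrong way for the chosen bias
            sign = np.dot(step, outward)
            if (bias == "out" and sign < 0) or (bias == "in" and sign > 0):
                step = -step
        moved.append(p + step)
    return np.array(moved)
```

    Applying the biased variants to the same surface shrinks ("in") or grows ("out") the VOI on average, while `bias="none"` leaves its expected volume unchanged.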

  1. Guidelines for Dual Energy X-Ray Absorptiometry Analysis of Trabecular Bone-Rich Regions in Mice: Improved Precision, Accuracy, and Sensitivity for Assessing Longitudinal Bone Changes.

    PubMed

    Shi, Jiayu; Lee, Soonchul; Uyeda, Michael; Tanjaya, Justine; Kim, Jong Kil; Pan, Hsin Chuan; Reese, Patricia; Stodieck, Louis; Lin, Andy; Ting, Kang; Kwak, Jin Hee; Soo, Chia

    2016-05-01

    Trabecular bone is frequently studied in osteoporosis research because changes in trabecular bone are the most common cause of osteoporotic fractures. Dual energy X-ray absorptiometry (DXA) analysis specific to trabecular bone-rich regions is crucial to longitudinal osteoporosis research. The purpose of this study is to define a novel method for accurately analyzing trabecular bone-rich regions in mice via DXA. This method will be utilized to analyze scans obtained from the International Space Station in an upcoming study of microgravity-induced bone loss. Thirty 12-week-old BALB/c mice were studied. The novel method was developed by preanalyzing trabecular bone-rich sites in the distal femur, proximal tibia, and lumbar vertebrae via high-resolution X-ray imaging followed by DXA and micro-computed tomography (micro-CT) analyses. The key DXA steps described by the novel method were (1) proper mouse positioning, (2) region of interest (ROI) sizing, and (3) ROI positioning. The precision of the new method was assessed by reliability tests and a 14-week longitudinal study. The bone mineral content (BMC) data from DXA was then compared to the BMC data from micro-CT to assess accuracy. Bone mineral density (BMD) intra-class correlation coefficients for the new method ranged from 0.743 to 0.945, and Levene's test showed significantly lower variance in the data generated by the new method, together verifying its consistency. With the new method, a Bland-Altman plot displayed good agreement between DXA BMC and micro-CT BMC for all sites, and the two were strongly correlated at the distal femur and proximal tibia (r=0.846, p<0.01; r=0.879, p<0.01, respectively). The results suggest that the novel method for site-specific analysis of trabecular bone-rich regions in mice via DXA yields more precise, accurate, and repeatable BMD measurements than the conventional method.
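
    The Bland-Altman agreement analysis reported above (DXA BMC versus micro-CT BMC) reduces to a mean bias and 95% limits of agreement on the paired differences. A minimal generic sketch, not code from the study; `bland_altman` is a hypothetical helper:

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and approximate 95% limits of agreement between two
    measurement methods applied to the same samples."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample standard deviation of differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

    Points falling within the limits of agreement on the resulting plot indicate that the two methods agree to within the expected measurement scatter.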

  2. Guidelines for Dual Energy X-Ray Absorptiometry Analysis of Trabecular Bone-Rich Regions in Mice: Improved Precision, Accuracy, and Sensitivity for Assessing Longitudinal Bone Changes.

    PubMed

    Shi, Jiayu; Lee, Soonchul; Uyeda, Michael; Tanjaya, Justine; Kim, Jong Kil; Pan, Hsin Chuan; Reese, Patricia; Stodieck, Louis; Lin, Andy; Ting, Kang; Kwak, Jin Hee; Soo, Chia

    2016-05-01

    Trabecular bone is frequently studied in osteoporosis research because changes in trabecular bone are the most common cause of osteoporotic fractures. Dual energy X-ray absorptiometry (DXA) analysis specific to trabecular bone-rich regions is crucial to longitudinal osteoporosis research. The purpose of this study is to define a novel method for accurately analyzing trabecular bone-rich regions in mice via DXA. This method will be utilized to analyze scans obtained from the International Space Station in an upcoming study of microgravity-induced bone loss. Thirty 12-week-old BALB/c mice were studied. The novel method was developed by preanalyzing trabecular bone-rich sites in the distal femur, proximal tibia, and lumbar vertebrae via high-resolution X-ray imaging followed by DXA and micro-computed tomography (micro-CT) analyses. The key DXA steps described by the novel method were (1) proper mouse positioning, (2) region of interest (ROI) sizing, and (3) ROI positioning. The precision of the new method was assessed by reliability tests and a 14-week longitudinal study. The bone mineral content (BMC) data from DXA was then compared to the BMC data from micro-CT to assess accuracy. Bone mineral density (BMD) intra-class correlation coefficients for the new method ranged from 0.743 to 0.945, and Levene's test showed significantly lower variance in the data generated by the new method, together verifying its consistency. With the new method, a Bland-Altman plot displayed good agreement between DXA BMC and micro-CT BMC for all sites, and the two were strongly correlated at the distal femur and proximal tibia (r=0.846, p<0.01; r=0.879, p<0.01, respectively). The results suggest that the novel method for site-specific analysis of trabecular bone-rich regions in mice via DXA yields more precise, accurate, and repeatable BMD measurements than the conventional method. PMID:26956416

  3. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    SciTech Connect

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-11-15

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landauer, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4% ± 0.6%, range = 0.6%-2.7% for OSL and 0.08% ± 0.06%, range = 0.02%-0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile.
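
    The RMS percent difference statistic quoted in the results can be reproduced in a few lines. This is a generic sketch of the statistic, not the authors' analysis code:

```python
import numpy as np

def rms_percent_difference(measured, reference):
    """Root-mean-square of the percent differences between paired
    dosimeter readings and reference ion-chamber values."""
    measured = np.asarray(measured, float)
    reference = np.asarray(reference, float)
    pct = 100.0 * (measured - reference) / reference
    return float(np.sqrt(np.mean(pct ** 2)))
```

    Because the differences are squared before averaging, consistent under- and over-estimates of the same magnitude yield the same RMS figure; the sign information (e.g. OSL underestimating CTDI in 21/28 cases) has to be reported separately.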

  4. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    PubMed Central

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-01-01

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landaur, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4% ± 0.6%, range = 0.6%–2.7% for OSL and 0.08% ± 0.06%, range = 0.02%–0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile. PMID:23127052

  5. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y.-L.; Szidat, S.; Czimczik, C. I.

    2015-09-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91 % of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our setup, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our setup were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively

  6. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y. L.; Szidat, S.; Czimczik, C. I.

    2015-04-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91% of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our set-up, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our set-up were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively

  7. Detecting declines in the abundance of a bull trout (Salvelinus confluentus) population: Understanding the accuracy, precision, and costs of our efforts

    USGS Publications Warehouse

    Al-Chokhachy, R.; Budy, P.; Conner, M.

    2009-01-01

    Using empirical field data for bull trout (Salvelinus confluentus), we evaluated the trade-off between power and sampling effort-cost using Monte Carlo simulations of commonly collected mark-recapture-resight and count data, and we estimated the power to detect changes in abundance across different time intervals. We also evaluated the effects of monitoring different components of a population and stratification methods on the precision of each method. Our results illustrate substantial variability in the relative precision, cost, and information gained from each approach. While grouping estimates by age or stage class substantially increased the precision of estimates, spatial stratification of sampling units resulted in limited increases in precision. Although mark-resight methods allowed for estimates of abundance versus indices of abundance, our results suggest snorkel surveys may be a more affordable monitoring approach across large spatial scales. Detecting a 25% decline in abundance after 5 years was not possible, regardless of technique (power = 0.80), without high sampling effort (48% of study site). Detecting a 25% decline was possible after 15 years, but still required high sampling efforts. Our results suggest detecting moderate changes in abundance of freshwater salmonids requires considerable resource and temporal commitments and highlight the difficulties of using abundance measures for monitoring bull trout populations.
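
    A Monte Carlo power analysis of the kind described above (can a 25% decline be detected after 5 versus 15 years?) might be sketched as follows. The log-linear trend test, the lognormal observation-error model, and all parameter values are illustrative assumptions, not the authors' simulation design:

```python
import numpy as np

rng = np.random.default_rng(1)

def power_to_detect_decline(n0, decline, years, cv, nsim=2000, alpha_z=-1.645):
    """Monte Carlo power of a log-linear trend test to detect a total
    fractional `decline` (e.g. 0.25) in abundance over `years`, given
    lognormal observation error with coefficient of variation `cv`."""
    t = np.arange(years + 1)
    slope = np.log(1.0 - decline) / years           # per-year log decline
    mu = np.log(n0) + slope * t
    sigma = np.sqrt(np.log(1.0 + cv ** 2))          # lognormal sd from CV
    hits = 0
    for _ in range(nsim):
        y = mu + rng.normal(0.0, sigma, size=t.size)
        b, a = np.polyfit(t, y, 1)                  # fitted slope, intercept
        resid = y - (a + b * t)
        se = np.sqrt(resid.var(ddof=2) / ((t - t.mean()) ** 2).sum())
        # one-sided test for a negative trend, normal approximation
        if b / se < alpha_z:
            hits += 1
    return hits / nsim
```

    Runs of this sketch reproduce the qualitative finding of the abstract: with short time series and high observation error the power to detect a 25% decline is far below the conventional 0.80 target, and power recovers only with longer monitoring or tighter (costlier) abundance estimates.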

  8. Improved precision and accuracy for high-performance liquid chromatography/Fourier transform ion cyclotron resonance mass spectrometric exact mass measurement of small molecules from the simultaneous and controlled introduction of internal calibrants via a second electrospray nebuliser.

    PubMed

    Herniman, Julie M; Bristow, Tony W T; O'Connor, Gavin; Jarvis, Jackie; Langley, G John

    2004-01-01

    The use of a second electrospray nebuliser has proved to be highly successful for exact mass measurement during high-performance liquid chromatography/Fourier transform ion cyclotron resonance mass spectrometry (HPLC/FTICRMS). Much improved accuracy and precision of mass measurement were afforded by the introduction of the internal calibration solution, thus overcoming space charge issues due to the lack of control over relative ion abundances of the species eluting from the HPLC column. Further, issues of suppression of ionisation, observed when using a T-piece method, are addressed; this simple system has significant benefits over other, more elaborate approaches, providing data that compare very favourably with them. The technique is robust, flexible and transferable and can be used in conjunction with HPLC, infusion or flow injection analysis (FIA) to provide constant internal calibration signals to allow routine, accurate and precise mass measurements to be recorded.

  9. Precision and accuracy of manual water-level measurements taken in the Yucca Mountain area, Nye County, Nevada, 1988--1990; Water-resources investigations report 93-4025

    SciTech Connect

    Boucher, M.S.

    1994-05-01

    Water-level measurements have been made in deep boreholes in the Yucca Mountain area, Nye County, Nevada, since 1983 in support of the US Department of Energy's Yucca Mountain Project, which is an evaluation of the area to determine its suitability as a potential storage area for high-level nuclear waste. Water-level measurements were taken either manually, using various water-level measuring equipment such as steel tapes, or they were taken continuously, using automated data recorders and pressure transducers. This report presents precision range and accuracy data established for manual water-level measurements taken in the Yucca Mountain area, 1988--90.

  10. Method and system using power modulation for maskless vapor deposition of spatially graded thin film and multilayer coatings with atomic-level precision and accuracy

    DOEpatents

    Montcalm, Claude; Folta, James Allen; Tan, Swie-In; Reiss, Ira

    2002-07-30

    A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom graded thickness) on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with time-varying flux distribution. In preferred embodiments, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.
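
    The patent's central idea, that the deposited thickness at each substrate position is the time integral of the power-modulated source flux seen as the substrate sweeps past, can be captured in a toy model. Function and parameter names are hypothetical, and the line-source kernel is an illustrative assumption:

```python
import numpy as np

def thickness_profile(positions, times, power, source_profile, speed):
    """Toy model of maskless graded deposition: a substrate swept at
    constant `speed` past a line source whose emitted flux is modulated
    in time via `power`. Thickness at each substrate position is the
    time integral of the power-scaled source flux seen at that position.

    positions      : substrate coordinates (same units as speed*time)
    times, power   : uniformly spaced sample times and source power at each
    source_profile : flux kernel as a function of position relative to source
    """
    dt = times[1] - times[0]
    thickness = np.zeros_like(positions, dtype=float)
    for t, p in zip(times, power):
        # position of the source relative to the moving substrate at time t
        thickness += p * source_profile(positions - speed * t) * dt
    return thickness
```

    Because the model is linear in `power`, choosing a power-versus-time recipe directly shapes the thickness gradient, which is the knob the patent exploits to produce uniform or custom-graded coatings without masks.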

  11. Reproducibility of sterilized rubber impressions.

    PubMed

    Abdelaziz, Khalid M; Hassan, Ahmed M; Hodges, J S

    2004-01-01

    Impressions, dentures and other dental appliances may be contaminated with oral micro-flora or other organisms of varying pathogenicity from patients' saliva and blood. Several approaches have been tried to control the transmission of infectious organisms via dental impressions; because disinfection is less effective and has several drawbacks for impression characterization, several sterilization methods have been suggested. This study evaluated the reproducibility of rubber impressions after sterilization by different methods. Dimensional accuracy and wettability of two rubber impression materials (vinyl polysiloxane and polyether) were evaluated after sterilization by each of three well-known methods (immersion in 2% glutaraldehyde for 10 h, autoclaving and microwave radiation). Non-sterilized impressions served as control. The effect of the tray material on impression accuracy and the effect of topical surfactant on the wettability were also evaluated. One-way ANOVA with Dunnett's method was used for statistical analysis. All sterilizing methods reduced the reproducibility of rubber impressions, although not always significantly. Microwave sterilization had a small effect on both accuracy and wettability. The greater effects of the other methods could usually be overcome by using ceramic trays and by spraying impression surfaces with surfactant before pouring the gypsum mix. There was one exception: glutaraldehyde still degraded dimensional accuracy even with ceramic trays and surfactant. We conclude that a) sterilization of rubber impressions made on acrylic trays was usually associated with a degree of dimensional change; b) microwave energy seems to be a suitable technique for sterilizing rubber impressions; c) topical surfactant application helped restore wettability of sterilized impressions. PMID:15798825
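
    The first stage of the statistical comparison described above, one-way ANOVA across sterilization groups (the authors additionally applied Dunnett's method to compare each group against the non-sterilized control), amounts to computing the classic F statistic. A generic sketch, not the study's code:

```python
import numpy as np

def one_way_anova_F(groups):
    """Classic one-way ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    groups = [np.asarray(g, float) for g in groups]
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, n = len(groups), all_x.size
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

    A large F relative to the F(k-1, n-k) distribution indicates that at least one sterilization method shifted the mean dimensional change; Dunnett's post hoc test then identifies which methods differ from the control while holding the family-wise error rate.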

  12. Accuracy and precision of reconstruction of complex refractive index in near-field single-distance propagation-based phase-contrast tomography

    NASA Astrophysics Data System (ADS)

    Gureyev, Timur; Mohammadi, Sara; Nesterets, Yakov; Dullin, Christian; Tromba, Giuliana

    2013-10-01

    We investigate the quantitative accuracy and noise sensitivity of reconstruction of the 3D distribution of complex refractive index, n(r)=1-δ(r)+iβ(r), in samples containing materials with different refractive indices using propagation-based phase-contrast computed tomography (PB-CT). Our present study is limited to the case of parallel-beam geometry with monochromatic synchrotron radiation, but can be readily extended to cone-beam CT and partially coherent polychromatic X-rays at least in the case of weakly absorbing samples. We demonstrate that, except for regions near the interfaces between distinct materials, the distribution of imaginary part of the refractive index, β(r), can be accurately reconstructed from a single projection image per view angle using phase retrieval based on the so-called homogeneous version of the Transport of Intensity equation (TIE-Hom) in combination with conventional CT reconstruction. In contrast, the accuracy of reconstruction of δ(r) depends strongly on the choice of the "regularization" parameter in TIE-Hom. We demonstrate by means of an instructive example that for some multi-material samples, a direct application of the TIE-Hom method in PB-CT produces qualitatively incorrect results for δ(r), which can be rectified either by collecting additional projection images at each view angle, or by utilising suitable a priori information about the sample. As a separate observation, we also show that, in agreement with previous reports, it is possible to significantly improve signal-to-noise ratio by increasing the sample-to-detector distance in combination with TIE-Hom phase retrieval in PB-CT compared to conventional ("contact") CT, with the maximum achievable gain of the order of 0.3δ /β. This can lead to improved image quality and/or reduction of the X-ray dose delivered to patients in medical imaging.

  13. Assessing the Accuracy and Precision of Inorganic Geochemical Data Produced through Flux Fusion and Acid Digestions: Multiple (60+) Comprehensive Analyses of BHVO-2 and the Development of Improved "Accepted" Values

    NASA Astrophysics Data System (ADS)

    Ireland, T. J.; Scudder, R.; Dunlea, A. G.; Anderson, C. H.; Murray, R. W.

    2014-12-01

    The use of geological standard reference materials (SRMs) to assess both the accuracy and the reproducibility of geochemical data is a vital consideration in determining the major and trace element abundances of geologic, oceanographic, and environmental samples. Calibration curves commonly are generated that are predicated on accurate analyses of these SRMs. As a means to verify the robustness of these calibration curves, a SRM can also be run as an unknown item (i.e., not included as a data point in the calibration). The experimentally derived composition of the SRM can thus be compared to the certified (or otherwise accepted) value. This comparison gives a direct measure of the accuracy of the method used. Similarly, if the same SRM is analyzed as an unknown over multiple analytical sessions, the external reproducibility of the method can be evaluated. Two common bulk digestion methods used in geochemical analysis are flux fusion and acid digestion. The flux fusion technique is excellent at ensuring complete digestion of a variety of sample types, is quick, and does not involve much use of hazardous acids. However, this technique is hampered by a high amount of total dissolved solids and may be accompanied by an increased analytical blank for certain trace elements. On the other hand, acid digestion (using a cocktail of concentrated nitric, hydrochloric and hydrofluoric acids) provides an exceptionally clean digestion with very low analytical blanks. However, this technique results in a loss of Si from the system and may compromise results for a few other elements (e.g., Ge). Our lab uses flux fusion for the determination of major elements and a few key trace elements by ICP-ES, while acid digestion is used for Ti and trace element analyses by ICP-MS. Here we present major and trace element data for BHVO-2, a frequently used SRM derived from a Hawaiian basalt, gathered over a period of over two years (30+ analyses by each technique). We show that both digestion
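
    The SRM-as-unknown check described above reduces to two numbers per element: percent accuracy (offset of the mean measured value from the accepted value) and external reproducibility (percent relative standard deviation across sessions). A minimal sketch with hypothetical values; this is not the authors' data-reduction code:

```python
import numpy as np

def accuracy_and_reproducibility(measured, accepted):
    """Percent accuracy against the accepted value and external
    reproducibility (percent RSD) for repeated SRM analyses."""
    measured = np.asarray(measured, float)
    accuracy_pct = 100.0 * (measured.mean() - accepted) / accepted
    rsd_pct = 100.0 * measured.std(ddof=1) / measured.mean()
    return accuracy_pct, rsd_pct
```

    Tabulating these two figures for each element across 30+ analyses per digestion method is what allows improved "accepted" values, and realistic uncertainties, to be proposed for a SRM like BHVO-2.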

  14. The accuracy and precision of a micro computer tomography volumetric measurement technique for the analysis of in-vitro tested total disc replacements.

    PubMed

    Vicars, R; Fisher, J; Hall, R M

    2009-04-01

    Total disc replacements (TDRs) in the spine have been clinically successful in the short term, but there are concerns over long-term failure due to wear, as seen in other joint replacements. Simulators have been used to investigate the wear of TDRs, but only gravimetric measurements have been used to assess material loss. Micro computer tomography (microCT) has been used for volumetric measurement of explanted components but has yet to be used for in-vitro studies, in which the wear is typically less than 20 mm3 per 10^6 cycles. The aim of this study was to compare microCT volume measurements with gravimetric measurements and to assess whether microCT can quantify wear volumes of in-vitro tested TDRs. microCT measurements of TDR polyethylene cores were undertaken and the results compared with gravimetric assessments. The effects of repositioning, integration time, and scan resolution were investigated. The best volume measurement resolution was found to be ±3 mm3, at least three orders of magnitude greater than those determined for gravimetric measurements. In conclusion, the microCT measurement technique is suitable for quantifying in-vitro TDR polyethylene wear volumes and can provide qualitative data (e.g. wear location), and also further quantitative data (e.g. height loss), assisting comparisons with in-vivo and ex-vivo data. It is best used alongside gravimetric measurements to maintain the high level of precision that these measurements provide.

  15. Leaf Vein Length per Unit Area Is Not Intrinsically Dependent on Image Magnification: Avoiding Measurement Artifacts for Accuracy and Precision

    PubMed Central

    Sack, Lawren; Caringella, Marissa; Scoffoni, Christine; Mason, Chase; Rawls, Michael; Markesteijn, Lars; Poorter, Lourens

    2014-01-01

    Leaf vein length per unit leaf area (VLA; also known as vein density) is an important determinant of water and sugar transport, photosynthetic function, and biomechanical support. A range of software methods are in use to visualize and measure vein systems in cleared leaf images; typically, users locate veins by digital tracing, but recent articles introduced software by which users can locate veins using thresholding (i.e. based on the contrasting of veins in the image). Based on the use of this method, a recent study argued against the existence of a fixed VLA value for a given leaf, proposing instead that VLA increases with the magnification of the image due to intrinsic properties of the vein system, and recommended that future measurements use a common, low image magnification for measurements. We tested these claims with new measurements using the software LEAFGUI in comparison with digital tracing using ImageJ software. We found that the apparent increase of VLA with magnification was an artifact of (1) using low-quality and low-magnification images and (2) errors in the algorithms of LEAFGUI. Given the use of images of sufficient magnification and quality, and analysis with error-free software, the VLA can be measured precisely and accurately. These findings point to important principles for improving the quantity and quality of important information gathered from leaf vein systems. PMID:25096977

  16. High-accuracy, high-precision, high-resolution, continuous monitoring of urban greenhouse gas emissions? Results to date from INFLUX

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.

    2015-12-01

    The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower- and aircraft-based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for both CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.

  17. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset 1998-2000 in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.

    2003-01-01

    A network of 12 southern hemisphere tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature and relative humidity measurements. The archived data are available at: http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station-to-station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced to 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing and instrument type (manufacturer) are taken into account.

  18. The effect of dilution and the use of a post-extraction nucleic acid purification column on the accuracy, precision, and inhibition of environmental DNA samples

    USGS Publications Warehouse

    Mckee, Anna M.; Spear, Stephen F.; Pierson, Todd W.

    2015-01-01

    Isolation of environmental DNA (eDNA) is an increasingly common method for detecting presence and assessing relative abundance of rare or elusive species in aquatic systems via the isolation of DNA from environmental samples and the amplification of species-specific sequences using quantitative PCR (qPCR). Co-extracted substances that inhibit qPCR can lead to inaccurate results and subsequent misinterpretation about a species’ status in the tested system. We tested three treatments (5-fold and 10-fold dilutions, and spin-column purification) for reducing qPCR inhibition from 21 partially and fully inhibited eDNA samples collected from coastal plain wetlands and mountain headwater streams in the southeastern USA. All treatments reduced the concentration of DNA in the samples. However, column purified samples retained the greatest sensitivity. For stream samples, all three treatments effectively reduced qPCR inhibition. However, for wetland samples, the 5-fold dilution was less effective than other treatments. Quantitative PCR results for column purified samples were more precise than the 5-fold and 10-fold dilutions by 2.2× and 3.7×, respectively. Column purified samples consistently underestimated qPCR-based DNA concentrations by approximately 25%, whereas the directional bias in qPCR-based DNA concentration estimates differed between stream and wetland samples for both dilution treatments. While the directional bias of qPCR-based DNA concentration estimates differed among treatments and locations, the magnitude of inaccuracy did not. Our results suggest that 10-fold dilution and column purification effectively reduce qPCR inhibition in mountain headwater stream and coastal plain wetland eDNA samples, and if applied to all samples in a study, column purification may provide the most accurate relative qPCR-based DNA concentration estimates while retaining the greatest assay sensitivity.

  19. Re-Os geochronology of the El Salvador porphyry Cu-Mo deposit, Chile: Tracking analytical improvements in accuracy and precision over the past decade

    NASA Astrophysics Data System (ADS)

    Zimmerman, Aaron; Stein, Holly J.; Morgan, John W.; Markey, Richard J.; Watanabe, Yasushi

    2014-04-01

    deposit geochronology. The timing and duration of mineralization from Re-Os dating of ore minerals is more precise than estimates from previously reported 40Ar/39Ar and K-Ar ages on alteration minerals. The Re-Os results suggest that the mineralization is temporally distinct from pre-mineral rhyolite porphyry (42.63 ± 0.28 Ma) and is immediately prior to or overlapping with post-mineral latite dike emplacement (41.16 ± 0.48 Ma). Based on the Re-Os and other geochronologic data, the Middle Eocene intrusive activity in the El Salvador district is divided into three pulses: (1) 44-42.5 Ma for weakly mineralized porphyry intrusions, (2) 41.8-41.2 Ma for intensely mineralized porphyry intrusions, and (3) ∼41 Ma for small latite dike intrusions without major porphyry stocks. The orientation of igneous dikes and porphyry stocks changed from NNE-SSW during the first pulse to WNW-ESE for the second and third pulses. This implies that the WNW-ESE striking stress changed from σ3 (minimum principal compressive stress) during the first pulse to σHmax (maximum principal compressional stress in a horizontal plane) during the second and third pulses. Therefore, the focus of intense porphyry Cu-Mo mineralization occurred during a transient geodynamic reconfiguration just before extinction of major intrusive activity in the region.

  20. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    PubMed

    Li, Yuanqing; Wang, Guangyi; Long, Jinyi; Yu, Zhuliang; Huang, Biao; Li, Xiaojian; Yu, Tianyou; Liang, Changhong; Li, Zheng; Sun, Pei

    2011-01-01

    One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and the between-class discriminability of brain patterns, facilitating neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
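The within-class reproducibility index described above can be sketched as a mean pairwise Pearson correlation between trial-wise voxel patterns of the same category. This is a minimal illustration with synthetic data, not the authors' exact pipeline; the array shapes and noise level are assumptions.

```python
import numpy as np

def reproducibility_index(patterns):
    """Mean pairwise Pearson correlation between trial patterns of the
    same semantic category (rows = trials, columns = voxels)."""
    n = patterns.shape[0]
    corrs = [np.corrcoef(patterns[i], patterns[j])[0, 1]
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(corrs))

rng = np.random.default_rng(0)
base = rng.normal(size=50)                      # shared category pattern
trials = base + 0.3 * rng.normal(size=(6, 50))  # 6 noisy repetitions
print(reproducibility_index(trials))            # high within-class similarity
```

A higher index means the category evokes a more stable pattern across repetitions, which is the quantity the study compares across stimulus conditions.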

  1. Acceptability, Precision and Accuracy of 3D Photonic Scanning for Measurement of Body Shape in a Multi-Ethnic Sample of Children Aged 5-11 Years: The SLIC Study

    PubMed Central

    Wells, Jonathan C. K.; Stocks, Janet; Bonner, Rachel; Raywood, Emma; Legg, Sarah; Lee, Simon; Treleaven, Philip; Lum, Sooky

    2015-01-01

    Background Information on body size and shape is used to interpret many aspects of physiology, including nutritional status, cardio-metabolic risk and lung function. Such data have traditionally been obtained through manual anthropometry, which becomes time-consuming when many measurements are required. 3D photonic scanning (3D-PS) of body surface topography represents an alternative digital technique, previously applied successfully in large studies of adults. The acceptability, precision and accuracy of 3D-PS in young children have not been assessed. Methods We attempted to obtain data on girth, width and depth of the chest and waist, and girth of the knee and calf, manually and by 3D-PS in a multi-ethnic sample of 1484 children aged 5–11 years. The rate of 3D-PS success, and reasons for failure, were documented. Precision and accuracy of 3D-PS were assessed relative to manual measurements using the methods of Bland and Altman. Results Manual measurements were successful in all cases. Although 97.4% of children agreed to undergo 3D-PS, successful scans were only obtained in 70.7% of these. Unsuccessful scans were primarily due to body movement, or inability of the software to extract shape outputs. The odds of scan failure, and the underlying reason, differed by age, size and ethnicity. 3D-PS measurements tended to be greater than those obtained manually (p<0.05); however, ranking consistency was high (r2>0.90 for most outcomes). Conclusions 3D-PS is acceptable in children aged ≥5 years, though with current hardware/software and body-movement artefacts approximately one third of scans may be unsuccessful. The technique had poorer technical success than manual measurements, and had poorer precision when the measurements were viable. Compared to manual measurements, 3D-PS showed modest average biases but acceptable limits of agreement for large surveys, and little evidence that bias varied substantially with size. Most of the issues we identified could be

  2. Color accuracy and reproducibility in whole slide imaging scanners

    NASA Astrophysics Data System (ADS)

    Shrestha, Prarthana; Hulsken, Bas

    2014-03-01

    In this paper, we propose a work-flow for color reproduction in whole slide imaging (WSI) scanners such that the colors in the scanned images match the actual slide colors and the inter-scanner variation is minimal. We describe a novel method of preparation and verification of the color phantom slide, consisting of a standard IT8-target transmissive film, which is used in color calibrating and profiling the WSI scanner. We explore several ICC-compliant techniques in color calibration/profiling and rendering intents for translating the scanner-specific colors to the standard display (sRGB) color-space. Based on the quality of color reproduction in histopathology tissue slides, we propose the matrix-based calibration/profiling and absolute colorimetric rendering approach. The main advantage of the proposed work-flow is that it is compliant with the ICC standard, applicable to color management systems on different platforms, and involves no external color measurement devices. We measure objective color performance using the CIE-DeltaE2000 metric, where DeltaE values below 1 are considered imperceptible. Our evaluation of 14 phantom slides, manufactured according to the proposed method, shows an average inter-slide color difference below 1 DeltaE. The proposed work-flow is implemented and evaluated in 35 Philips Ultra Fast Scanners (UFS). The results show that the average color difference between a scanner and the reference is 3.5 DeltaE, and among the scanners is 3.1 DeltaE. The improvement in color performance upon using the proposed method is apparent in the visual color quality of the tissue scans.
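The color-difference evaluation above rests on comparing measured and reference CIELAB values. As a minimal sketch, the simpler CIE76 formula (plain Euclidean distance in Lab space) is shown here in place of the paper's CIEDE2000 metric; the interpretation is the same, with values below about 1 considered imperceptible. The Lab patch values are hypothetical.

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space.
    (A simplified stand-in for the CIEDE2000 metric used in the paper.)"""
    return math.dist(lab1, lab2)

reference = (52.0, 28.5, -10.2)  # hypothetical IT8-target patch (L*, a*, b*)
scanned   = (52.4, 28.9, -10.6)  # hypothetical scanner reading of the patch
print(round(delta_e_76(reference, scanned), 2))  # 0.69 -> below 1, imperceptible
```

Averaging such per-patch differences over the whole IT8 target gives the per-scanner DeltaE figures reported in the abstract.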

  3. Precision volume measurement system.

    SciTech Connect

    Fischer, Erin E.; Shugard, Andrew D.

    2004-11-01

    A new precision volume measurement system based on a Kansas City Plant (KCP) design was built to support the volume measurement needs of the Gas Transfer Systems (GTS) department at Sandia National Labs (SNL) in California. An engineering study was undertaken to verify or refute KCP's claims of 0.5% accuracy. The study assesses the accuracy and precision of the system. The system uses the ideal gas law and precise pressure measurements (of low-pressure helium) in a temperature and computer controlled environment to ratio a known volume to an unknown volume.
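The ideal-gas ratio technique described above can be sketched in a few lines: helium at a measured pressure in a known reference volume is expanded isothermally into the evacuated unknown volume, and the pressure drop yields the volume ratio. The function name and readings below are illustrative, not part of the SNL/KCP system.

```python
def unknown_volume(v_known_cc, p_initial_kpa, p_final_kpa):
    """Isothermal expansion from a known volume into an evacuated
    unknown volume: P1 * Vk = P2 * (Vk + Vu), so
    Vu = Vk * (P1 / P2 - 1)."""
    return v_known_cc * (p_initial_kpa / p_final_kpa - 1.0)

# Hypothetical readings: 100 cc reference volume, pressure drops 50 -> 20 kPa
print(unknown_volume(100.0, 50.0, 20.0))  # 150.0 cc
```

In practice the 0.5% accuracy claim hinges on the precision of the pressure transducers and on tight temperature control, which is why the system operates in a temperature- and computer-controlled environment.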

  4. Reproducible research in palaeomagnetism

    NASA Astrophysics Data System (ADS)

    Lurcock, Pontus; Florindo, Fabio

    2015-04-01

    The reproducibility of research findings is attracting increasing attention across all scientific disciplines. In palaeomagnetism as elsewhere, computer-based analysis techniques are becoming more commonplace, complex, and diverse. Analyses can often be difficult to reproduce from scratch, both for the original researchers and for others seeking to build on the work. We present a palaeomagnetic plotting and analysis program designed to make reproducibility easier. Part of the problem is the divide between interactive and scripted (batch) analysis programs. An interactive desktop program with a graphical interface is a powerful tool for exploring data and iteratively refining analyses, but usually cannot operate without human interaction. This makes it impossible to re-run an analysis automatically, or to integrate it into a larger automated scientific workflow - for example, a script to generate figures and tables for a paper. In some cases the parameters of the analysis process itself are not saved explicitly, making it hard to repeat or improve the analysis even with human interaction. Conversely, non-interactive batch tools can be controlled by pre-written scripts and configuration files, allowing an analysis to be 'replayed' automatically from the raw data. However, this advantage comes at the expense of exploratory capability: iteratively improving an analysis entails a time-consuming cycle of editing scripts, running them, and viewing the output. Batch tools also tend to require more computer expertise from their users. PuffinPlot is a palaeomagnetic plotting and analysis program which aims to bridge this gap. First released in 2012, it offers both an interactive, user-friendly desktop interface and a batch scripting interface, both making use of the same core library of palaeomagnetic functions. We present new improvements to the program that help to integrate the interactive and batch approaches, allowing an analysis to be interactively explored and refined

  5. Magnetogastrography (MGG) Reproducibility Assessments

    NASA Astrophysics Data System (ADS)

    de la Roca-Chiapas, J. M.; Córdova, T.; Hernández, E.; Solorio, S.; Solís Ortiz, S.; Sosa, M.

    2006-09-01

    Seven healthy subjects underwent a magnetic pulse of 32 mT for 17 ms, seven times in 90 minutes. The procedure was repeated one and two weeks later. Assessments of the gastric emptying were carried out for each one of the measurements and a statistical analysis of ANOVA was performed for every group of data. The gastric emptying time was 19.22 ± 5 min. Reproducibility estimation was above 85%. Therefore, magnetogastrography seems to be an excellent technique to be implemented in routine clinical trials.

  6. Precise, reproducible nano-domain engineering in lithium niobate crystals

    SciTech Connect

    Boes, Andreas; Sivan, Vijay; Ren, Guanghui; Yudistira, Didit; Mitchell, Arnan; Mailis, Sakellaris; Soergel, Elisabeth

    2015-07-13

    We present a technique for domain engineering the surface of lithium niobate crystals with features as small as 100 nm. A film of chromium (Cr) is deposited on the lithium niobate surface and patterned using electron beam lithography and lift-off and then irradiated with a wide diameter beam of intense visible laser light. The regions patterned with chromium are domain inverted while the uncoated regions are not affected by the irradiation. With the ability to realize nanoscale surface domains, this technique could offer an avenue for fabrication of nano-photonic and phononic devices.

  7. Precise, reproducible nano-domain engineering in lithium niobate crystals

    NASA Astrophysics Data System (ADS)

    Boes, Andreas; Sivan, Vijay; Ren, Guanghui; Yudistira, Didit; Mailis, Sakellaris; Soergel, Elisabeth; Mitchell, Arnan

    2015-07-01

    We present a technique for domain engineering the surface of lithium niobate crystals with features as small as 100 nm. A film of chromium (Cr) is deposited on the lithium niobate surface and patterned using electron beam lithography and lift-off and then irradiated with a wide diameter beam of intense visible laser light. The regions patterned with chromium are domain inverted while the uncoated regions are not affected by the irradiation. With the ability to realize nanoscale surface domains, this technique could offer an avenue for fabrication of nano-photonic and phononic devices.

  8. Opening Reproducible Research

    NASA Astrophysics Data System (ADS)

    Nüst, Daniel; Konkol, Markus; Pebesma, Edzer; Kray, Christian; Klötgen, Stephanie; Schutzeichel, Marc; Lorenz, Jörg; Przibytzin, Holger; Kussmann, Dirk

    2016-04-01

    Open access is not only a form of publishing that makes research papers available to the general public free of charge; it also refers to a trend in science toward doing research more openly and transparently. When science transforms to open access we mean not only access to papers and to the research data being collected or generated, but also access to the data used and the procedures carried out in the research paper. Increasingly, scientific results are generated by numerical manipulation of data that were already collected, and may involve simulation experiments that are carried out entirely computationally. Reproducibility of research findings, the ability to repeat experimental procedures and confirm previously found results, is at the heart of the scientific method (Pebesma, Nüst and Bivand, 2012). As opposed to the collection of experimental data in labs or nature, computational experiments lend themselves very well to reproduction. Some of the reasons why scientists do not publish data and computational procedures that allow reproduction will be hard to change, e.g. privacy concerns in the data, or fear of embarrassment or of losing a competitive advantage. Other reasons, however, involve technical aspects, and include the lack of standard procedures to publish such information and the lack of benefits after publishing it. We aim to resolve these two technical aspects. We propose a system that supports the evolution of scientific publications from static papers into dynamic, executable research documents. The DFG-funded experimental project Opening Reproducible Research (ORR) aims for the main aspects of open access by improving the exchange of, facilitating productive access to, and simplifying reuse of research results that are published over the Internet. Central to the project is a new form for creating and providing research results, the executable research compendium (ERC), which not only enables third parties to

  9. Comparative Analysis of the Equivital EQ02 Lifemonitor with Holter Ambulatory ECG Device for Continuous Measurement of ECG, Heart Rate, and Heart Rate Variability: A Validation Study for Precision and Accuracy

    PubMed Central

    Akintola, Abimbola A.; van de Pol, Vera; Bimmel, Daniel; Maan, Arie C.; van Heemst, Diana

    2016-01-01

    Background: The Equivital (EQ02) is a multi-parameter telemetric device offering both real-time and/or retrospective, synchronized monitoring of ECG, HR, and HRV, respiration, activity, and temperature. Unlike the Holter, which is the gold standard for continuous ECG measurement, EQ02 continuously monitors ECG via electrodes interwoven in the textile of a wearable belt. Objective: To compare EQ02 with the Holter for continuous home measurement of ECG, heart rate (HR), and heart rate variability (HRV). Methods: Eighteen healthy participants wore, simultaneously for 24 h, the Holter and EQ02 monitors. Per participant, averaged HR and HRV per 5 min from the two devices were compared using Pearson correlation, paired T-test, and Bland-Altman analyses. Accuracy and precision metrics included mean absolute relative difference (MARD). Results: Artifact content of EQ02 data varied widely between (range 1.93–56.45%) and within (range 0.75–9.61%) participants. Comparing the EQ02 to the Holter, the Pearson correlations were 0.724, 0.955, and 0.997 for datasets containing all data, data with < 50% artifacts, and data with < 20% artifacts, respectively. For those same datasets, bias estimated by Bland-Altman analysis was −2.8, −1.0, and −0.8 beats per minute and 24 h MARD was 7.08, 3.01, and 1.5. After selecting a 3-h stretch of data containing 1.15% artifacts, Pearson correlation was 0.786 for HRV measured as standard deviation of NN intervals (SDNN). Conclusions: Although the EQ02 can accurately measure ECG and HRV, its accuracy and precision are highly dependent on artifact content. This is a limitation for clinical use in individual patients. However, the advantages of the EQ02 (ability to simultaneously monitor several physiologic parameters) may outweigh its disadvantages (higher artifact load) for research purposes and/or for home monitoring in larger groups of study participants. Further studies can be aimed
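The two agreement metrics used in this validation, MARD and Bland-Altman bias, are straightforward to compute from paired device readings. This is a minimal sketch with hypothetical 5-min heart-rate averages, not the study's data.

```python
import numpy as np

def mard_percent(reference, test):
    """Mean absolute relative difference between paired readings, in percent."""
    reference = np.asarray(reference, float)
    test = np.asarray(test, float)
    return float(np.mean(np.abs(test - reference) / reference) * 100)

def bland_altman_bias(reference, test):
    """Bland-Altman bias: mean of the paired differences (test - reference)."""
    return float(np.mean(np.asarray(test, float) - np.asarray(reference, float)))

holter = [60.0, 72.0, 80.0, 90.0]  # hypothetical 5-min mean HRs, gold standard (bpm)
eq02   = [58.0, 71.0, 79.0, 88.0]  # hypothetical EQ02 readings for the same windows
print(round(mard_percent(holter, eq02), 2))      # 2.05 (%)
print(round(bland_altman_bias(holter, eq02), 2)) # -1.5 (bpm)
```

A negative bias, as here and in the study, means the test device reads lower than the reference on average; MARD summarizes the typical relative error regardless of direction.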

  10. Using measurements of muscle color, pH, and electrical impedance to augment the current USDA beef quality grading standards and improve the accuracy and precision of sorting carcasses into palatability groups.

    PubMed

    Wulf, D M; Page, J K

    2000-10-01

    This research was conducted to determine whether objective measures of muscle color, muscle pH, and(or) electrical impedance are useful in segregating palatable beef from unpalatable beef, and to determine whether the current USDA quality grading standards for beef carcasses could be revised to improve their effectiveness at distinguishing palatable from unpalatable beef. One hundred beef carcasses were selected from packing plants in Texas, Illinois, and Ohio to represent the full range of muscle color observed in the U.S. beef carcass population. Steaks from these 100 carcasses were used to determine shear force on eight cooked beef muscles and taste panel ratings on three cooked beef muscles. It was discovered that the darkest-colored 20 to 25% of the beef carcasses sampled were less palatable and considerably less consistent than the other 75 to 80% sampled. Marbling score, by itself, explained 12% of the variation in beef palatability; hump height, by itself, explained 8% of the variation in beef palatability; measures of muscle color or pH, by themselves, explained 15 to 23% of the variation in beef palatability. When combined together, marbling score, hump height, and some measure of muscle color or pH explained 36 to 46% of the variation in beef palatability. Alternative quality grading systems were proposed to improve the accuracy and precision of sorting carcasses into palatability groups. The two proposed grading systems decreased palatability variation by 29% and 39%, respectively, within the Choice grade and decreased palatability variation by 37% and 12%, respectively, within the Select grade, when compared with current USDA standards. The percentage of unpalatable Choice carcasses was reduced from 14% under the current USDA grading standards to 4% and 1%, respectively, for the two proposed systems. The percentage of unpalatable Select carcasses was reduced from 36% under the current USDA standards to 7% and 29%, respectively, for the proposed systems

  11. Reproducing in cities.

    PubMed

    Mace, Ruth

    2008-02-01

    Reproducing in cities has always been costly, leading to lower fertility (that is, lower birth rates) in urban than in rural areas. Historically, although cities provided job opportunities, initially residents incurred the penalty of higher infant mortality, but as mortality rates fell at the end of the 19th century, European birth rates began to plummet. Fertility decline in Africa only started recently and has been dramatic in some cities. Here it is argued that both historical and evolutionary demographers are interpreting fertility declines across the globe in terms of the relative costs of child rearing, which increase to allow children to outcompete their peers. Now largely free from the fear of early death, postindustrial societies may create an environment that generates runaway parental investment, which will continue to drive fertility ever lower.

  12. Reproducing Actual Morphology of Planetary Lava Flows

    NASA Astrophysics Data System (ADS)

    Miyamoto, H.; Sasaki, S.

    1996-03-01

    Assuming that lava flows behave as non-isothermal laminar Bingham fluids, we developed a numerical code of lava flows. We take self-gravity effects and cooling mechanisms into account. The calculation method is a kind of cellular automaton using a reduced random space method, which can eliminate the mesh shape dependence. We can calculate large-scale lava flows precisely without numerical instability and reproduce the morphology of actual lava flows.

  13. Relative Accuracy Evaluation

    PubMed Central

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. And since the accuracy of a whole data set may be low while that of a useful part may be high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a metric nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which show the result's relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
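Evaluating a query result by precision and recall, as the abstract describes, amounts to comparing the returned tuples against the set of tuples known to be accurate. This is a generic sketch of that comparison, not the paper's framework; the tuple identifiers are hypothetical.

```python
def precision_recall(returned, relevant):
    """Precision and recall of a query result against the set of
    ground-truth accurate tuples."""
    returned, relevant = set(returned), set(relevant)
    hits = len(returned & relevant)
    precision = hits / len(returned) if returned else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical query: 4 tuples returned, 5 tuples truly accurate, 3 in common
p, r = precision_recall({"t1", "t2", "t3", "t4"}, {"t1", "t2", "t3", "t5", "t6"})
print(p, r)  # 0.75 0.6
```

High precision with low recall would indicate the query returns mostly accurate tuples but misses many, which is exactly the distinction the notion of relative accuracy is meant to capture.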

  14. Reproducible Experiment Platform

    NASA Astrophysics Data System (ADS)

    Likhomanenko, Tatiana; Rogozhnikov, Alex; Baranov, Alexander; Khairullin, Egor; Ustyuzhanin, Andrey

    2015-12-01

    Data analysis in fundamental sciences nowadays is an essential process that pushes the frontiers of our knowledge and leads to new discoveries. At the same time, the complexity of those analyses is increasing rapidly due to a) enormous volumes of datasets being analyzed, b) the variety of techniques and algorithms one has to check inside a single analysis, c) the distributed nature of research teams, which requires special communication media for knowledge and information exchange between individual researchers. There is a lot of resemblance between the techniques and problems arising in the areas of industrial information retrieval and particle physics. To address those problems we propose the Reproducible Experiment Platform (REP), a software infrastructure to support a collaborative ecosystem for computational science. It is a Python-based solution for research teams that allows running computational experiments on shared datasets, obtaining repeatable results, and making consistent comparisons of the obtained results. We present some key features of REP based on case studies which include trigger optimization and physics analysis studies at the LHCb experiment.

  15. Precision electron polarimetry

    SciTech Connect

    Chudakov, Eugene A.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at ~300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  16. Precision electron polarimetry

    NASA Astrophysics Data System (ADS)

    Chudakov, E.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  17. SU-E-P-54: Evaluation of the Accuracy and Precision of IGPS-O X-Ray Image-Guided Positioning System by Comparison with On-Board Imager Cone-Beam Computed Tomography

    SciTech Connect

    Zhang, D; Wang, W; Jiang, B; Fu, D

    2015-06-15

    Purpose: The purpose of this study is to assess the positioning accuracy and precision of the IGPS-O system, a novel radiographic kilo-voltage x-ray image-guided positioning system developed for clinical IGRT applications. Methods: The IGPS-O x-ray image-guided positioning system consists of two oblique sets of radiographic kilo-voltage x-ray projecting and imaging devices mounted on the floor and ceiling of the treatment room. The system determines the positioning error in the form of three translations and three rotations by registering two X-ray images acquired online against the planning CT image. An anthropomorphic head phantom and an anthropomorphic thorax phantom were used for this study. Each phantom was set up on the treatment table in the correct position and with various "planned" setup errors. Both the IGPS-O x-ray image-guided positioning system and the commercial On-board Imager Cone-beam Computed Tomography (OBI CBCT) were used to obtain the setup errors of the phantom. Differences between the results of the two image-guided positioning systems were computed and analyzed. Results: The setup errors measured by the IGPS-O x-ray image-guided positioning system and the OBI CBCT system showed a general agreement; the means and standard errors of the discrepancies between the two systems in the left-right, anterior-posterior, and superior-inferior directions were −0.13±0.09mm, 0.03±0.25mm, and 0.04±0.31mm, respectively. The maximum difference was only 0.51mm across all directions, and the angular discrepancy was 0.3±0.5° between the two systems. Conclusion: The spatial and angular discrepancies between the IGPS-O system and OBI CBCT for setup error correction were minimal. There is a general agreement between the two positioning systems. The IGPS-O x-ray image-guided positioning system can achieve as good accuracy as CBCT and can be used in clinical IGRT applications.

  18. Development of a novel articulator that reproduced jaw movement with six-degree-of-freedom.

    PubMed

    Nishigawa, Keisuke; Satsuma, Toyoko; Shigemoto, Shuji; Bando, Eiichi; Nakano, Masanori; Ishida, Osamu

    2007-06-01

    A novel robotic articulator that reproduces six-degree-of-freedom jaw movement was developed and tested. A precise six-axis micro-positioning stage was employed for this articulator. A high-resolution jaw-tracking device measured the functional jaw movement of the patient, and the six-axis micro-positioning stage reproduced the recorded jaw movement data. A full veneer crown restoration was fabricated with this articulator system. A working cast was mounted on the positioning stage of the articulator. An occlusal table with soft wax was attached to the cast tooth die, and the jaw movements were reproduced to create a functionally generated path on the occlusal table. The finished occlusal record was used to obtain the wax pattern for the crown. In this subject, no intra-oral occlusal adjustment was necessary when setting the finished full veneer crown. Since this articulator can precisely reproduce dynamic jaw motion during functional jaw movement, the system has the potential to improve the accuracy of prosthetic occlusion.

  19. Application of AFINCH as a Tool for Evaluating the Effects of Streamflow-Gaging-Network Size and Composition on the Accuracy and Precision of Streamflow Estimates at Ungaged Locations in the Southeast Lake Michigan Hydrologic Subregion

    USGS Publications Warehouse

    Koltun, G.F.; Holtschlag, David J.

    2010-01-01

    Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas and resulting flow values are accumulated to compute streamflows in stream reaches which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period which spanned water years 1971-2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages). Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the size
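    The random-subsampling design described above (30 random gage subsets at each of six censoring levels, drawn from 75 gages) can be sketched as follows; function and variable names are illustrative, not AFINCH's.

```python
import random

def censored_subsets(gage_ids, censor_fractions, n_subsets, seed=0):
    """For each censoring level, build n_subsets random gage subsets by
    removing approximately the given fraction of gages (a sketch of the
    bootstrap design described above)."""
    rng = random.Random(seed)
    sets = {}
    for frac in censor_fractions:
        keep = len(gage_ids) - round(frac * len(gage_ids))
        # Sampling without replacement gives a distinct random subset each time.
        sets[frac] = [rng.sample(gage_ids, keep) for _ in range(n_subsets)]
    return sets

gages = list(range(75))  # 75 streamflow gages, as in the study
subsets = censored_subsets(gages, [0.10, 0.20, 0.30, 0.40, 0.50, 0.75], 30)
```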

  20. The presentation of plastic surgery visual data from 1816 to 1916: The evolution of reproducible results.

    PubMed

    Freshwater, M Felix

    2016-09-01

    All scientific data should be presented with sufficient accuracy and precision so that they can be both analyzed properly and reproduced. Visual data are the foundation upon which plastic surgeons advance knowledge. We use visual data to achieve reproducible results by discerning details of procedures and differences between pre- and post-surgery images. This review highlights how the presentation of visual data evolved from 1816, when Joseph Carpue published his book on nasal reconstruction to 1916, when Captain Harold Gillies began to treat over 2000 casualties from the Battle of the Somme. It shows the frailties of human nature that led some authors such as Carl von Graefe, Joseph Pancoast and Thomas Mutter to record inaccurate methods or results that could not be reproduced, and what measures other authors such as Eduard Zeis, Johann Dieffenbach, and Gurdon Buck took to affirm the accuracy of their results. It shows how photography gradually supplanted illustration as a reference standard. Finally, it shows the efforts that some authors and originators took to authenticate and preserve their visual data in what can be considered the forerunners of clinical registries.

  1. Data Identifiers and Citations Enable Reproducible Science

    NASA Astrophysics Data System (ADS)

    Tilmes, C.

    2011-12-01

    Modern science often involves data processing with tremendous volumes of data. Keeping track of that data has been a growing challenge for data centers. Researchers who access and use that data don't always reference and cite their data sources adequately for consumers of their research to follow their methodology or reproduce their analyses or experiments. Recent research has led to recommendations for good identifiers and citations that can help address this problem. This paper will describe some of the best practices in data identification, reference, and citation. Using a simplified example scenario based on a long-term remote sensing satellite mission, it will explore issues in identifying dynamic data sets and the importance of good data citations for reproducibility. It will describe the difference between granule- and collection-level identifiers, using UUIDs and DOIs to illustrate some recommendations for developing identifiers and assigning them during data processing. As data processors create data products, the provenance of the input products and the precise steps that led to their creation are recorded and published for users of the data to see. As researchers access the data from an archive, they can use the provenance to help understand the genesis of the data, which could affect their usage of the data. By citing the data when publishing their research, others can retrieve the precise data used in their research and reproduce the analyses and experiments to confirm the results. Describing the experiment in sufficient detail to reproduce the research enforces a formal approach that lends credibility to the results and, ultimately, to the policies of decision makers depending on that research.

  2. Interference examiner for certification of precision autocollimators

    SciTech Connect

    Martynov, V.T.; Brda, V.A.; Likhttsinder, B.A.; Shestopalov, Y.N.

    1985-05-01

    Regular polygonal prisms together with an autocollimator are usually employed as standard means in the study and certification of angle-measuring instruments and apparatus; the prisms, in turn, must be certified with high accuracy. The interference examiner employs an optical system similar to that of the examiner in the new State primary standard of the plane-angle unit. The examiner is based on a twin-wave Michelson interferometer. Instead of a separate scale, the interference examiner uses two vertical marks applied directly to the end reflectors. A comparison was made of the rotation angles of the mirror in the range of 0-1° as reproduced by the examiner, α_T (trigonometric method), and as measured by a UDP-025 precision angle-measuring instrument, α_g (goniometric method). Processing of the obtained measurement results showed that the difference between α_T and α_g does not exceed the measurement error of the UDP-025.

  3. On the validity of 3D polymer gel dosimetry: I. Reproducibility study

    NASA Astrophysics Data System (ADS)

    Vandecasteele, Jan; De Deene, Yves

    2013-01-01

    The intra- and inter-batch accuracy and precision of MRI polymer gel dosimeters (polyacrylamide gelatin gel fabricated at atmospheric conditions) are assessed in full 3D. In the intra-batch study, eight spherical flasks were filled with the same polymer gel along with a set of test tubes that served as calibration phantoms. In the inter-batch study, the eight spherical flasks were filled with different batches of gel. For each spherical phantom, a separate set of calibration phantoms was used. The spherical phantoms were irradiated using a three-field coplanar beam configuration in a very reproducible manner. The calibration phantoms were irradiated to known doses to obtain a dose-R2 calibration plot, which was applied to the corresponding R2 maps of all spherical phantoms on an individual basis. The intra-batch study showed high dosimetric precision (3.1%) notwithstanding poor accuracy (mean dose discrepancies up to 13.0%). In the inter-batch study, a similar dosimetric precision (4.3%) and accuracy (mean dose discrepancies up to 13.7%) were found. The poor dosimetric accuracy was attributed to a systematic fault related to the calibration method. Therefore, the dose maps were renormalized using an independent ion chamber dose measurement. It is illustrated that with this renormalization, excellent agreement between the gel-measured and TPS-calculated 3D dose maps is achievable: 97% and 99% of the pixels meet the 3%/3 mm criteria for the intra- and inter-batch experiments, respectively. However, renormalization will result in significant dose deviations inside a realistically sized anthropomorphic phantom, as will be shown in a concurrent paper. Both authors contributed equally to this study.
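    The renormalization step described above, rescaling the whole gel dose map to agree with an independent ion-chamber reading at a reference point, can be sketched as below; the function, the toy one-dimensional "map", and its values are illustrative assumptions, not the study's implementation or data.

```python
def renormalize_dose_map(dose_map, ic_dose, reference_point):
    """Rescale a gel-measured dose map by a single factor so that it matches
    an independent ion-chamber dose at a reference voxel (a sketch of the
    renormalization described above; names are illustrative)."""
    factor = ic_dose / dose_map[reference_point]
    return {voxel: d * factor for voxel, d in dose_map.items()}

# Toy 1D "map": the gel reads systematically ~13% low at every voxel.
gel = {0: 1.74, 1: 0.87, 2: 1.31}
corrected = renormalize_dose_map(gel, ic_dose=2.0, reference_point=0)
```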

  4. Open Science and Research Reproducibility

    PubMed Central

    Munafò, Marcus

    2016-01-01

    Many scientists, journals and funders are concerned about the low reproducibility of many scientific findings. One approach that may serve to improve the reliability and robustness of research is open science. Here I argue that the process of pre-registering study protocols, sharing study materials and data, and posting preprints of manuscripts may serve to improve quality control procedures at every stage of the research pipeline, and in turn improve the reproducibility of published work. PMID:27350794

  5. Precision Nova operations

    SciTech Connect

    Ehrlich, R.B.; Miller, J.L.; Saunders, R.L.; Thompson, C.E.; Weiland, T.L.; Laumann, C.W.

    1995-09-01

    To improve the symmetry of x-ray drive on indirectly driven ICF capsules, we have increased the accuracy of operating procedures and diagnostics on the Nova laser. Precision Nova operations include routine precision power balance to within 10% rms in the ``foot`` and 5% rms in the peak of shaped pulses, beam synchronization to within 10 ps rms, and pointing of the beams onto targets to within 35 μm rms. We have also added a ``fail-safe chirp`` system to avoid Stimulated Brillouin Scattering (SBS) in optical components during high-energy shots.

  6. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  7. Towards Reproducibility in Computational Hydrology

    NASA Astrophysics Data System (ADS)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    The ability to reproduce published scientific findings is a foundational principle of scientific research. Independent observation helps to verify the legitimacy of individual findings; build upon sound observations so that we can evolve hypotheses (and models) of how catchments function; and move them from specific circumstances to more general theory. The rise of computational research has brought increased focus on the issue of reproducibility across the broader scientific literature. This is because publications based on computational research typically do not contain sufficient information to enable the results to be reproduced, and therefore verified. Given the rise of computational analysis in hydrology over the past 30 years, to what extent is reproducibility, or a lack thereof, a problem in hydrology? Whilst much hydrological code is accessible, the actual code and workflow that produced, and therefore document the provenance of, published scientific findings are rarely available. We argue that in order to advance and make more robust the process of hypothesis testing and knowledge creation within the computational hydrological community, we need to build on existing open data initiatives and adopt common standards and infrastructures to: first, make code re-useable and easy to find through consistent use of metadata; second, publish well-documented workflows that combine re-useable code with data to enable published scientific findings to be reproduced; finally, use unique persistent identifiers (e.g. DOIs) to reference re-useable and reproducible code, thereby clearly showing the provenance of published scientific findings. Whilst extra effort is required to make work reproducible, there are benefits to both the individual and the broader community in doing so, which will improve the credibility of the science in the face of the need for societies to adapt to changing hydrological environments.

  8. Precise Orbit Determination for ALOS

    NASA Technical Reports Server (NTRS)

    Nakamura, Ryo; Nakamura, Shinichi; Kudo, Nobuo; Katagiri, Seiji

    2007-01-01

    The Advanced Land Observing Satellite (ALOS) has been developed to contribute to the fields of mapping, precise regional land-coverage observation, disaster monitoring, and resource surveying. Because the mounted sensors require high geometrical accuracy, precise orbit determination for ALOS is essential for satisfying the mission objectives. ALOS therefore carries a GPS receiver and a Laser Reflector (LR) for Satellite Laser Ranging (SLR). This paper deals with precise orbit determination experiments for ALOS using the Global and High Accuracy Trajectory determination System (GUTS) and the evaluation of orbit determination accuracy with SLR data. The results show that, even though the GPS receiver loses lock on GPS signals more frequently than expected, the GPS-based orbit is consistent with the SLR-based orbit. Considering the 1-sigma error, orbit determination accuracy of a few decimeters (peak-to-peak) was achieved.

  9. Latent fingermark pore area reproducibility.

    PubMed

    Gupta, A; Buckley, K; Sutton, R

    2008-08-01

    The study of the reproducibility of friction ridge pore detail in fingermarks is a measure of their usefulness in personal identification. Pore area in latent prints developed using cyanoacrylate and ninhydrin were examined and measured by photomicrography using appropriate software tools. The data were analysed statistically and the results showed that pore area is not reproducible in developed latent prints, using either of the development techniques. The results add further support to the lack of reliability of pore area in personal identification. PMID:18617339

  10. Rotary head type reproducing apparatus

    DOEpatents

    Takayama, Nobutoshi; Edakubo, Hiroo; Kozuki, Susumu; Takei, Masahiro; Nagasawa, Kenichi

    1986-01-01

    In an apparatus of the kind arranged to reproduce, with a plurality of rotary heads, an information signal from a record bearing medium having many recording tracks which are parallel to each other with the information signal recorded therein and with a plurality of different pilot signals of different frequencies also recorded one by one, one in each of the recording tracks, a plurality of different reference signals of different frequencies are simultaneously generated. A tracking error is detected by using the different reference signals together with the pilot signals which are included in signals reproduced from the plurality of rotary heads.

  11. Quantification of long chain polyunsaturated fatty acids by gas chromatography. Evaluation of factors affecting accuracy.

    PubMed

    Schreiner, Matthias

    2005-11-18

    The accurate and reproducible analysis of long-chain polyunsaturated fatty acids (PUFA) is of growing importance. Especially for labeling purposes, clear guidelines are needed in order to achieve optimum accuracy. Since calibration standards cannot be used for method validation due to the instability of PUFAs, there is no direct way to check for the absence of systematic errors. In this study the sources of error that weaken accuracy were evaluated using theoretical considerations and calibration standards with corrected composition. It was demonstrated that the key to optimum accuracy lies in the optimization of the split injection system. Even when following the instructions outlined in the official methods of the American Oil Chemists' Society (AOCS), systematic errors of more than 7% can arise. Clear guidelines regarding system calibration and selection of appropriate internal standards (IS) can improve precision and accuracy significantly.
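    The internal-standard quantification underlying this kind of GC analysis can be sketched as follows: the analyte mass is derived from its peak area relative to the internal standard, corrected by a response factor. This is a generic sketch, not the AOCS procedure; the function name, parameters, and numbers are illustrative assumptions.

```python
def quantify_analyte(peak_area, is_area, is_mass_mg, response_factor):
    """Internal-standard quantification (sketch): analyte mass in mg from
    its peak area relative to the internal standard (IS), divided by a
    response factor correcting detector response differences.
    A biased response factor propagates directly into the result, which is
    one way the systematic errors discussed above arise."""
    return peak_area / is_area * is_mass_mg / response_factor

# Illustrative numbers, not measured data:
mass = quantify_analyte(peak_area=5000, is_area=10000, is_mass_mg=1.0,
                        response_factor=0.96)
```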

  12. New multi-station and multi-decadal trend data on precipitable water. Recipe to match FTIR retrievals from NDACC long-time records to radio sondes within 1 mm accuracy/precision

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Borsdorff, T.; Rettinger, M.; Camy-Peyret, C.; Demoulin, P.; Duchatelet, P.; Mahieu, E.

    2009-04-01

    We present an original optimum strategy for retrieval of precipitable water from routine ground-based mid-infrared FTS measurements performed at a number of globally distributed stations within the NDACC network. The strategy utilizes FTIR retrievals which are set up to match standard radio sonde operations. Thereby, an unprecedented accuracy and precision for measurements of precipitable water can be demonstrated: the correlation between Zugspitze FTIR water vapor columns from a 3-month measurement campaign and total columns derived from coincident radio sondes shows a regression coefficient of R = 0.988, a bias of 0.05 mm, a standard deviation of 0.28 mm, an intercept of 0.01 mm, and a slope of 1.01. This appears to be even better than what can be achieved with state-of-the-art microwave techniques; see, e.g., Morland et al. (2006, Fig. 9 therein). Our approach is based upon a careful selection of spectral micro windows, comprising a set of both weak and strong water vapor absorption lines between 839.4 - 840.6 cm-1, 849.0 - 850.2 cm-1, and 852.0 - 853.1 cm-1, which is not contaminated by interfering absorptions of any other trace gases. From existing spectroscopic line lists, a careful selection of the best available parameter set was performed, leading to nearly perfect spectral fits without significant forward model parameter errors. To set up the FTIR water vapor profile inversion, a set of FTIR measurements and coincident radio sondes has been utilized. To eliminate or minimize mismatch in time and space, the Tobin "best estimate of the state of the atmosphere" principle has been applied to the radio sondes. This concept uses pairs of radio sondes launched with a 1-hour separation and derives the gradient from the two radio sonde measurements in order to construct a virtual PTU profile for a certain time and location. Coincident FTIR measurements of water vapor columns (two-hour mean values) have then been matched to the water columns obtained by
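    The agreement metrics quoted in this abstract (slope, intercept, regression coefficient R, bias, and standard deviation of the differences) can be computed as sketched below; the paired values are illustrative, not the campaign data.

```python
import statistics

def regression_stats(x, y):
    """Slope, intercept, correlation R, mean bias, and stdev of (y - x)
    for paired column measurements (a sketch of the comparison above)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx ** 0.5 * syy ** 0.5)
    diffs = [yi - xi for xi, yi in zip(x, y)]
    return slope, intercept, r, statistics.fmean(diffs), statistics.stdev(diffs)

# Illustrative sonde vs. FTIR precipitable water columns (mm):
sonde = [2.0, 5.0, 9.0, 14.0]
ftir = [2.1, 5.0, 9.2, 14.1]
slope, intercept, r, bias, sd = regression_stats(sonde, ftir)
```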

  13. Reproducibility of UAV-based earth topography reconstructions based on Structure-from-Motion algorithms

    NASA Astrophysics Data System (ADS)

    Clapuyt, Francois; Vanacker, Veerle; Van Oost, Kristof

    2016-05-01

    Combination of UAV-based aerial pictures and the Structure-from-Motion (SfM) algorithm provides an efficient, low-cost and rapid framework for remote sensing and monitoring of dynamic natural environments. This methodology is particularly suitable for repeated topographic surveys in remote or poorly accessible areas. However, temporal analysis of landform topography requires high accuracy of measurements and reproducibility of the methodology, as differencing of digital surface models leads to error propagation. In order to assess the repeatability of the SfM technique, we surveyed a study area characterized by gentle topography with a UAV platform equipped with a standard reflex camera, and varied the focal length of the camera and the location of georeferencing targets between flights. Comparison of different SfM-derived topography datasets shows that the precision of measurements is in the order of centimetres for identical replications, which highlights the excellent performance of the SfM workflow, all parameters being equal. The measurement uncertainty is one order of magnitude higher for 3D topographic reconstructions involving independent sets of ground control points, which results from the fact that the accuracy of the localisation of ground control points strongly propagates into the final results.

  14. Reproducible Bioinformatics Research for Biologists

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  15. Reproducible Research in Computational Science

    PubMed Central

    Peng, Roger D.

    2012-01-01

    Computational science has led to exciting new developments, but the nature of the work has exposed limitations in our ability to evaluate published findings. Reproducibility has the potential to serve as a minimum standard for judging scientific claims when full independent replication of a study is not possible. PMID:22144613

  16. Reproducible research in computational science.

    PubMed

    Peng, Roger D

    2011-12-01

    Computational science has led to exciting new developments, but the nature of the work has exposed limitations in our ability to evaluate published findings. Reproducibility has the potential to serve as a minimum standard for judging scientific claims when full independent replication of a study is not possible.

  17. Regional cerebral blood flow utilizing the gamma camera and xenon inhalation: reproducibility and clinical applications

    SciTech Connect

    Fox, R.A.; Knuckey, N.W.; Fleay, R.F.; Stokes, B.A.; Van der Schaaf, A.; Surveyor, I.

    1985-11-01

    A modified collimator and standard gamma camera have been used to measure regional cerebral blood flow following inhalation of radioactive xenon. The collimator and a simplified analysis technique enable excellent statistical accuracy to be achieved with acceptable precision in the measurement of grey matter blood flow. The validity of the analysis was supported by computer modelling and patient measurements. Sixty-one patients with subarachnoid hemorrhage, cerebrovascular disease or dementia were retested to determine the reproducibility of our method. The measured coefficient of variation was 6.5%. Of forty-six patients who had a proven subarachnoid hemorrhage, 15 subsequently developed cerebral ischemia. These showed a CBF of 42 +/- 6 ml/min/100 g brain compared with 49 +/- 11 ml/min/100 g brain for the remainder. There is evidence that decreasing blood flow and low initial flow correlate with the subsequent onset of cerebral ischemia.

  18. Performance reproducibility index for classification

    PubMed Central

    Yousefi, Mohammadmahdi R.; Dougherty, Edward R.

    2012-01-01

    Motivation: A common practice in biomarker discovery is to decide whether a large laboratory experiment should be carried out based on the results of a preliminary study on a small set of specimens. Consideration of the efficacy of this approach motivates the introduction of a probabilistic measure, for whether a classifier showing promising results in a small-sample preliminary study will perform similarly on a large independent sample. Given the error estimate from the preliminary study, if the probability of reproducible error is low, then there is really no purpose in substantially allocating more resources to a large follow-on study. Indeed, if the probability of the preliminary study providing likely reproducible results is small, then why even perform the preliminary study? Results: This article introduces a reproducibility index for classification, measuring the probability that a sufficiently small error estimate on a small sample will motivate a large follow-on study. We provide a simulation study based on synthetic distribution models that possess known intrinsic classification difficulties and emulate real-world scenarios. We also set up similar simulations on four real datasets to show the consistency of results. The reproducibility indices for different distributional models, real datasets and classification schemes are empirically calculated. The effects of reporting and multiple-rule biases on the reproducibility index are also analyzed. Availability: We have implemented in C code the synthetic data distribution model, classification rules, feature selection routine and error estimation methods. The source code is available at http://gsp.tamu.edu/Publications/supplementary/yousefi12a/. Supplementary simulation results are also included. Contact: edward@ece.tamu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:22954625

  19. Reproducibility of airway wall thickness measurements

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael; Kuhnigk, Jan-Martin; Krass, Stefan; Owsijewitsch, Michael; de Hoop, Bartjan; Peitgen, Heinz-Otto

    2010-03-01

    Airway remodeling and accompanying changes in wall thickness are known to be a major symptom of chronic obstructive pulmonary disease (COPD), associated with reduced lung function in diseased individuals. Further investigation of this disease, as well as monitoring of disease progression and treatment effect, demands accurate and reproducible assessment of airway wall thickness in CT datasets. With wall thicknesses in the sub-millimeter range, this task remains challenging even with today's high-resolution CT datasets. To provide accurate measurements, taking partial volume effects into account is mandatory. The Full-Width-at-Half-Maximum (FWHM) method has been shown to be inappropriate for small airways1,2 and several improved algorithms for objective quantification of airway wall thickness have been proposed.1-8 In this paper, we describe an algorithm based on a closed-form solution proposed by Weinheimer et al.7 We locally estimate the lung density parameter required for the closed-form solution to account for possible variations of parenchyma density between different lung regions, inspiration states and contrast agent concentrations. The general accuracy of the algorithm is evaluated using basic tubular software and hardware phantoms. Furthermore, we present results on the reproducibility of the algorithm with respect to clinical CT scans, varying reconstruction kernels, and repeated acquisitions, which is crucial for longitudinal observations.

  20. Reproducibility of NIF hohlraum measurements

    NASA Astrophysics Data System (ADS)

    Moody, J. D.; Ralph, J. E.; Turnbull, D. P.; Casey, D. T.; Albert, F.; Bachmann, B. L.; Doeppner, T.; Divol, L.; Grim, G. P.; Hoover, M.; Landen, O. L.; MacGowan, B. J.; Michel, P. A.; Moore, A. S.; Pino, J. E.; Schneider, M. B.; Tipton, R. E.; Smalyuk, V. A.; Strozzi, D. J.; Widmann, K.; Hohenberger, M.

    2015-11-01

    The strategy of experimentally ``tuning'' the implosion in a NIF hohlraum ignition target towards increasing hot-spot pressure, areal density of compressed fuel, and neutron yield relies on a level of experimental reproducibility. We examine the reproducibility of experimental measurements for a collection of 15 identical NIF hohlraum experiments. The measurements include incident laser power, backscattered optical power, x-ray measurements, hot-electron fraction and energy, and target characteristics. We use exact statistics to set 1-sigma confidence levels on the variations in each of the measurements. Of particular interest is the backscatter and laser-induced hot-spot locations on the hohlraum wall. Hohlraum implosion designs typically include variability specifications [S. W. Haan et al., Phys. Plasmas 18, 051001 (2011)]. We describe our findings and compare with the specifications. This work was performed under the auspices of the U.S. Department of Energy by University of California, Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.

  1. Precision translator

    DOEpatents

    Reedy, R.P.; Crawford, D.W.

    1982-03-09

    A precision translator for focusing a beam of light on the end of a glass fiber, comprising two tuning-fork-like members rigidly connected to each other. Each member has two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. The translator is made of simple parts and is capable of holding its adjustment even under rough handling.

  2. Precision translator

    DOEpatents

    Reedy, Robert P.; Crawford, Daniel W.

    1984-01-01

    A precision translator for focusing a beam of light on the end of a glass fiber, comprising two tuning-fork-like members rigidly connected to each other. Each member has two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. The translator is made of simple parts and is capable of holding its adjustment even under rough handling.

  3. Cloning to reproduce desired genotypes.

    PubMed

    Westhusin, M E; Long, C R; Shin, T; Hill, J R; Looney, C R; Pryor, J H; Piedrahita, J A

    2001-01-01

    Cloned sheep, cattle, goats, pigs and mice have now been produced using somatic cells for nuclear transplantation. Animal cloning is still very inefficient, with on average less than 10% of transferred cloned embryos resulting in a live offspring. However, successful cloning of a variety of different species by a number of different laboratory groups has generated tremendous interest in reproducing desired genotypes. Some of these specific genotypes represent animal cell lines that have been genetically modified. In other cases there is a significant demand for cloning animals characterized by their inherent genetic value, for example prize livestock, household pets and rare or endangered species. A number of different variables may influence the ability to reproduce a specific genotype by cloning. These include species, source of recipient ova, cell type of the nucleus donor, treatment of donor cells prior to nuclear transfer, and the techniques employed for nuclear transfer. At present, there is no solid evidence suggesting that cloning will be limited to only a few specific animals; in fact, most data collected to date suggest cloning will be applicable to a wide variety of different animals. The ability to reproduce any desired genotype by cloning will ultimately depend on the amount of time and resources invested in research.

  4. Reproducibility of neuroimaging analyses across operating systems

    PubMed Central

    Glatard, Tristan; Lewis, Lindsay B.; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C.

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed. PMID:25964757
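The Dice coefficients quoted above measure voxel overlap between segmentations produced on different platforms. A minimal sketch of the metric, using hypothetical voxel-index sets rather than output of the actual FSL pipeline:

```python
def dice(a, b):
    """Dice coefficient between two binary label sets: 2|A∩B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Two hypothetical subcortical segmentations as sets of voxel indices,
# differing by one column of voxels (as two operating systems might).
seg_os1 = {(x, y, 0) for x in range(10) for y in range(10)}
seg_os2 = {(x, y, 0) for x in range(1, 10) for y in range(10)}
print(round(dice(seg_os1, seg_os2), 3))
```

Identical segmentations give a Dice of 1.0; the values near 0.9 reported above indicate close but not exact agreement between platforms.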

  5. Reproducibility of UAV-based earth surface topography based on structure-from-motion algorithms.

    NASA Astrophysics Data System (ADS)

    Clapuyt, François; Vanacker, Veerle; Van Oost, Kristof

    2014-05-01

    A representation of the earth surface at very high spatial resolution is crucial to map small geomorphic landforms accurately and precisely. Very high resolution digital surface models (DSMs) can then be used to quantify changes in earth surface topography over time by differencing DSMs acquired at different moments. However, this requires both high accuracy for each topographic representation and consistency between measurements over time, as DSM differencing automatically leads to error propagation. This study investigates the reproducibility of reconstructions of earth surface topography based on structure-from-motion (SFM) algorithms. To this end, we equipped an eight-propeller drone with a standard reflex camera. This equipment can easily be deployed in the field, as it is a lightweight, low-cost system in comparison with classic aerial photo surveys and terrestrial or airborne LiDAR scanning. Four sets of aerial photographs were created for one test field. The sets of airphotos differ in focal length and viewing angle, i.e. nadir view versus ground-level view. In addition, the importance of the accuracy of ground control points for the construction of a georeferenced point cloud was assessed using two different GPS devices with horizontal accuracy at the sub-meter and sub-decimeter level, respectively. Airphoto datasets were processed with the SFM algorithm and the resulting point clouds were georeferenced. The surface representations were then compared with each other to assess the reproducibility of the earth surface topography. Finally, consistency between independent datasets is discussed.
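The error propagation mentioned above has a simple form: if the two DSMs have independent vertical errors, the uncertainty of their difference adds in quadrature, which sets the smallest detectable topographic change. A short sketch with illustrative error values, not figures from this study:

```python
import math

def dod_uncertainty(sigma_a, sigma_b):
    """Propagated 1-sigma error of a DSM of difference, assuming independent errors."""
    return math.sqrt(sigma_a ** 2 + sigma_b ** 2)

def significant_change(dz, sigma_a, sigma_b, k=1.96):
    """Flag an elevation change dz that exceeds the k-sigma detection threshold."""
    return abs(dz) > k * dod_uncertainty(sigma_a, sigma_b)

# Two surveys with a hypothetical 0.05 m vertical error each:
# the 95% detection threshold is about 0.14 m.
print(round(1.96 * dod_uncertainty(0.05, 0.05), 3))
```

This is why consistency between repeated surveys matters: any systematic offset between reconstructions inflates the apparent change directly.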

  6. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Based on research in the area of precise ephemerides for GPS satellites, the following observations can be made about the status of, and future work needed on, orbit accuracy. Several aspects need to be addressed in discussing the determination of precise orbits, such as force models, kinematic models, measurement models, and data reduction/estimation methods. Although each of these aspects was studied in research efforts at CSR, only points pertaining to force modeling are addressed here.

  7. Precision synchrotron radiation detectors

    SciTech Connect

    Levi, M.; Rouse, F.; Butler, J.; Jung, C.K.; Lateur, M.; Nash, J.; Tinsman, J.; Wormser, G.; Gomez, J.J.; Kent, J.

    1989-03-01

    Precision detectors to measure synchrotron radiation beam positions have been designed and installed as part of beam energy spectrometers at the Stanford Linear Collider (SLC). The distance between pairs of synchrotron radiation beams is measured absolutely to better than 28 μm on a pulse-to-pulse basis. This contributes less than 5 MeV to the error in the measurement of SLC beam energies (approximately 50 GeV). A system of high-resolution video cameras viewing precisely-aligned fiducial wire arrays overlaying phosphorescent screens has achieved this accuracy. Also, detectors of synchrotron radiation using the charge developed by the ejection of Compton-recoil electrons from an array of fine wires are being developed. 4 refs., 5 figs., 1 tab.

  8. Ultra precision machining

    NASA Astrophysics Data System (ADS)

    Debra, Daniel B.; Hesselink, Lambertus; Binford, Thomas

    1990-05-01

    A number of fields require, or can use to advantage, very high precision in machining. For example, further development of high-energy lasers and x-ray astronomy depends critically on the manufacture of lightweight reflecting metal optical components. To fabricate these optical components with machine tools, they will be made of metal with a mirror-quality surface finish, meaning dimensional tolerances on the order of 0.02 micron and a surface roughness of 0.07. These accuracy targets fall in the category of ultra-precision machining. They cannot be achieved by a simple extension of conventional machining processes and techniques. They require single-crystal diamond tools, special attention to vibration isolation, special isolation of machine metrology, and on-line correction of imperfections in the motion of the machine carriages along their ways.

  9. Precision Pointing System Development

    SciTech Connect

    BUGOS, ROBERT M.

    2003-03-01

    The development of precision pointing systems has been underway in Sandia's Electronic Systems Center for over thirty years. Important areas of emphasis are synthetic aperture radars and optical reconnaissance systems. Most applications are in the aerospace arena, with host vehicles including rockets, satellites, and manned and unmanned aircraft. Systems have been used on defense-related missions throughout the world. Presently in development are pointing systems with accuracy goals in the nanoradian regime. Future activity will include efforts to dramatically reduce system size and weight through measures such as the incorporation of advanced materials and MEMS inertial sensors.

  10. Positioning accuracy of cone-beam computed tomography in combination with a HexaPOD robot treatment table

    SciTech Connect

    Meyer, Juergen . E-mail: juergen.meyer@canterbury.ac.nz; Wilbert, Juergen; Baier, Kurt; Guckenberger, Matthias; Richter, Anne; Sauer, Otto; Flentje, Michael

    2007-03-15

    Purpose: To scrutinize the positioning accuracy and reproducibility of a commercial hexapod robot treatment table (HRTT) in combination with a commercial cone-beam computed tomography system for image-guided radiotherapy (IGRT). Methods and Materials: The mechanical stability of the X-ray volume imaging (XVI) system was tested in terms of reproducibility and with a focus on the moveable parts, i.e., the influence of the kV panel and the source arm on the reproducibility and accuracy of both bone and gray value registration using a head-and-neck phantom. In consecutive measurements the accuracy of the HRTT for translational, rotational, and a combination of translational and rotational corrections was investigated. The operational range of the HRTT was also determined and analyzed. Results: The system performance of the XVI system alone was very stable, with mean translational and rotational errors of below 0.2 mm and below 0.2°, respectively. The mean positioning accuracy of the HRTT in combination with the XVI system, summarized over all measurements, was below 0.3 mm and below 0.3° for translational and rotational corrections, respectively. The gray value match was more accurate than the bone match. Conclusion: The XVI image acquisition and registration procedure were highly reproducible. Both translational and rotational positioning errors can be corrected very precisely with the HRTT. The HRTT is therefore well suited to complement cone-beam computed tomography to take full advantage of position correction in six degrees of freedom for IGRT. The combination of XVI and the HRTT has the potential to improve the accuracy of high-precision treatments.

  11. Evaluation of guidewire path reproducibility.

    PubMed

    Schafer, Sebastian; Hoffmann, Kenneth R; Noël, Peter B; Ionita, Ciprian N; Dmochowski, Jacek

    2008-05-01

    The number of minimally invasive vascular interventions is increasing. In these interventions, a variety of devices are directed to and placed at the site of intervention. The device used in almost all of these interventions is the guidewire, which acts as a monorail for all devices delivered to the intervention site. However, even with the guidewire in place, clinicians still experience difficulties during the interventions. As a first step toward understanding these difficulties and facilitating guidewire and device guidance, we have investigated the reproducibility of the final path of the guidewire in vessel phantom models as a function of user, material, and geometry. Three vessel phantoms (vessel diameters approximately 4 mm) with tortuosity similar to the internal carotid artery were constructed from silicone tubing and encased in Sylgard elastomer. Several trained users repeatedly passed two guidewires of different flexibility through the phantoms under pulsatile flow conditions. After the guidewire had been placed, rotational c-arm image sequences were acquired (9 in. II mode, 0.185 mm pixel size), and the phantom and guidewire were reconstructed (512³ voxels, 0.288 mm voxel size). The reconstructed volumes were aligned, and the centerlines of the guidewire and the phantom vessel were then determined using region-growing techniques. Guidewire paths appear similar across users but not across materials. The average root-mean-square difference of repeated placements was 0.17 +/- 0.02 mm (plastic-coated guidewire), 0.73 +/- 0.55 mm (steel guidewire) and 1.15 +/- 0.65 mm (steel versus plastic-coated). For a given guidewire, these results indicate that the guidewire path is relatively reproducible in shape and position.
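The root-mean-square differences reported above compare corresponding points along reconstructed centerlines. A minimal sketch of that comparison, assuming the centerlines are already aligned and resampled to matching points (the coordinates are hypothetical):

```python
import math

def rms_difference(path_a, path_b):
    """Root-mean-square distance between corresponding 3-D centerline points."""
    assert len(path_a) == len(path_b), "paths must be resampled to matching points"
    sq = [sum((p - q) ** 2 for p, q in zip(a, b)) for a, b in zip(path_a, path_b)]
    return math.sqrt(sum(sq) / len(sq))

# Two hypothetical guidewire centerlines, offset by 0.2 mm in x everywhere.
a = [(float(i), 0.0, 0.0) for i in range(5)]
b = [(i + 0.2, 0.0, 0.0) for i in range(5)]
print(round(rms_difference(a, b), 3))
```

A uniform 0.2 mm offset yields an RMS of 0.2 mm, on the scale of the plastic-coated-guidewire repeatability quoted above.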

  12. Precision orbit determination for Topex

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Schutz, B. E.; Ries, J. C.; Shum, C. K.

    1990-01-01

    The ability of radar altimeters to measure the distance from a satellite to the ocean surface with a precision of the order of 2 cm imposes unique requirements for the orbit determination accuracy. The orbit accuracy requirements will be especially demanding for the joint NASA/CNES Ocean Topography Experiment (Topex/Poseidon). For this mission, a radial orbit accuracy of 13 centimeters will be required for a mission period of three to five years. This is an order of magnitude improvement in the accuracy achieved during any previous satellite mission. This investigation considers the factors which limit the orbit accuracy for the Topex mission. Particular error sources which are considered include the geopotential, the radiation pressure and the atmospheric drag model.

  13. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla

    2015-11-01

    Coalescing binaries of neutron stars and black holes are among the most important sources of gravitational waves for the upcoming network of ground-based detectors. Detection and extraction of astrophysical information from gravitational-wave signals requires accurate waveform models. The effective-one-body and other phenomenological models interpolate between analytic results and numerical relativity simulations, which typically span O(10) orbits before coalescence. In this paper we study the faithfulness of these models for neutron star-black hole binaries. We investigate their accuracy using new numerical relativity (NR) simulations that span 36-88 orbits, with mass ratios q and black hole spins χBH of (q, χBH) = (7, ±0.4), (7, ±0.6), and (5, -0.9). These simulations were performed treating the neutron star as a low-mass black hole, ignoring its matter effects. We find that (i) the recently published SEOBNRv1 and SEOBNRv2 models of the effective-one-body family disagree with each other (mismatches of a few percent) for black hole spins χBH ≥ 0.5 or χBH ≤ -0.3, with waveform mismatch accumulating during early inspiral; (ii) comparison with numerical waveforms indicates that this disagreement is due to phasing errors of SEOBNRv1, with SEOBNRv2 in good agreement with all of our simulations; (iii) phenomenological waveforms agree with SEOBNRv2 only for comparable-mass low-spin binaries, with overlaps below 0.7 elsewhere in the neutron star-black hole binary parameter space; (iv) comparison with numerical waveforms shows that most of this model's dephasing accumulates near the frequency interval where it switches to a phenomenological phasing prescription; and finally (v) both SEOBNR and post-Newtonian models are effectual for neutron star-black hole systems, but post-Newtonian waveforms will give a significant bias in parameter recovery. Our results suggest that future gravitational-wave detection searches and parameter estimation efforts would benefit
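The mismatches discussed above are one minus the normalized overlap between two waveforms, which in practice is weighted by detector noise and maximized over time and phase shifts. A simplified flat-noise, zero-lag sketch showing how a small phasing error produces a percent-level mismatch (illustrative sinusoids, not actual SEOBNR waveforms):

```python
import math

def overlap(h1, h2):
    """Normalized inner product of two real, discretely sampled waveforms
    (a flat-noise, zero-lag simplification of the noise-weighted match)."""
    inner = sum(a * b for a, b in zip(h1, h2))
    norm = math.sqrt(sum(a * a for a in h1) * sum(b * b for b in h2))
    return inner / norm

N = 1000
t = [2 * math.pi * i / N for i in range(N)]
h1 = [math.sin(10 * x) for x in t]
h2 = [math.sin(10 * x + 0.1) for x in t]  # constant 0.1 rad phasing error
mismatch = 1 - overlap(h1, h2)
print(round(mismatch, 4))  # ≈ 1 - cos(0.1) ≈ 0.005
```

Even a 0.1 rad phase offset costs half a percent of overlap, which is why phasing errors that accumulate over tens of orbits dominate the model disagreements quoted above.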

  14. Indirect orthodontic bonding - a modified technique for improved efficiency and precision

    PubMed Central

    Nojima, Lincoln Issamu; Araújo, Adriele Silveira; Alves, Matheus

    2015-01-01

    INTRODUCTION: The indirect bonding technique optimizes fixed appliance installation at the orthodontic office, ensuring precise bracket positioning, among other advantages. In this laboratory clinical phase, material and methods employed in creating the transfer tray are decisive to accuracy. OBJECTIVE: This article describes a simple, efficient and reproducible indirect bonding technique that allows the procedure to be carried out successfully. Variables influencing the orthodontic bonding are analyzed and discussed in order to aid professionals wishing to adopt the indirect bonding technique routinely in their clinical practice. PMID:26154464

  15. Nickel solution prepared for precision electroforming

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Lightweight, precision optical reflectors are made by electroforming nickel onto masters. Steps for the plating bath preparation, process control testing, and bath composition adjustments are prescribed to avoid internal stresses and maintain dimensional accuracy of the electrodeposited metal.

  16. Progress on glass ceramic ZERODUR enabling nanometer precision

    NASA Astrophysics Data System (ADS)

    Jedamzik, Ralf; Kunisch, Clemens; Nieder, Johannes; Weber, Peter; Westerhoff, Thomas

    2016-03-01

    The semiconductor industry is making continuous progress in shrinking feature sizes, developing technologies and processes to achieve feature sizes below 10 nm. The overlay specification required for successful production is in the range of one nanometer or even smaller. Consequently, materials designed into the metrology systems of exposure or inspection tools need to fulfill ever tighter specifications on the coefficient of thermal expansion (CTE). The glass ceramic ZERODUR® is a well-established material for critical components of microlithography wafer steppers and is offered with an extremely low coefficient of thermal expansion, with the tightest tolerance available on the market. SCHOTT is continuously improving its manufacturing processes and its methods for measuring and characterizing the CTE behavior of ZERODUR®. This paper focuses on the "Advanced Dilatometer" for CTE determination, developed at SCHOTT in recent years and introduced into production in Q1 2015. The achievements in improving absolute CTE measurement accuracy and reproducibility are described in detail and compared to the CTE measurement accuracy reported by the Physikalisch-Technische Bundesanstalt (PTB), the national metrology institute of Germany. CTE homogeneity is of the highest importance for achieving nanometer precision at larger scales. The paper also presents data on short-scale CTE homogeneity and its improvement over the last two years. The data presented will explain the capability of ZERODUR® to enable the extreme precision required for future generations of lithography equipment and processes.
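The connection between CTE and nanometer precision is direct: ΔL = α·L·ΔT. A back-of-envelope sketch with illustrative numbers (not SCHOTT specifications):

```python
def length_change_nm(length_m, cte_per_k, delta_t_k):
    """Thermal expansion ΔL = α·L·ΔT, returned in nanometres."""
    return cte_per_k * length_m * delta_t_k * 1e9

# A hypothetical 0.3 m metrology component with CTE 1e-8 /K and a 0.1 K
# temperature drift expands by about 0.3 nm -- the scale at which
# sub-nanometer overlay budgets force ever tighter CTE tolerances.
print(round(length_change_nm(0.3, 1e-8, 0.1), 3))
```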

  17. Reproducibility of the Structural Connectome Reconstruction across Diffusion Methods.

    PubMed

    Prčkovska, Vesna; Rodrigues, Paulo; Puigdellivol Sanchez, Ana; Ramos, Marc; Andorra, Magi; Martinez-Heras, Eloy; Falcon, Carles; Prats-Galino, Albert; Villoslada, Pablo

    2016-01-01

    Analysis of structural connectomes can lead to powerful insights about the brain's organization and damage. However, the accuracy and reproducibility of connectome construction with different acquisition and reconstruction techniques are not well defined. In this work, we evaluated the reproducibility of structural connectome techniques by performing test-retest (same day) and longitudinal (after 1 month) studies, as well as analyzing graph-based measures, on data acquired from 22 healthy volunteers (6 subjects were used for the longitudinal study). We compared connectivity matrices and tract reconstructions obtained with the acquisition schemes most typical in clinical application: diffusion tensor imaging (DTI), high angular resolution diffusion imaging (HARDI), and diffusion spectrum imaging (DSI). We observed that all techniques showed high reproducibility in the test-retest analysis (correlation > 0.9). However, HARDI was the only technique with low variability (2%) in the longitudinal assessment (1-month interval). The intraclass coefficient analysis showed the highest reproducibility for the DTI connectome, however with sparser connections than HARDI and DSI. Qualitative (neuroanatomical) assessment of selected tracts confirmed the quantitative results, showing that HARDI managed to detect most of the analyzed fiber groups and fanning fibers. In conclusion, we found that HARDI acquisition showed the most balanced trade-off between high reproducibility of the connectome, a higher rate of detection of paths and fanning fibers, and intermediate acquisition times (10-15 minutes), although at the cost of a higher appearance of aberrant fibers. PMID:26464179

  18. Francis M. Pipkin Award Talk - Precision Measurement with Atom Interferometry

    NASA Astrophysics Data System (ADS)

    Müller, Holger

    2015-05-01

    Atom interferometers are relatives of Young's double-slit experiment that use matter waves. They leverage light-atom interactions to measure fundamental constants, test fundamental symmetries, sense weak fields such as gravity and the gravity gradient, search for elusive ``fifth forces,'' and potentially test properties of antimatter and detect gravitational waves. We will discuss large (multiphoton-) momentum transfer that can enhance the sensitivity and accuracy of atom interferometers several thousand fold. We will discuss measuring the fine structure constant to sub-part-per-billion precision and how it tests the standard model of particle physics. Finally, there has been interest in light bosons as candidates for dark matter and dark energy; atom interferometers have favorable sensitivity in searching for those fields. As a first step, we present our experiment ruling out chameleon fields and a broad class of other theories that would reproduce the observed dark energy density.

  19. Reproducibility in density functional theory calculations of solids.

    PubMed

    Lejaeghere, Kurt; Bihlmayer, Gustav; Björkman, Torbjörn; Blaha, Peter; Blügel, Stefan; Blum, Volker; Caliste, Damien; Castelli, Ivano E; Clark, Stewart J; Dal Corso, Andrea; de Gironcoli, Stefano; Deutsch, Thierry; Dewhurst, John Kay; Di Marco, Igor; Draxl, Claudia; Dułak, Marcin; Eriksson, Olle; Flores-Livas, José A; Garrity, Kevin F; Genovese, Luigi; Giannozzi, Paolo; Giantomassi, Matteo; Goedecker, Stefan; Gonze, Xavier; Grånäs, Oscar; Gross, E K U; Gulans, Andris; Gygi, François; Hamann, D R; Hasnip, Phil J; Holzwarth, N A W; Iuşan, Diana; Jochym, Dominik B; Jollet, François; Jones, Daniel; Kresse, Georg; Koepernik, Klaus; Küçükbenli, Emine; Kvashnin, Yaroslav O; Locht, Inka L M; Lubeck, Sven; Marsman, Martijn; Marzari, Nicola; Nitzsche, Ulrike; Nordström, Lars; Ozaki, Taisuke; Paulatto, Lorenzo; Pickard, Chris J; Poelmans, Ward; Probert, Matt I J; Refson, Keith; Richter, Manuel; Rignanese, Gian-Marco; Saha, Santanu; Scheffler, Matthias; Schlipf, Martin; Schwarz, Karlheinz; Sharma, Sangeeta; Tavazza, Francesca; Thunström, Patrik; Tkatchenko, Alexandre; Torrent, Marc; Vanderbilt, David; van Setten, Michiel J; Van Speybroeck, Veronique; Wills, John M; Yates, Jonathan R; Zhang, Guo-Xu; Cottenier, Stefaan

    2016-03-25

    The widespread popularity of density functional theory has given rise to an extensive range of dedicated codes for predicting molecular and crystalline properties. However, each code implements the formalism in a different way, raising questions about the reproducibility of such predictions. We report the results of a community-wide effort that compared 15 solid-state codes, using 40 different potentials or basis set types, to assess the quality of the Perdew-Burke-Ernzerhof equations of state for 71 elemental crystals. We conclude that predictions from recent codes and pseudopotentials agree very well, with pairwise differences that are comparable to those between different high-precision experiments. Older methods, however, have less precise agreement. Our benchmark provides a framework for users and developers to document the precision of new applications and methodological improvements.
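Code-to-code agreement in this kind of benchmark is quantified by comparing equations of state E(V) evaluated on a common volume grid. A toy RMS comparison in the spirit of that approach, using synthetic parabolic curves rather than the published data:

```python
import math

def delta_rms(e1, e2):
    """RMS energy difference between two E(V) curves sampled on the same
    volume grid -- a toy stand-in for the benchmark's pairwise comparison."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(e1, e2)) / len(e1))

# Two synthetic equations of state around an equilibrium volume of 1.0,
# differing slightly in curvature (as two codes or pseudopotentials might).
V = [v / 100 for v in range(90, 111)]
e_code1 = [0.50 * (v - 1.0) ** 2 for v in V]
e_code2 = [0.51 * (v - 1.0) ** 2 for v in V]
print(round(delta_rms(e_code1, e_code2), 6))
```

A small curvature difference translates into a tiny RMS energy discrepancy; the study's conclusion is that modern codes sit at this level of agreement while older methods scatter more widely.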

  1. Precision spectroscopy of Helium

    SciTech Connect

    Cancio, P.; Giusfredi, G.; Mazzotti, D.; De Natale, P.; De Mauro, C.; Krachmalnicoff, V.; Inguscio, M.

    2005-05-05

    Accurate quantum-electrodynamics (QED) tests of the simplest bound three-body atomic system are performed by precise laser-spectroscopic measurements in atomic helium. In this paper, we present a review of measurements between triplet states at 1083 nm (2³S-2³P) and at 389 nm (2³S-3³P). In ⁴He, such data have been used to measure the fine structure of the triplet P levels and then to determine the fine-structure constant by comparison with equally accurate theoretical calculations. Moreover, the absolute frequencies of the optical transitions have been used for Lamb-shift determinations of the levels involved with unprecedented accuracy. Finally, the nuclear structure of the He isotopes, and in particular the nuclear charge radius, is determined using hyperfine-structure and isotope-shift measurements.

  2. Precision grid and hand motion for accurate needle insertion in brachytherapy

    SciTech Connect

    McGill, Carl S.; Schwartz, Jonathon A.; Moore, Jason Z.; McLaughlin, Patrick W.; Shih, Albert J.

    2011-08-15

    Purpose: In prostate brachytherapy, a grid is used to guide a needle tip toward a preplanned location within the tissue. During insertion the needle deflects en route, resulting in target misplacement. In this paper, 18-gauge needle insertion experiments into phantom were performed to test the effects of three parameters: the clearance between the grid hole and the needle, the thickness of the grid, and the needle insertion speed. A measurement apparatus consisting of two datum surfaces and a digital depth gauge was developed to quantify needle deflections. Methods: A gauge repeatability and reproducibility (GR&R) test was performed on the measurement apparatus, which proved capable of measuring a 2 mm tolerance from the target. Replicated experiments were performed on a 2³ factorial design (three parameters at two levels) and the analysis included averages and standard deviations along with an analysis of variance (ANOVA) to find significant single and two-way interaction factors. Results: Results showed that a grid with a tight-clearance hole and a slow needle speed increased the precision and accuracy of needle insertion. The tight grid was vital to enhancing the precision and accuracy of needle insertion at both slow and fast insertion speeds; additionally, at slow speed the tight, thick grid improved needle precision and accuracy. Conclusions: In summary, the tight grid is important regardless of speed. The grid design, which shows the capability to reduce needle deflection in brachytherapy procedures, can potentially be implemented in the brachytherapy procedure.
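A 2³ factorial design runs all eight combinations of three two-level factors; a factor's main effect is the mean response at its high level minus the mean at its low level. A sketch of that computation with hypothetical deflection data (the factor names follow the paper, but the numbers are invented):

```python
from itertools import product

def main_effects(levels, response):
    """Main effects in a two-level full factorial design: for each factor,
    mean(response at +1) - mean(response at -1)."""
    k = len(levels[0])
    effects = []
    for f in range(k):
        hi = [r for lv, r in zip(levels, response) if lv[f] == 1]
        lo = [r for lv, r in zip(levels, response) if lv[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Hypothetical deflections (mm) for the eight runs; factor order:
# (clearance, grid thickness, insertion speed), each coded -1 / +1.
runs = list(product([-1, 1], repeat=3))
y = [1.2, 1.0, 1.1, 0.9, 1.6, 1.4, 1.5, 1.3]
print([round(e, 2) for e in main_effects(runs, y)])
```

In this invented data set, the first factor dominates; an ANOVA as used in the paper would then test which effects are statistically significant.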

  3. Precision ozone vapor pressure measurements

    NASA Technical Reports Server (NTRS)

    Hanson, D.; Mauersberger, K.

    1985-01-01

    The vapor pressure above liquid ozone has been measured with high accuracy over a temperature range of 85 to 95 K. At the boiling point of liquid argon (87.3 K) an ozone vapor pressure of 0.0403 Torr was obtained with an accuracy of ±0.7 percent. A least-squares fit of the data provided the Clausius-Clapeyron equation for liquid ozone; a latent heat of 82.7 cal/g was calculated. The high-precision vapor pressure data are expected to aid research in atmospheric ozone measurements and in many laboratory ozone studies, such as measurements of cross sections and reaction rates.
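A Clausius-Clapeyron fit is linear in 1/T: ln P = A − B/T, with B proportional to the latent heat of vaporization. A sketch of such a least-squares fit on synthetic data (the constants A = 20 and B = 2000 K are illustrative, not the paper's values):

```python
import math

def fit_clausius_clapeyron(T, P):
    """Least-squares fit of ln P = A - B / T (linear in x = 1/T).
    Returns (A, B)."""
    x = [1.0 / t for t in T]
    y = [math.log(p) for p in P]
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    slope = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
             / sum((xi - xm) ** 2 for xi in x))
    return ym - slope * xm, -slope

# Synthetic vapor-pressure data generated from the assumed constants.
T = [85.0, 87.5, 90.0, 92.5, 95.0]
P = [math.exp(20.0 - 2000.0 / t) for t in T]
A, B = fit_clausius_clapeyron(T, P)
print(round(A, 3), round(B, 1))
```

On noise-free data the fit recovers the generating constants; with real measurements, the scatter about the fitted line sets the quoted accuracy.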

  4. Matter power spectrum and the challenge of percent accuracy

    NASA Astrophysics Data System (ADS)

    Schneider, Aurel; Teyssier, Romain; Potter, Doug; Stadel, Joachim; Onions, Julian; Reed, Darren S.; Smith, Robert E.; Springel, Volker; Pearce, Frazer R.; Scoccimarro, Roman

    2016-04-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying the main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k ≤ 1 h Mpc⁻¹ and to within three percent at k ≤ 10 h Mpc⁻¹. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k ≤ 2 h Mpc⁻¹. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code, Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h⁻¹ Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of Mp = 10⁹ h⁻¹ M⊙ is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.
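The agreement statements above amount to checking the maximum fractional deviation of one code's power spectrum from a reference below a chosen wavenumber. A toy sketch with synthetic spectra (illustrative only, not the paper's measurements):

```python
def max_fractional_deviation(kvals, pk_ref, pk_test, kmax):
    """Largest |P_test / P_ref - 1| over wavenumbers k <= kmax."""
    return max(abs(pt / pr - 1.0)
               for k, pr, pt in zip(kvals, pk_ref, pk_test) if k <= kmax)

# Synthetic spectra: a power law, and a copy with a uniform 0.8% excess,
# standing in for the output of two different N-body codes.
kvals = [0.1 * i for i in range(1, 101)]   # k from 0.1 to 10 (h/Mpc)
pk_ref = [k ** -1.5 for k in kvals]
pk_test = [1.008 * p for p in pk_ref]
print(round(max_fractional_deviation(kvals, pk_ref, pk_test, kmax=1.0), 4))
```

This pair would pass a one-percent criterion at k ≤ 1; the real comparison must also separate such code-to-code differences from finite-box and resolution effects.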

  5. Global positioning system measurements for crustal deformation: Precision and accuracy

    USGS Publications Warehouse

    Prescott, W.H.; Davis, J.L.; Svarc, J.L.

    1989-01-01

    Analysis of 27 repeated observations of Global Positioning System (GPS) position-difference vectors, up to 11 kilometers in length, indicates that the standard deviation of the measurements is 4 millimeters for the north component, 6 millimeters for the east component, and 10 to 20 millimeters for the vertical component. The uncertainty grows slowly with increasing vector length. At 225 kilometers, the standard deviation of the measurement is 6, 11, and 40 millimeters for the north, east, and up components, respectively. Measurements with GPS and Geodolite, an electromagnetic distance-measuring system, over distances of 10 to 40 kilometers agree within 0.2 part per million. Measurements with GPS and very long baseline interferometry of the 225-kilometer vector agree within 0.05 part per million.
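The "parts per million" figures above are simply millimetres of disagreement per kilometre of baseline (1 mm over 1 km is 1 ppm). A one-line sketch, using an illustrative 11 mm difference over the 225 km vector:

```python
def agreement_ppm(diff_mm, baseline_km):
    """Fractional agreement in parts per million: 1 mm over 1 km is 1 ppm."""
    return diff_mm / baseline_km

# An 11 mm difference over the 225 km vector is about 0.05 ppm,
# the level of GPS/VLBI agreement quoted above.
print(round(agreement_ppm(11.0, 225.0), 3))
```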

  6. Tomography & Geochemistry: Precision, Repeatability, Accuracy and Joint Interpretations

    NASA Astrophysics Data System (ADS)

    Foulger, G. R.; Panza, G. F.; Artemieva, I. M.; Bastow, I. D.; Cammarano, F.; Doglioni, C.; Evans, J. R.; Hamilton, W. B.; Julian, B. R.; Lustrino, M.; Thybo, H.; Yanovskaya, T. B.

    2015-12-01

    Seismic tomography can reveal the spatial seismic structure of the mantle, but has little ability to constrain composition, phase or temperature. In contrast, petrology and geochemistry can give insights into mantle composition, but have severely limited spatial control on magma sources. For these reasons, results from these disciplines are often interpreted jointly. Nevertheless, the limitations of each method are often underestimated, and underlying assumptions de-emphasized. Examples of the limitations of seismic tomography include its limited ability to image the three-dimensional structure of the mantle in detail or to determine the strengths of anomalies with certainty. Despite this, published seismic anomaly strengths are often unjustifiably translated directly into physical parameters. Tomography yields seismological parameters such as wave speed and attenuation, not geological or thermal parameters. Much of the mantle is poorly sampled by seismic waves, and resolution- and error-assessment methods do not express the true uncertainties. These and other problems have been highlighted in recent years as a result of multiple tomography experiments performed by different research groups in areas of particular interest, e.g., Yellowstone. The repeatability of the results is often poorer than the calculated resolutions. The ability of geochemistry and petrology to identify magma sources and locations is typically overestimated. These methods have little ability to determine source depths. Models that assign geochemical signatures to specific layers in the mantle, including the transition zone, the lower mantle, and the core-mantle boundary, are based on speculative models that cannot be verified and for which viable, less-astonishing alternatives are available.
Our knowledge of the size, distribution and location of protoliths is poor, as is our knowledge of the metasomatism of magma sources, the nature of the partial-melting and melt-extraction process, the mixing of disparate melts, and the re-assimilation of crust and mantle lithosphere by rising melt. Interpretations of seismic tomography, of petrologic and geochemical observations, and of all three together are ambiguous, and this needs to be emphasized more when presenting interpretations so that the viability of the models can be assessed more reliably.

  7. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

    The study compared three measures of foliar injury: (i) mean percent leaf area injured of all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variations was compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 µL L^-1 SO2 or 0.3 µL L^-1 ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with that of all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, the day-to-day variation accounted for <18% of the total. Reader bias in assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R^2 = 0.89-0.91) of the visual assessments against the grid assessments.
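
    The per-reader regression adjustment described above can be sketched as follows: regress one reader's visual scores on the unbiased grid scores, then invert the fit to correct new visual readings. All data values here are invented for illustration:

```python
# Hypothetical paired assessments: unbiased grid scores vs one reader's
# (systematically biased) visual scores, in % leaf area injured.
grid   = [5.0, 10.0, 20.0, 40.0, 60.0, 80.0]
visual = [8.5, 14.0, 25.0, 47.0, 69.0, 91.0]

# Ordinary least-squares fit: visual = intercept + slope * grid.
n = len(grid)
mx = sum(grid) / n
my = sum(visual) / n
sxx = sum((x - mx) ** 2 for x in grid)
sxy = sum((x - mx) * (y - my) for x, y in zip(grid, visual))
slope = sxy / sxx
intercept = my - slope * mx

def adjust(visual_score):
    """Map a biased visual score back onto the unbiased grid scale."""
    return (visual_score - intercept) / slope
```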

  8. Precision and accuracy of decay constants and age standards

    NASA Astrophysics Data System (ADS)

    Villa, I. M.

    2011-12-01

    40 years of round-robin experiments with age standards teach us that systematic errors must be present in at least N-1 labs if participants provide N mutually incompatible data. In EarthTime, the U-Pb community has produced and distributed synthetic solutions with full metrological traceability. Collector linearity is routinely calibrated under variable conditions (e.g. [1]). Instrumental mass fractionation is measured in-run with double spikes (e.g. 233U-236U). Parent-daughter ratios are metrologically traceable, so the full uncertainty budget of a U-Pb age should coincide with interlaboratory uncertainty. TIMS round-robin experiments indeed show a decrease of N towards the ideal value of 1. Comparing 235U-207Pb with 238U-206Pb ages (e.g. [2]) has resulted in a credible re-evaluation of the 235U decay constant, with lower uncertainty than gamma counting. U-Pb microbeam techniques reveal the link between petrology, microtextures, microchemistry, and the isotope record, but do not achieve the low uncertainty of TIMS. In the K-Ar community, N is large; interlaboratory bias is > 10 times self-assessed uncertainty. Systematic errors may have analytical and petrological reasons. Metrological traceability is not yet implemented (substantial advance may come from work in progress, e.g. [7]). One of the worst problems is collector stability and linearity. Using electron multipliers (EM) instead of Faraday buckets (FB) reduces both dynamic range and collector linearity. Mass spectrometer backgrounds are never zero; the extent as well as the predictability of their variability must be propagated into the uncertainty evaluation. The high isotope ratio of atmospheric Ar requires a large dynamic range over which linearity must be demonstrated under all analytical conditions to correctly estimate mass fractionation. The only assessment of EM linearity in Ar analyses [3] points out many fundamental problems; the onus of proof is on every laboratory claiming low uncertainties.
Finally, sample size reduction is often associated with reducing clean-up time to increase the sample/blank ratio; this may be self-defeating, as "dry blanks" [4] do not represent either the isotopic composition or the amount of Ar released by the sample chamber when exposed to unpurified sample gas. Single grains enhance background and purification problems relative to large sample sizes measured on FB. Petrologically, many natural "standards" are not ideal (e.g. MMhb1 [5], B4M [6]), as their original distributors never conceived petrology as the decisive control on isotope retention. Comparing ever smaller aliquots of unequilibrated minerals causes ever larger age variations. Metrologically traceable synthetic isotope mixtures still lie in the future. Petrological non-ideality of natural standards does not allow a metrological uncertainty budget. Collector behavior, on the contrary, does. Its quantification will, by definition, make true intralaboratory uncertainty greater than or equal to interlaboratory bias. [1] Chen J, Wasserburg GJ, 1981. Analyt Chem 53, 2060-2067 [2] Mattinson JM, 2010. Chem Geol 275, 186-198 [3] Turrin B et al, 2010. G-cubed, 11, Q0AA09 [4] Baur H, 1975. PhD thesis, ETH Zürich, No. 6596 [5] Villa IM et al, 1996. Contrib Mineral Petrol 126, 67-80 [6] Villa IM, Heri AR, 2010. AGU abstract V31A-2296 [7] Morgan LE et al, in press. G-cubed, 2011GC003719

  9. Quality, precision and accuracy of the Maximum No. 40 anemometer

    SciTech Connect

    Obermeir, J.; Blittersdorf, D.

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  10. Global positioning system measurements for crustal deformation: precision and accuracy.

    PubMed

    Prescott, W H; Davis, J L; Svarc, J L

    1989-06-16

    Analysis of 27 repeated observations of Global Positioning System (GPS) position-difference vectors, up to 11 kilometers in length, indicates that the standard deviation of the measurements is 4 millimeters for the north component, 6 millimeters for the east component, and 10 to 20 millimeters for the vertical component. The uncertainty grows slowly with increasing vector length. At 225 kilometers, the standard deviation of the measurement is 6, 11, and 40 millimeters for the north, east, and up components, respectively. Measurements with GPS and Geodolite, an electromagnetic distance-measuring system, over distances of 10 to 40 kilometers agree within 0.2 part per million. Measurements with GPS and very long baseline interferometry of the 225-kilometer vector agree within 0.05 part per million. PMID:17820661

  11. Mixed-Precision Spectral Deferred Correction: Preprint

    SciTech Connect

    Grout, Ray W. S.

    2015-09-02

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6], which require double-precision accuracy but are performance-limited by the cost of data motion.
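
    The mixed-precision idea can be illustrated with a toy analogue (not the SDC algorithm itself): iterative refinement in which the cheap corrections are computed in emulated low precision while the residual and the accumulated solution stay in double precision.

```python
def low(x, sig=3):
    """Emulate a reduced-precision evaluation by rounding to `sig` significant digits."""
    return float(f"{x:.{sig}g}")

# Solve a*x = b (exact x = 1/3) by iterative refinement.
a, b = 3.0, 1.0
x = low(b / a)             # initial solve in "low precision"

for _ in range(10):
    r = b - a * x          # residual evaluated in full (double) precision
    x = x + low(r / a)     # correction computed in "low precision"

error = abs(x - 1.0 / 3.0)
```

    Each sweep recovers roughly three more significant digits, so the final answer reaches double-precision accuracy even though every correction was computed coarsely; this mirrors the rationale for doing some SDC sweeps in reduced precision.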

  12. Arrival Metering Precision Study

    NASA Technical Reports Server (NTRS)

    Prevot, Thomas; Mercer, Joey; Homola, Jeffrey; Hunt, Sarah; Gomez, Ashley; Bienert, Nancy; Omar, Faisal; Kraut, Joshua; Brasil, Connie; Wu, Minghong G.

    2015-01-01

    This paper describes the background, method, and results of the Arrival Metering Precision Study (AMPS) conducted in the Airspace Operations Laboratory at NASA Ames Research Center in May 2014. The simulation study measured delivery accuracy, flight efficiency, controller workload, and acceptability of time-based metering operations to a meter fix at the terminal area boundary for different resolution levels of metering delay times displayed to the air traffic controllers and different levels of airspeed information made available to the Time-Based Flow Management (TBFM) system computing the delay. The results show that the resolution of the delay countdown timer (DCT) on the controllers' display has a significant impact on the delivery accuracy at the meter fix. Using the 10-second rounded and 1-minute rounded DCT resolutions resulted in more accurate delivery than the 1-minute truncated resolution, and these were preferred by the controllers. Using the speeds the controllers entered into the fourth line of the data tag to update the delay computation in TBFM in high- and low-altitude sectors increased air traffic control efficiency and reduced fuel burn for arriving aircraft during time-based metering.
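
    The three DCT display resolutions compared above can be sketched as simple functions of a delay in seconds. This is a hypothetical illustration of the rounding schemes, not the study's TBFM implementation:

```python
def dct_minute_truncated(delay_s):
    """Delay displayed truncated to the whole minute."""
    return (delay_s // 60) * 60

def dct_minute_rounded(delay_s):
    """Delay displayed rounded to the nearest minute."""
    return round(delay_s / 60) * 60

def dct_ten_second_rounded(delay_s):
    """Delay displayed rounded to the nearest 10 seconds."""
    return round(delay_s / 10) * 10

# A 1 min 35 s delay: truncation hides 35 s of delay, rounding does not.
delay = 95
```

    Note that Python's `round` uses round-half-to-even; exact tie behavior is incidental to the illustration.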

  13. Meteorite Atmospheric Entry Reproduced in Plasmatron

    NASA Astrophysics Data System (ADS)

    Pittarello, L.; McKibbin, S.; Goderis, S.; Soens, B.; Bariselli, F.; Barros Dias, B. R.; Zavalan, F. L.; Magin, T.; Claeys, Ph.

    2016-08-01

    The Plasmatron facility allows experimental conditions that reproduce the atmospheric entry of meteorites. Tests on basalt, as a meteorite analogue, have been performed. Preliminary results have highlighted melting and evaporation effects.

  14. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    LRO definitive and predictive accuracy requirements were easily met in the nominal mission orbit using the LP150Q lunar gravity model. Accuracy of the LP150Q model is poorer in the extended-mission elliptical orbit. Later lunar gravity models, in particular GSFC-GRAIL-270, improve OD accuracy in the extended mission. Implementation of a constrained plane when the orbit is within 45 degrees of the Earth-Moon line improves cross-track accuracy. Prediction accuracy is still challenged during full-Sun periods due to coarse spacecraft area modeling: implementation of a multi-plate area model with definitive attitude input can eliminate prediction violations, and the FDF is evaluating analytic and predicted attitude modeling to improve full-Sun prediction accuracy. Comparison of the FDF ephemeris file to high-precision ephemeris files provides gross confirmation that overlap comparisons properly assess orbit accuracy.

  15. A High Precision Method for Quantitative Measurements of Reactive Oxygen Species in Frozen Biopsies

    PubMed Central

    Lindgren, Mikael; Gustafsson, Håkan

    2014-01-01

    Objective An electron paramagnetic resonance (EPR) technique using the spin probe cyclic hydroxylamine 1-hydroxy-3-methoxycarbonyl-2,2,5,5-tetramethylpyrrolidine (CMH) was introduced as a versatile method for high precision quantification of reactive oxygen species, including the superoxide radical, in frozen biological samples such as cell suspensions, blood or biopsies. Materials and Methods Loss of measurement precision and accuracy due to variations in sample size and shape was minimized by assembling the sample in a well-defined volume. Measurement was carried out at low temperature (150 K) using a nitrogen flow Dewar. The signal intensity was measured from the EPR 1st-derivative amplitude and related to a sample, 3-carboxy-proxyl (CP•), of known spin concentration. Results The absolute spin concentration could be quantified with a precision and accuracy better than ±10 µM (k = 1). The spin concentration of samples stored at −80°C could be reproduced after 6 months of storage well within the same error estimate. Conclusion The absolute spin concentration in wet biological samples such as biopsies, water solutions and cell cultures could be quantified with higher precision and accuracy than normally achievable using common techniques such as flat cells, tissue cells and various capillary tubes. In addition, biological samples could be collected and stored for future incubation with spin probe, and further stored up to at least six months before EPR analysis, without loss of signal intensity. This opens up the possibility of storing and transporting incubated biological samples with known accuracy of the spin concentration over time. PMID:24603936
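
    The relative quantification described above follows from the ratio of EPR first-derivative amplitudes against a reference of known spin concentration. A minimal sketch with invented numbers (not the paper's measurements):

```python
# Known 3-carboxy-proxyl (CP*) standard and its measured EPR amplitude.
ref_concentration_uM = 100.0
ref_amplitude = 2500.0

# Amplitude measured for the unknown sample under identical conditions.
sample_amplitude = 1750.0

# Unknown concentration scales linearly with the amplitude ratio.
sample_concentration_uM = ref_concentration_uM * sample_amplitude / ref_amplitude
```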

  16. Accuracy and Reliability of a New Tennis Ball Machine

    PubMed Central

    Brechbuhl, Cyril; Millet, Grégoire; Schmitt, Laurent

    2016-01-01

    The aim was to evaluate the reliability of a newly developed ball machine named 'Hightof' on the field and to assess its accuracy. The experiment was conducted in collaboration with the 'Hawk-Eye' technology. The accuracy and reliability of this ball machine were assessed during an incremental test, with 1 min of exercise and 30 s of recovery, in which the frequency of the balls increased from 10 to 30 balls·min-1. The initial frequency was 10 and increased by 2 until 22, then by 1 until 30 balls·min-1. The reference points for the impact were 8.39 m from the net and 2.70 m from the lateral line for the right side and 2.83 m for the left side. The precision of the machine was similar on the right and left sides (0.63 ± 0.39 vs 0.63 ± 0.34 m). The distances to the reference point were 0.52 ± 0.42, 0.26 ± 0.19, 0.52 ± 0.37, and 0.28 ± 0.19 m for the Y-right, X-right, Y-left, and X-left impacts. The precision was constant and did not vary with the intensity (i.e., ball frequency). The ball velocity was 86.3 ± 1.5 and 86.5 ± 1.3 km·h-1 for the right and the left side, respectively. The coefficient of variation for the velocity ranged between 1 and 2% in all stages (ball frequency ranging from 10 to 30 balls·min-1). Conclusion: both the accuracy and the reliability of this new ball machine appear satisfactory for field testing and training. Key points The reliability and accuracy of a new ball machine named 'Hightof' were assessed. The impact point was reproducible and similar on the right and left sides (±0.63 m). The precision was constant and did not vary with the intensity (i.e., ball frequency). The coefficient of variation of the ball velocity ranged between 1 and 2% in all stages (ball frequency ranging from 10 to 30 balls·min-1). PMID:27274663
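
    The coefficient of variation used above to report 1-2% velocity reliability is the sample standard deviation expressed as a percentage of the mean. A sketch with invented velocities for one stage:

```python
import statistics

# Hypothetical ball velocities (km/h) measured within one test stage.
velocities_kmh = [86.3, 85.9, 87.1, 86.6, 85.8, 86.9]

mean_v = statistics.mean(velocities_kmh)
sd_v = statistics.stdev(velocities_kmh)      # sample standard deviation
cv_percent = 100.0 * sd_v / mean_v
```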

  17. Precise Fabrication of Electromagnetic-Levitation Coils

    NASA Technical Reports Server (NTRS)

    Ethridge, E.; Curreri, P.; Theiss, J.; Abbaschian, G.

    1985-01-01

    Winding copper tubing on jig ensures reproducible performance. Sequence of steps ensures consistent fabrication of levitation-and-melting coils. New method enables technician to produce eight coils per day, 95 percent of them acceptable. Method employs precise step-by-step procedure on specially designed wrapping and winding jig.

  18. Explorations in statistics: statistical facets of reproducibility.

    PubMed

    Curran-Everett, Douglas

    2016-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eleventh installment of Explorations in Statistics explores statistical facets of reproducibility. If we obtain an experimental result that is scientifically meaningful and statistically unusual, we would like to know that our result reflects a general biological phenomenon that another researcher could reproduce if (s)he repeated our experiment. But more often than not, we may learn this researcher cannot replicate our result. The National Institutes of Health and the Federation of American Societies for Experimental Biology have created training modules and outlined strategies to help improve the reproducibility of research. These particular approaches are necessary, but they are not sufficient. The principles of hypothesis testing and estimation are inherent to the notion of reproducibility in science. If we want to improve the reproducibility of our research, then we need to rethink how we apply fundamental concepts of statistics to our science.

  19. Precision injection molding of freeform optics

    NASA Astrophysics Data System (ADS)

    Fang, Fengzhou; Zhang, Nan; Zhang, Xiaodong

    2016-08-01

    Precision injection molding is the most efficient mass production technology for manufacturing plastic optics. Applications of plastic optics in the fields of imaging, illumination, and concentration demonstrate a variety of complex surface forms, developing from conventional plano and spherical surfaces to aspheric and freeform surfaces. These require high optical quality with high form accuracy and low residual stresses, which challenges both the machining of optical tool inserts and the precision injection molding process. The present paper reviews recent progress in mold tool machining and precision injection molding, with more emphasis on the latter. The challenges and future development trends are also discussed.

  20. Is the Determination of Specific IgE against Components Using ISAC 112 a Reproducible Technique?

    PubMed Central

    Martínez-Aranguren, Rubén; Lizaso, María T.; Goikoetxea, María J.; García, Blanca E.; Cabrera-Freitag, Paula; Trellez, Oswaldo; Sanz, María L.

    2014-01-01

    Background The ImmunoCAP ISAC 112 is a fluoro-immunoassay that allows detection of specific IgE to 112 molecular components from 51 allergenic sources. We studied the reliability of this technique intra- and inter-assay, as well as inter-batch and inter-laboratory. Methods Twenty samples were studied: nineteen sera from polysensitized allergic patients, and the technique calibrator provided by the manufacturer (CTR02). We measured the sIgE from CTR02 and three patients' sera ten times in the same and in different assays. Furthermore, all samples were tested in two laboratories and with two batches of the ISAC kit. To evaluate the accuracy of ISAC 112, we contrasted the determinations of the CTR02 calibrator with their expected values using Student's t-test. To analyse the precision, we calculated the coefficient of variation (CV) of the 15 allergens that generate the calibration curve, and to analyse the repeatability and the reproducibility, we calculated the intraclass correlation coefficient (ICC) for each allergen. Results The results obtained for CTR02 were similar to those expected for 7 of the 15 allergens that generate the calibration curve, whereas for 8 allergens the results showed significant differences. The mean CV obtained in the CTR02 determinations was 9.4%, and the variability of sera from patients was 22.9%. The agreement in the intra- and inter-assay analysis was very good for 94 allergens and good for one. In the inter-batch analysis, we obtained a very good agreement for 82 allergens, good for 14, moderate for 5, poor for one, and bad for one allergen. In the inter-laboratory analysis, we obtained a very good agreement for 73 allergens, good for 22, moderate for 6, and poor for two allergens. Conclusion The allergen microarray immunoassay ISAC 112 is a repeatable and reproducible in vitro diagnostic tool for determination of sIgE beyond the individual laboratory. PMID:24516646
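
    The two precision metrics named above, CV and ICC, can be sketched as follows. This is a one-way ICC on invented sIgE values (rows are sera, columns are repeated determinations), not the paper's data or its exact ICC variant:

```python
import statistics

measurements = [
    [12.1, 12.4, 11.9],   # serum 1, three repeated determinations
    [ 3.2,  3.0,  3.3],   # serum 2
    [25.6, 26.1, 25.2],   # serum 3
    [ 0.8,  0.9,  0.7],   # serum 4
]

n = len(measurements)          # number of sera (targets)
k = len(measurements[0])       # repeats per serum

grand_mean = statistics.mean(v for row in measurements for v in row)
row_means = [statistics.mean(row) for row in measurements]

# One-way ANOVA mean squares: between-target and within-target.
ms_between = k * sum((m - grand_mean) ** 2 for m in row_means) / (n - 1)
ms_within = sum(
    (v - m) ** 2 for row, m in zip(measurements, row_means) for v in row
) / (n * (k - 1))

# ICC(1): proportion of total variance attributable to between-serum differences.
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# CV of one serum's repeats, as a percentage of its mean.
cv_serum1 = 100.0 * statistics.stdev(measurements[0]) / statistics.mean(measurements[0])
```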

  1. Precision powder feeder

    DOEpatents

    Schlienger, M. Eric; Schmale, David T.; Oliver, Michael S.

    2001-07-10

    A new class of precision powder feeders is disclosed. These feeders provide a precision flow of a wide range of powdered materials, while remaining robust against jamming or damage. These feeders can be precisely controlled by feedback mechanisms.

  2. Fly wing vein patterns have spatial reproducibility of a single cell

    PubMed Central

    Abouchar, Laurent; Petkova, Mariela D.; Steinhardt, Cynthia R.; Gregor, Thomas

    2014-01-01

    Developmental processes in multicellular organisms occur in fluctuating environments and are prone to noise, yet they produce complex patterns with astonishing reproducibility. We measure the left–right and inter-individual precision of bilaterally symmetric fly wings across the natural range of genetic and environmental conditions and find that wing vein patterns are specified with identical spatial precision and are reproducible to within a single-cell width. The early fly embryo operates at a similar degree of reproducibility, suggesting that the overall spatial precision of morphogenesis in Drosophila performs at the single-cell level. Could development be operating at the physical limit of what a biological system can achieve? PMID:24942847

  3. High-accuracy EUV reflectometer

    NASA Astrophysics Data System (ADS)

    Hinze, U.; Fokoua, M.; Chichkov, B.

    2007-03-01

    Developers and users of EUV optics need precise tools for the characterization of their products. Often a measurement accuracy of 0.1% or better is desired to detect and study slow-acting aging effects or degradation by organic contaminants. To achieve a measurement accuracy of 0.1%, an EUV source is required which provides excellent long-term stability, namely power stability, spatial stability and spectral stability. Naturally, it should be free of debris. An EUV source particularly suitable for this task is an advanced electron-based EUV tube. This EUV source provides an output of up to 300 μW at 13.5 nm. Reflectometers benefit from the excellent long-term stability of this tool. We design and set up different reflectometers using EUV tubes for the precise characterisation of EUV optics, such as debris samples, filters, multilayer mirrors, grazing incidence optics, collectors and masks. Reflectivity measurements from grazing incidence to near normal incidence as well as transmission studies were realised at a precision of down to 0.1%. The reflectometers are computer-controlled and allow varying and scanning all important parameters online. The concept of a sample reflectometer is discussed and results are presented. The devices can be purchased from the Laser Zentrum Hannover e.V.

  4. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The accuracy of ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface was found to be: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.

  5. Reproducible research in vadose zone sciences

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A significant portion of present-day soil and Earth science research is computational, involving complex data analysis pipelines, advanced mathematical and statistical models, and sophisticated computer codes. Opportunities for scientific progress are greatly diminished if reproducing and building o...

  6. Thou Shalt Be Reproducible! A Technology Perspective

    PubMed Central

    Mair, Patrick

    2016-01-01

    This article elaborates on reproducibility in psychology from a technological viewpoint. Modern open source computational environments that foster reproducibility throughout the whole research life cycle, and to which emerging psychology researchers should be sensitized, are shown and explained. First, data archiving platforms that make datasets publicly available are presented. Second, R is advocated as the data-analytic lingua franca in psychology for achieving reproducible statistical analysis. Third, dynamic report generation environments for writing reproducible manuscripts that integrate text, data analysis, and statistical outputs such as figures and tables in a single document are described. Supplementary materials are provided in order to get the reader started with these technologies. PMID:27471486

  7. Reproducibility and uncertainty of wastewater turbidity measurements.

    PubMed

    Joannis, C; Ruban, G; Gromaire, M-C; Chebbo, G; Bertrand-Krajewski, J-L; Joannis, C; Ruban, G

    2008-01-01

    Turbidity monitoring is a valuable tool for operating sewer systems, but it is often considered as a somewhat tricky parameter for assessing water quality, because measured values depend on the model of sensor, and even on the operator. This paper details the main components of the uncertainty in turbidity measurements with a special focus on reproducibility, and provides guidelines for improving the reproducibility of measurements in wastewater relying on proper calibration procedures. Calibration appears to be the main source of uncertainties, and proper procedures must account for uncertainties in standard solutions as well as non linearity of the calibration curve. With such procedures, uncertainty and reproducibility of field measurement can be kept lower than 5% or 25 FAU. On the other hand, reproducibility has no meaning if different measuring principles (attenuation vs. nephelometry) or very different wavelengths are used.
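
    The calibration concern raised above, a nonlinear calibration curve relating raw sensor response to standard turbidity values, can be sketched by interpolating between calibration standards instead of assuming a single straight line. The standard values here are invented for illustration:

```python
# Certified turbidity standards (FAU) and the sensor's (nonlinear) raw response.
standards_fau = [0.0, 100.0, 400.0, 1000.0]
raw_readings  = [2.0, 95.0, 360.0, 820.0]

def calibrate(raw):
    """Map a raw reading onto the FAU scale by piecewise-linear interpolation,
    clamped to the calibrated range."""
    if raw <= raw_readings[0]:
        return standards_fau[0]
    for (r0, f0), (r1, f1) in zip(
        zip(raw_readings, standards_fau),
        zip(raw_readings[1:], standards_fau[1:]),
    ):
        if raw <= r1:
            return f0 + (f1 - f0) * (raw - r0) / (r1 - r0)
    return standards_fau[-1]

mid = calibrate(227.5)   # a reading halfway between the 95 and 360 standards
```

    Propagating the uncertainty of each standard through such a curve, rather than assuming an exact linear fit, is in the spirit of the calibration procedures the abstract recommends.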

  8. Thou Shalt Be Reproducible! A Technology Perspective.

    PubMed

    Mair, Patrick

    2016-01-01

    This article elaborates on reproducibility in psychology from a technological viewpoint. Modern open source computational environments that foster reproducibility throughout the whole research life cycle, and to which emerging psychology researchers should be sensitized, are shown and explained. First, data archiving platforms that make datasets publicly available are presented. Second, R is advocated as the data-analytic lingua franca in psychology for achieving reproducible statistical analysis. Third, dynamic report generation environments for writing reproducible manuscripts that integrate text, data analysis, and statistical outputs such as figures and tables in a single document are described. Supplementary materials are provided in order to get the reader started with these technologies. PMID:27471486

  9. Reproducibility of UAV-based photogrammetric surface models

    NASA Astrophysics Data System (ADS)

    Anders, Niels; Smith, Mike; Cammeraat, Erik; Keesstra, Saskia

    2016-04-01

    Soil erosion, rapid geomorphological change and vegetation degradation are major threats to the human and natural environment in many regions. Unmanned Aerial Vehicles (UAVs) and Structure-from-Motion (SfM) photogrammetry are invaluable tools for the collection of highly detailed aerial imagery and subsequent low cost production of 3D landscapes for an assessment of landscape change. Despite the widespread use of UAVs for image acquisition in monitoring applications, the reproducibility of UAV data products has not been explored in detail. This paper investigates this reproducibility by comparing the surface models and orthophotos derived from different UAV flights that vary in flight direction and altitude. The study area is located near Lorca, Murcia, SE Spain, which is a semi-arid medium-relief locale. The area comprises terraced agricultural fields that have been abandoned for about 40 years and have suffered subsequent damage through piping and gully erosion. In this work we focused upon variation in cell size, vertical and horizontal accuracy, and horizontal positioning of recognizable landscape features. The results suggest that flight altitude has a significant impact on reconstructed point density and related cell size, whilst flight direction affects the spatial distribution of vertical accuracy. The horizontal positioning of landscape features is relatively consistent between the different flights. We conclude that UAV data products are suitable for monitoring campaigns for land cover purposes or geomorphological mapping, but special care is required when used for monitoring changes in elevation.
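
    The vertical repeatability check described above can be sketched by differencing two gridded surface models from separate flights and summarizing the disagreement as a mean offset and RMSE. The tiny 3x3 grids are invented for illustration:

```python
import math

# Hypothetical gridded elevations (m) of the same terrain from two flights.
dem_flight1 = [
    [102.4, 102.6, 103.1],
    [101.9, 102.2, 102.8],
    [101.5, 101.8, 102.3],
]
dem_flight2 = [
    [102.5, 102.7, 103.0],
    [102.0, 102.1, 102.9],
    [101.4, 101.9, 102.4],
]

diffs = [
    b - a
    for row_a, row_b in zip(dem_flight1, dem_flight2)
    for a, b in zip(row_a, row_b)
]
mean_offset = sum(diffs) / len(diffs)                       # systematic bias
rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))    # overall vertical scatter
```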

  10. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders with identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without reliance on combusting naturally occurring materials, thereby improving analytical accuracy.

  11. Transparency and Reproducibility of Observational Cohort Studies Using Large Healthcare Databases.

    PubMed

    Wang, S V; Verpillat, P; Rassen, J A; Patrick, A; Garry, E M; Bartels, D B

    2016-03-01

    The scientific community and decision-makers are increasingly concerned about transparency and reproducibility of epidemiologic studies using longitudinal healthcare databases. We explored the extent to which published pharmacoepidemiologic studies using commercially available databases could be reproduced by other investigators. We identified a nonsystematic sample of 38 descriptive or comparative safety/effectiveness cohort studies. Seven studies were excluded from reproduction, five because of violation of fundamental design principles, and two because of grossly inadequate reporting. In the remaining studies, >1,000 patient characteristics and measures of association were reproduced with a high degree of accuracy (median differences between original and reproduction were <2% for patient characteristics and <0.1 for measures of association). An essential component of transparent and reproducible research with healthcare databases is more complete reporting of study implementation. Once reproducibility is achieved, the conversation can be elevated to assess whether suboptimal design choices led to avoidable bias and whether findings are replicable in other data sources. PMID:26690726

  13. The Challenge of Reproducibility and Accuracy in Nutrition Research: Resources and Pitfalls.

    PubMed

    Sorkin, Barbara C; Kuszak, Adam J; Williamson, John S; Hopp, D Craig; Betz, Joseph M

    2016-03-01

    Inconsistent and contradictory results from nutrition studies conducted by different investigators continue to emerge, in part because of the inherent variability of natural products, as well as the unknown and therefore uncontrolled variables in study populations and experimental designs. Given these challenges inherent in nutrition research, it is critical for the progress of the field that researchers strive to minimize variability within studies and enhance comparability between studies by optimizing the characterization, control, and reporting of products, reagents, and model systems used, as well as the rigor and reporting of experimental designs, protocols, and data analysis. Here we describe some recent developments relevant to research on plant-derived products used in nutrition research, highlight some resources for optimizing the characterization and reporting of research using these products, and describe some of the pitfalls that may be avoided by adherence to these recommendations.

  14. Assessing the reproducibility of discriminant function analyses.

    PubMed

    Andrew, Rose L; Albert, Arianne Y K; Renaut, Sebastien; Rennison, Diana J; Bock, Dan G; Vines, Tim

    2015-01-01

    Data are the foundation of empirical research, yet all too often the datasets underlying published papers are unavailable, incorrect, or poorly curated. This is a serious issue, because future researchers are then unable to validate published results or reuse data to explore new ideas and hypotheses. Even if data files are securely stored and accessible, they must also be accompanied by accurate labels and identifiers. To assess how often problems with metadata or data curation affect the reproducibility of published results, we attempted to reproduce Discriminant Function Analyses (DFAs) from the field of organismal biology. DFA is a commonly used statistical analysis that has changed little since its inception almost eight decades ago, and therefore provides an opportunity to test reproducibility among datasets of varying ages. Out of 100 papers we initially surveyed, fourteen were excluded because they did not present the common types of quantitative result from their DFA or gave insufficient details of their DFA. Of the remaining 86 datasets, there were 15 cases for which we were unable to confidently relate the dataset we received to the one used in the published analysis. The reasons ranged from incomprehensible or absent variable labels, the DFA being performed on an unspecified subset of the data, or the dataset we received being incomplete. We focused on reproducing three common summary statistics from DFAs: the percent variance explained, the percentage correctly assigned and the largest discriminant function coefficient. The reproducibility of the first two was fairly high (20 of 26, and 44 of 60 datasets, respectively), whereas our success rate with the discriminant function coefficients was lower (15 of 26 datasets). When considering all three summary statistics, we were able to completely reproduce 46 (65%) of 71 datasets. While our results show that a majority of studies are reproducible, they highlight the fact that many studies still are not the
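Two of the three summary statistics targeted above are simple to recompute once a curated dataset and its DFA output are in hand. A sketch with invented eigenvalues and group assignments (the reproduction exercise itself, of course, required re-running each published DFA):

```python
def percent_variance_explained(eigenvalues):
    """Share of the discriminating variance captured by each
    discriminant function, from the DFA eigenvalues."""
    total = sum(eigenvalues)
    return [100.0 * ev / total for ev in eigenvalues]

def percent_correctly_assigned(true_groups, predicted_groups):
    """Classification accuracy of the DFA on the analysed specimens."""
    hits = sum(t == p for t, p in zip(true_groups, predicted_groups))
    return 100.0 * hits / len(true_groups)

# Hypothetical DFA with two discriminant functions
pve = percent_variance_explained([3.2, 0.8])      # ~[80.0, 20.0]
pca = percent_correctly_assigned(
    ["A", "A", "B", "B", "B"],
    ["A", "B", "B", "B", "B"])                    # 80.0
```

Either statistic fails to reproduce as soon as the archived dataset differs from the analysed one, which is exactly the curation problem the study documents.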

  16. Precisely! A Writing Exercise for Science and Engineering Classes

    ERIC Educational Resources Information Center

    Reynolds, Julie; Vogel, Steven

    2007-01-01

    While formats and conventions of scientific and technical writing vary from field to field, the transcendent requirement is precision, so that the work can be understood and, if necessary, reproduced. Science teachers undoubtedly tell students about the importance of precision in collecting data and analyzing results; what is less commonly…

  17. The use of imprecise processing to improve accuracy in weather and climate prediction

    SciTech Connect

    Düben, Peter D.; McNamara, Hugh; Palmer, T.N.

    2014-08-15

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and
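The scale-selective use of inexact arithmetic can be illustrated on the Lorenz '96 tendency equation, dX_k/dt = (X_{k+1} − X_{k−2})X_{k−1} − X_k + F. The sketch below emulates a low-precision unit by rounding every intermediate result to IEEE 32-bit; this is a crude stand-in for the paper's emulated stochastic processor and bit-flip fault model, not a reproduction of it:

```python
import struct

def to_float32(x):
    """Round a Python float (64-bit) to IEEE 32-bit via a pack/unpack
    round trip, emulating a low-precision arithmetic unit."""
    return struct.unpack("f", struct.pack("f", x))[0]

def lorenz96_tendency(x, forcing=8.0, low_precision=False):
    """dX_k/dt = (X_{k+1} - X_{k-2}) X_{k-1} - X_k + F, cyclic in k.
    With low_precision=True every intermediate result is truncated
    to 32 bits."""
    n = len(x)
    rnd = to_float32 if low_precision else (lambda v: v)
    out = []
    for k in range(n):
        adv = rnd(rnd(x[(k + 1) % n] - x[k - 2]) * x[k - 1])
        out.append(rnd(adv - x[k] + forcing))
    return out

state = [8.0] * 40
state[19] += 0.01                      # small perturbation of the equilibrium
full = lorenz96_tendency(state)
cheap = lorenz96_tendency(state, low_precision=True)
max_dev = max(abs(a - b) for a, b in zip(full, cheap))  # tiny per-step error
```

The per-step deviation is far below the perturbation amplitude, which is the intuition behind confining inexactness to scales already dominated by parametrisation error.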

  18. High-precision arithmetic in mathematical physics

    DOE PAGES

    Bailey, David H.; Borwein, Jonathan M.

    2015-05-12

    For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation in the context of mathematical physics and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
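The failure mode that motivates higher precision is easy to demonstrate: in an ill-conditioned sum, 64-bit arithmetic silently loses the small term, while software arbitrary precision (Python's decimal module is used here purely as a readily available example) retains it:

```python
from decimal import Decimal, getcontext

# Catastrophic cancellation: at 1e16 the spacing between adjacent
# 64-bit doubles is 2.0, so the added 1.0 is rounded away entirely.
naive = (1e16 + 1.0) - 1e16          # 0.0, not 1.0

getcontext().prec = 50               # 50 significant decimal digits
hp = (Decimal(10) ** 16 + 1) - Decimal(10) ** 16   # exactly 1
```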

  19. Relevance relations for the concept of reproducibility

    PubMed Central

    Atmanspacher, H.; Bezzola Lambert, L.; Folkers, G.; Schubiger, P. A.

    2014-01-01

    The concept of reproducibility is widely considered a cornerstone of scientific methodology. However, recent problems with the reproducibility of empirical results in large-scale systems and in biomedical research have cast doubts on its universal and rigid applicability beyond the so-called basic sciences. Reproducibility is a particularly difficult issue in interdisciplinary work where the results to be reproduced typically refer to different levels of description of the system considered. In such cases, it is mandatory to distinguish between more and less relevant features, attributes or observables of the system, depending on the level at which they are described. For this reason, we propose a scheme for a general ‘relation of relevance’ between the level of complexity at which a system is considered and the granularity of its description. This relation implies relevance criteria for particular selected aspects of a system and its description, which can be operationally implemented by an interlevel relation called ‘contextual emergence’. It yields a formally sound and empirically applicable procedure to translate between descriptive levels and thus construct level-specific criteria for reproducibility in an overall consistent fashion. Relevance relations merged with contextual emergence challenge the old idea of one fundamental ontology from which everything else derives. At the same time, our proposal is specific enough to resist the backlash into a relativist patchwork of unconnected model fragments. PMID:24554574

  20. Reproducibility responsibilities in the HPC arena

    SciTech Connect

    Fahey, Mark R; McLay, Robert

    2014-01-01

    Expecting bit-for-bit reproducibility in the HPC arena is not feasible because of the ever-changing hardware and software. No user's application is an island; it lives in an HPC eco-system that changes over time. Old hardware stops working and even old software won't run on new hardware. Further, software libraries change over time, either in their internals or even their interfaces. So bit-for-bit reproducibility should not be expected. Rather, a reasonable expectation is that results are reproducible within error bounds, or that the answers are close (which is its own debate). To expect a researcher to reproduce their own results or the results of others within some error bounds, there must be enough information to recreate all the details of the experiment. This requires complete documentation of all phases of the researcher's workflow: from code to versioning to programming and runtime environments to publishing of data. This argument is the core statement of the Yale 2009 Declaration on Reproducible Research [1]. Although the HPC ecosystem is often outside the researcher's control, the application code could be built almost identically, and there is a chance for very similar results with only round-off error differences. To achieve complete documentation at every step, the researcher, the computing center, and the funding agencies all have a role. In this thesis, the role of the researcher is expanded upon as compared to the Yale report, and the role of the computing centers is described.

  1. Accurate measurements of dynamics and reproducibility in small genetic networks

    PubMed Central

    Dubuis, Julien O; Samanta, Reba; Gregor, Thomas

    2013-01-01

    Quantification of gene expression has become a central tool for understanding genetic networks. In many systems, the only viable way to measure protein levels is by immunofluorescence, which is notorious for its limited accuracy. Using the early Drosophila embryo as an example, we show that careful identification and control of experimental error allows for highly accurate gene expression measurements. We generated antibodies in different host species, allowing for simultaneous staining of four Drosophila gap genes in individual embryos. Careful error analysis of hundreds of expression profiles reveals that less than ∼20% of the observed embryo-to-embryo fluctuations stem from experimental error. These measurements make it possible to extract not only very accurate mean gene expression profiles but also their naturally occurring fluctuations of biological origin and corresponding cross-correlations. We use this analysis to extract gap gene profile dynamics with ∼1 min accuracy. The combination of these new measurements and analysis techniques reveals a twofold increase in profile reproducibility owing to a collective network dynamics that relays positional accuracy from the maternal gradients to the pair-rule genes. PMID:23340845

  2. Reproducible measurements of MPI performance characteristics.

    SciTech Connect

    Gropp, W.; Lusk, E.

    1999-06-25

    In this paper we describe the difficulties inherent in making accurate, reproducible measurements of message-passing performance. We describe some of the mistakes often made in attempting such measurements and the consequences of such mistakes. We describe mpptest, a suite of performance measurement programs developed at Argonne National Laboratory, that attempts to avoid such mistakes and obtain reproducible measures of MPI performance that can be useful to both MPI implementers and MPI application writers. We include a number of illustrative examples of its use.
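One classic mistake in such measurements is trusting a single timing of a fast operation. The sketch below shows the usual remedy of looping past the timer's resolution and taking the minimum over repeated trials; the function name and toy workload are illustrative and are not mpptest's actual interface:

```python
import time

def reliable_time(op, inner_loops=1000, trials=10):
    """Time `op` the way careful message-passing benchmarks do:
    run it in a loop long enough to dwarf timer resolution, repeat
    the whole measurement several times, and report the minimum,
    which is the least contaminated by OS and scheduling noise."""
    best = float("inf")
    for _ in range(trials):
        t0 = time.perf_counter()
        for _ in range(inner_loops):
            op()
        elapsed = (time.perf_counter() - t0) / inner_loops
        best = min(best, elapsed)
    return best

# Toy "message": copying a 1 kB buffer stands in for a send/receive pair
buf = bytes(1024)
t = reliable_time(lambda: bytearray(buf))   # seconds per operation
```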

  3. The Economics of Reproducibility in Preclinical Research

    PubMed Central

    Freedman, Leonard P.; Cockburn, Iain M.; Simcoe, Timothy S.

    2015-01-01

    Low reproducibility rates within life science research undermine cumulative knowledge production and contribute to both delays and costs of therapeutic drug development. An analysis of past studies indicates that the cumulative (total) prevalence of irreproducible preclinical research exceeds 50%, resulting in approximately US$28,000,000,000 (US$28B)/year spent on preclinical research that is not reproducible—in the United States alone. We outline a framework for solutions and a plan for long-term improvements in reproducibility rates that will help to accelerate the discovery of life-saving therapies and cures. PMID:26057340
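The headline figure is back-of-envelope arithmetic: an irreproducibility rate applied to total spending. Assuming roughly US$56 billion/year of US preclinical research spending and a 50% rate (both rounded here for illustration):

```python
# Back-of-envelope check of the ~US$28B/year figure, using assumed
# round numbers rather than the paper's exact inputs.
us_preclinical_spend = 56e9      # assumed annual US preclinical spending (US$)
irreproducible_rate = 0.50       # cumulative prevalence of irreproducibility
wasted = us_preclinical_spend * irreproducible_rate   # US$28e9/year
```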

  4. An EPID based method for efficient and precise asymmetric jaw alignment quality assurance

    SciTech Connect

    Clews, Luke; Greer, Peter B.

    2009-12-15

    Purpose: The aim of this work was to investigate the use of amorphous silicon electronic portal imaging devices (EPIDs) for regular quality assurance of linear accelerator asymmetric jaw junctioning. Methods: The method uses the beam central axis position on the EPID, measured to subpixel accuracy, found from two EPID images with collimator angles opposed by 180°. Individual zero jaw position ("half-beam blocked") images are then acquired and the jaw position precisely determined for each using penumbra interpolation. The accuracy of determining jaw position with the EPID method was measured by translating a block (simulating a jaw) by known distances, using a translation stage, and then measuring each translation distance with the EPID. To establish the utility of EPID based junction dose measurements, radiographic film measurements of junction dose maxima/minima as a function of jaw gap/overlap were made and compared to EPID measurements. Using the method, the long-term stability of zero jaw positioning was assessed for four linear accelerators over a 1-1.5 yr time period. The stability at nonzero gantry angles was assessed over a shorter time period. Results: The accuracy of determining jaw translations with the method was within 0.14 mm found using the translation stage [standard deviation (SD) of 0.037 mm]. The junction doses measured with the EPID were different from film due to the nonwater equivalent EPID scattering properties and hence different penumbra profile. The doses were approximately linear with gap or overlap, and a correction factor was derived to convert EPID measured junction dose to film measured equivalent. Over a 1 yr period, the zero jaw positions at gantry zero position were highly reproducible with an average SD of 0.07 mm for the 16 collimator jaws examined. However, the average jaw positions ranged from -0.7 to 0.9 mm relative to central axis for the different jaws. The zero jaw position was also reproducible at gantry 90
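The penumbra-interpolation step can be sketched as locating the 50% crossing of a half-beam-blocked edge profile by linear interpolation between the two straddling pixels. The function, pixel pitch, and profile below are hypothetical, not the paper's implementation:

```python
def edge_position(profile, pixel_mm=0.4):
    """Locate the 50% penumbra crossing of a half-beam-blocked dose
    profile by linear interpolation between the two straddling pixels,
    giving the jaw position at sub-pixel accuracy.
    `pixel_mm` is a hypothetical EPID pixel pitch."""
    half = max(profile) / 2.0
    for i in range(1, len(profile)):
        lo, hi = profile[i - 1], profile[i]
        if lo < half <= hi:
            frac = (half - lo) / (hi - lo)   # fractional pixel offset
            return (i - 1 + frac) * pixel_mm
    raise ValueError("no 50% crossing found")

# Toy rising profile: the 50% level (= 50) falls one quarter of the
# way between the pixels with values 40 and 80
profile = [0, 10, 40, 80, 100, 100]
pos = edge_position(profile)   # (2 + 0.25) * 0.4 = 0.9 mm
```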

  5. Regional Reproducibility of BOLD Calibration Parameter M, OEF and Resting-State CMRO2 Measurements with QUO2 MRI.

    PubMed

    Lajoie, Isabelle; Tancredi, Felipe B; Hoge, Richard D

    2016-01-01

    The current generation of calibrated MRI methods goes beyond simple localization of task-related responses to allow the mapping of resting-state cerebral metabolic rate of oxygen (CMRO2) in micromolar units and estimation of oxygen extraction fraction (OEF). Prior to the adoption of such techniques in neuroscience research applications, knowledge about the precision and accuracy of absolute estimates of CMRO2 and OEF is crucial and remains unexplored to this day. In this study, we addressed the question of methodological precision by assessing the regional inter-subject variance and intra-subject reproducibility of the BOLD calibration parameter M, OEF, O2 delivery and absolute CMRO2 estimates derived from a state-of-the-art calibrated BOLD technique, the QUantitative O2 (QUO2) approach. We acquired simultaneous measurements of CBF and R2* at rest and during periods of hypercapnia (HC) and hyperoxia (HO) on two separate scan sessions within 24 hours using a clinical 3 T MRI scanner. Maps of M, OEF, oxygen delivery and CMRO2 were estimated from the measured end-tidal O2, CBF0, CBFHC/HO and R2*HC/HO. Variability was assessed by computing the between-subject coefficients of variation (bwCV) and within-subject CV (wsCV) in seven ROIs. Across all tests, the GM-averaged values of CBF0, M, OEF, O2 delivery and CMRO2 were: 49.5 ± 6.4 mL/100 g/min, 4.69 ± 0.91%, 0.37 ± 0.06, 377 ± 51 μmol/100 g/min and 143 ± 34 μmol/100 g/min respectively. The variability of parameter estimates was found to be the lowest when averaged throughout all GM, with general trends toward higher CVs when averaged over smaller regions. Among the MRI measurements, the most reproducible across scans was R2*0 (wsCVGM = 0.33%) along with CBF0 (wsCVGM = 3.88%) and R2*HC (wsCVGM = 6.7%). CBFHC and R2*HO were found to have a higher intra-subject variability (wsCVGM = 22.4% and wsCVGM = 16% respectively), which is likely due to propagation of random measurement errors, especially for CBFHC due to the low
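The within-subject CV reported above can be computed from a two-session test-retest design as the root-mean-square within-subject standard deviation divided by the grand mean. A sketch with invented grey-matter CBF values:

```python
import math

def within_subject_cv(scan1, scan2):
    """Within-subject coefficient of variation (%) for a test-retest
    design with two sessions: RMS of each subject's SD over the two
    scans, divided by the grand mean."""
    n = len(scan1)
    # Variance of two repeats reduces to (x1 - x2)^2 / 2
    ws_var = sum((a - b) ** 2 / 2.0 for a, b in zip(scan1, scan2)) / n
    grand_mean = (sum(scan1) + sum(scan2)) / (2.0 * n)
    return 100.0 * math.sqrt(ws_var) / grand_mean

# Hypothetical GM CBF values (mL/100 g/min) for 4 subjects, two sessions
day1 = [48.0, 52.0, 50.0, 46.0]
day2 = [50.0, 51.0, 47.0, 48.0]
cv = within_subject_cv(day1, day2)   # a few per cent
```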

  8. Reproducibility, Controllability, and Optimization of Lenr Experiments

    NASA Astrophysics Data System (ADS)

    Nagel, David J.

    2006-02-01

    Low-energy nuclear reaction (LENR) measurements are significantly and increasingly reproducible. Practical control of the production of energy or materials by LENR has yet to be demonstrated. Minimization of costly inputs and maximization of desired outputs of LENR remain for future developments.

  9. Natural Disasters: Earth Science Readings. Reproducibles.

    ERIC Educational Resources Information Center

    Lobb, Nancy

    Natural Disasters is a reproducible teacher book that explains what scientists believe to be the causes of a variety of natural disasters and suggests steps that teachers and students can take to be better prepared in the event of a natural disaster. It contains both student and teacher sections. Teacher sections include vocabulary, an answer key,…

  10. Making Early Modern Medicine: Reproducing Swedish Bitters.

    PubMed

    Ahnfelt, Nils-Otto; Fors, Hjalmar

    2016-05-01

    Historians of science and medicine have rarely applied themselves to reproducing the experiments and practices of medicine and pharmacy. This paper delineates our efforts to reproduce "Swedish Bitters," an early modern composite medicine in wide European use from the 1730s to the present. In its original formulation, it was made from seven medicinal simples: aloe, rhubarb, saffron, myrrh, gentian, zedoary and agarikon. These were mixed in alcohol together with some theriac, a composite medicine of classical origin. The paper traces the compositional history of Swedish Bitters and the medical rationale underlying its composition. It also describes how we reproduced the medicine in a laboratory using early modern pharmaceutical methods and analysed it using contemporary methods of pharmaceutical chemistry. Our aim is twofold: first, to show how reproducing medicines may provide a path towards a deeper understanding of the role of sensual and practical knowledge in the wider context of early modern medical culture; and second, how it may yield interesting results from the point of view of contemporary pharmaceutical science.

  11. Evaluation of the reproducibility of lung motion probability distribution function (PDF) using dynamic MRI

    NASA Astrophysics Data System (ADS)

    Cai, Jing; Read, Paul W.; Altes, Talissa A.; Molloy, Janelle A.; Brookeman, James R.; Sheng, Ke

    2007-01-01

    Treatment planning based on the probability distribution function (PDF) of patient geometries has been shown to be a potential off-line strategy for incorporating organ motion, but the application of such an approach depends strongly upon the reproducibility of the PDF. In this paper, we investigated the dependence of PDF reproducibility on the imaging acquisition parameters, specifically the scan time and the frame rate. Three healthy subjects underwent a continuous 5 min magnetic resonance (MR) scan in the sagittal plane with a frame rate of approximately 10 frames s-1, and the experiments were repeated at an interval of 2 to 3 weeks. A total of nine pulmonary vessels from different lung regions (upper, middle and lower) were tracked and the dependence of their displacement PDF reproducibility was evaluated as a function of scan time and frame rate. The PDF reproducibility error decreased with prolonged scans and appeared to approach an equilibrium state in subjects 2 and 3 within the 5 min scan. The PDF accuracy increased as a power function of frame rate; however, the PDF reproducibility showed less sensitivity to frame rate, presumably because the randomness of breathing dominates the effects. As the key component of PDF-based treatment planning, the reproducibility of the PDF affects the dosimetric accuracy substantially. This study provides a reference for acquiring MR-based PDFs of structures in the lung.
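Comparing displacement PDFs between repeat scans can be sketched as histogramming the tracked displacements and scoring the shared probability mass. The bin width, overlap metric, and displacement values below are illustrative choices, not those of the paper:

```python
def displacement_pdf(displacements, bin_mm=1.0, n_bins=10):
    """Histogram of tracked displacements (mm), normalised to a
    probability distribution over fixed bins."""
    counts = [0] * n_bins
    for d in displacements:
        idx = min(int(d / bin_mm), n_bins - 1)
        counts[idx] += 1
    total = len(displacements)
    return [c / total for c in counts]

def pdf_overlap(p, q):
    """Fraction of probability mass shared by two PDFs (1.0 = identical);
    one simple way to score scan-to-scan reproducibility."""
    return sum(min(a, b) for a, b in zip(p, q))

# Vessel displacements (mm) from two hypothetical repeat scans
scan1 = [0.4, 1.2, 1.7, 2.3, 2.8, 3.1, 2.6, 1.9, 1.1, 0.6]
scan2 = [0.5, 1.1, 1.8, 2.1, 2.9, 3.3, 2.4, 2.0, 1.3, 0.4]
overlap = pdf_overlap(displacement_pdf(scan1), displacement_pdf(scan2))
```

One minus the overlap is a natural "PDF reproducibility error" that can be tracked as a function of scan time or frame rate.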

  12. ITK: enabling reproducible research and open science

    PubMed Central

    McCormick, Matthew; Liu, Xiaoxiao; Jomier, Julien; Marion, Charles; Ibanez, Luis

    2014-01-01

    Reproducibility verification is essential to the practice of the scientific method. Researchers report their findings, which are strengthened as other independent groups in the scientific community share similar outcomes. In the many scientific fields where software has become a fundamental tool for capturing and analyzing data, this requirement of reproducibility implies that reliable and comprehensive software platforms and tools should be made available to the scientific community. The tools will empower them and the public to verify, through practice, the reproducibility of observations that are reported in the scientific literature. Medical image analysis is one of the fields in which the use of computational resources, both software and hardware, are an essential platform for performing experimental work. In this arena, the introduction of the Insight Toolkit (ITK) in 1999 has transformed the field and facilitates its progress by accelerating the rate at which algorithmic implementations are developed, tested, disseminated and improved. By building on the efficiency and quality of open source methodologies, ITK has provided the medical image community with an effective platform on which to build a daily workflow that incorporates the true scientific practices of reproducibility verification. This article describes the multiple tools, methodologies, and practices that the ITK community has adopted, refined, and followed during the past decade, in order to become one of the research communities with the most modern reproducibility verification infrastructure. For example, 207 contributors have created over 2400 unit tests that provide over 84% code line test coverage. The Insight Journal, an open publication journal associated with the toolkit, has seen over 360,000 publication downloads. The median normalized closeness centrality, a measure of knowledge flow resulting from the distributed peer code review system, was high at 0.46. PMID:24600387

  13. ITK: enabling reproducible research and open science.

    PubMed

    McCormick, Matthew; Liu, Xiaoxiao; Jomier, Julien; Marion, Charles; Ibanez, Luis

    2014-01-01

    Reproducibility verification is essential to the practice of the scientific method. Researchers report their findings, which are strengthened as other independent groups in the scientific community share similar outcomes. In the many scientific fields where software has become a fundamental tool for capturing and analyzing data, this requirement of reproducibility implies that reliable and comprehensive software platforms and tools should be made available to the scientific community. The tools will empower them and the public to verify, through practice, the reproducibility of observations that are reported in the scientific literature. Medical image analysis is one of the fields in which the use of computational resources, both software and hardware, is an essential platform for performing experimental work. In this arena, the introduction of the Insight Toolkit (ITK) in 1999 has transformed the field and facilitated its progress by accelerating the rate at which algorithmic implementations are developed, tested, disseminated and improved. By building on the efficiency and quality of open source methodologies, ITK has provided the medical image community with an effective platform on which to build a daily workflow that incorporates the true scientific practices of reproducibility verification. This article describes the multiple tools, methodologies, and practices that the ITK community has adopted, refined, and followed during the past decade, in order to become one of the research communities with the most modern reproducibility verification infrastructure. For example, 207 contributors have created over 2400 unit tests that provide over 84% code line test coverage. The Insight Journal, an open publication journal associated with the toolkit, has seen over 360,000 publication downloads. The median normalized closeness centrality, a measure of knowledge flow, resulting from the distributed peer code review system was high, 0.46.
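
    The closeness-centrality figure quoted above can be computed for any review graph. A minimal pure-Python sketch on a hypothetical four-contributor review graph (the graph and values are illustrative, not ITK's actual peer-review data):

```python
from collections import deque

def normalized_closeness(adj, node):
    """Normalized closeness centrality: number of reachable nodes divided
    by the sum of shortest-path distances from `node` to each of them."""
    dist = {node: 0}
    q = deque([node])
    while q:  # breadth-first search for unweighted shortest paths
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    others = [d for n, d in dist.items() if n != node]
    if not others:
        return 0.0
    return len(others) / sum(others)

# Toy graph: an edge means "has reviewed the other's code"
adj = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a", "d"],
    "d": ["b", "c"],
}
print(normalized_closeness(adj, "a"))  # distances 1, 1, 2 -> 3/4 = 0.75
```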

  14. Precision CW laser automatic tracking system investigated

    NASA Technical Reports Server (NTRS)

    Lang, K. T.; Lucy, R. F.; Mcgann, E. J.; Peters, C. J.

    1966-01-01

    A precision laser tracker capable of tracking a low-acceleration target to an accuracy of about 20 microradians rms is being constructed and tested. This laser tracker has the advantages of discriminating against other optical sources and of simultaneously measuring range.

  15. The Vienna LTE simulators - Enabling reproducibility in wireless communications research

    NASA Astrophysics Data System (ADS)

    Mehlführer, Christian; Colom Ikuno, Josep; Šimko, Michal; Schwarz, Stefan; Wrulich, Martin; Rupp, Markus

    2011-12-01

    In this article, we introduce MATLAB-based link and system level simulation environments for UMTS Long-Term Evolution (LTE). The source codes of both simulators are available under an academic non-commercial use license, allowing researchers full access to standard-compliant simulation environments. Owing to the open source availability, the simulators enable reproducible research in wireless communications and comparison of novel algorithms. In this study, we explain how link and system level simulations are connected and show how the link level simulator serves as a reference to design the system level simulator. We compare the accuracy of the PHY modeling at system level by means of simulations performed both with bit-accurate link level simulations and PHY-model-based system level simulations. We highlight some of the currently most interesting research questions for LTE, and explain by some research examples how our simulators can be applied.

  16. Reproducing kernel Hilbert space based single infrared image super resolution

    NASA Astrophysics Data System (ADS)

    Chen, Liangliang; Deng, Liangjian; Shen, Wei; Xi, Ning; Zhou, Zhanxin; Song, Bo; Yang, Yongliang; Cheng, Yu; Dong, Lixin

    2016-07-01

    The spatial resolution of infrared (IR) images is limited by lens optical diffraction, sensor array pitch size and pixel dimension. In this work, a robust model is proposed to reconstruct a high resolution infrared image from a single low resolution sampling, where the image features are discussed and classified as reflective, cooled emissive and uncooled emissive based on the infrared irradiation source. A spline-based reproducing kernel Hilbert space and an approximative Heaviside function are deployed to model the smooth part and edge component of the image, respectively. By adjusting the parameters of the Heaviside function, the proposed model can enhance distinct parts of images. The experimental results show that the model is applicable to both reflective and emissive low resolution infrared images to improve thermal contrast. The overall outcome is a high resolution IR image, which gives the IR camera better measurement accuracy and reveals more detail at long distance.
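
    The "approximative Heaviside function" used to model edges is typically a smooth sigmoid whose steepness is tunable. One common arctangent-based form (a generic choice for illustration; the paper's exact parameterization may differ):

```python
import math

def approx_heaviside(x, eps=0.1):
    """Smooth approximation to the Heaviside step function.
    Smaller eps gives a sharper transition, so eps acts as the
    edge-sharpness parameter the abstract refers to."""
    return 0.5 * (1.0 + (2.0 / math.pi) * math.atan(x / eps))

print(approx_heaviside(0.0))                      # 0.5 exactly at the edge
print(round(approx_heaviside(1.0, eps=0.01), 3))  # close to 1 far from the edge
```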

  17. A Physical Activity Questionnaire: Reproducibility and Validity

    PubMed Central

    Barbosa, Nicolas; Sanchez, Carlos E.; Vera, Jose A.; Perez, Wilson; Thalabard, Jean-Christophe; Rieu, Michel

    2007-01-01

    This study evaluates the reproducibility and validity of the Quantification de L’Activite Physique en Altitude chez les Enfants (QAPACE) supervised self-administered questionnaire for estimating the mean daily energy expenditure (DEE) of Bogotá’s schoolchildren. Comprehension was assessed on 324 students, whereas reproducibility was studied on a different random sample of 162 who were exposed to it twice. Reproducibility was assessed using both the Bland-Altman plot and the intra-class correlation coefficient (ICC). Validity was studied in a randomly selected sample of 18 girls and 18 boys who completed the test-retest study. The DEE derived from the questionnaire was compared with laboratory measurements of peak oxygen uptake (Peak VO2) from ergo-spirometry and the Leger Test. The reproducibility ICC was 0.96 (95% C.I. 0.95-0.97); by age categories: 8-10, 0.94 (0.89-0.97); 11-13, 0.98 (0.96-0.99); 14-16, 0.95 (0.91-0.98). The ICC between mean DEE as estimated by the questionnaire and the direct and indirect Peak VO2 was 0.76 (0.66) (p<0.01); by age categories 8-10, 11-13, and 14-16 it was 0.89 (0.87), 0.76 (0.78) and 0.88 (0.80), respectively. The QAPACE questionnaire is reproducible and valid for estimating PA and showed a high correlation with Peak VO2 uptake. Key points: The presence of a supervisor and the limited size of the group, with the possibility of answering their questions, could explain the high reproducibility of this questionnaire. No study in the literature had directly addressed the issue of estimating a yearly average PA including school and vacation periods. A two-step procedure, in the population of schoolchildren of Bogotá, gives confidence in the use of the QAPACE questionnaire in a large epidemiological survey in related populations. PMID:24149485
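
    Of the two agreement measures named above, the Bland-Altman limits are the simpler to compute: the bias is the mean test-retest difference and the 95% limits of agreement are bias ± 1.96 SD of the differences. A sketch with hypothetical DEE values (illustrative numbers, not the study's data):

```python
import statistics

def bland_altman_limits(first, second):
    """Mean difference (bias) and 95% limits of agreement between two
    repeated administrations of the same instrument."""
    diffs = [a - b for a, b in zip(first, second)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical test-retest DEE estimates (kcal/day) for six children
t1 = [1850, 2100, 1990, 2300, 2050, 1900]
t2 = [1820, 2150, 2000, 2280, 2030, 1950]
bias, (lo, hi) = bland_altman_limits(t1, t2)
print(round(bias, 1), round(lo, 1), round(hi, 1))
```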

  18. Using satellite data to increase accuracy of PMF calculations

    SciTech Connect

    Mettel, M.C.

    1992-03-01

    The accuracy of a flood severity estimate depends on the data used. The more detailed and precise the data, the more accurate the estimate. Earth observation satellites gather detailed data for determining the probable maximum flood at hydropower projects.

  19. Accuracy and Injection Force of the Gla-300 Injection Device Compared With Other Commercialized Disposable Insulin Pens

    PubMed Central

    Klonoff, David; Nayberg, Irina; Thonius, Marissa; See, Florian; Abdel-Tawab, Mona; Erbstein, Frank; Haak, Thomas

    2015-01-01

    Background: To deliver insulin glargine 300 U/mL (Gla-300), the widely used SoloSTAR® pen has been modified to allow for accurate and precise delivery of required insulin units in one-third of the volume compared with insulin glargine 100 U/mL, while improving usability. Here we compare the accuracy and injection force of 3 disposable insulin pens: Gla-300 SoloSTAR®, FlexPen®, and KwikPen™. Methods: For the accuracy assessment, 60 of each of the 3 tested devices were used for the delivery of 3 different doses (1 U, half-maximal dose, and maximal dose), which were measured gravimetrically. For the injection force assessment, 20 pens of each of the 3 types were tested twice at half-maximal and once at maximal dose, at an injection speed of 6 U/s. Results: All tested pens met the International Organization for Standardization (ISO) requirements for dosing accuracy, with Gla-300 SoloSTAR showing the lowest between-dose variation (greatest reproducibility) at all dose levels. Mean injection force was significantly lower for Gla-300 SoloSTAR than for the other 2 pens at both half maximal and maximal doses (P < .0271). Conclusion: All tested pens were accurate according to ISO criteria, and the Gla-300 SoloSTAR pen displayed the greatest reproducibility and lowest injection force of any of the 3 tested devices. PMID:26311720

  20. Precision performance lamp technology

    NASA Astrophysics Data System (ADS)

    Bell, Dean A.; Kiesa, James E.; Dean, Raymond A.

    1997-09-01

    A principal function of a lamp is to produce light output with designated spectra, intensity, and/or geometric radiation patterns. The function of a precision performance lamp is to go beyond these parameters to precision repeatability of performance. All lamps are not equal. There are a variety of incandescent lamps, from the vacuum incandescent indicator lamp to the precision lamp of a blood analyzer. In the past the definition of a precision lamp was described in terms of wattage, light center length (LCL), filament position, and/or spot alignment. This paper presents a new view of precision lamps through the discussion of a new segment of lamp design, which we term precision performance lamps. The definition of a precision performance lamp must include the factors of a precision lamp; what makes a precision lamp a precision performance lamp is the manner in which the design factors of amperage, mscp (mean spherical candlepower), efficacy (lumens/watt), and life are considered, not individually but collectively. There is a statistical bias in a precision performance lamp for each of these factors, taken individually and as a whole. When properly considered, the results can be dramatic for the system design engineer, system production manager, and the system end-user. It can be shown that for the lamp user, the use of precision performance lamps can translate to: (1) ease of system design, (2) simplification of electronics, (3) superior signal-to-noise ratios, (4) higher manufacturing yields, (5) lower system costs, and (6) better product performance. The factors mentioned above are described along with their interdependent relationships. It is statistically shown how the benefits listed above are achievable. Examples are provided to illustrate how proper attention to precision performance lamp characteristics aids system product design and manufacturing in building and marketing more market-acceptable products.

  1. Reproducibility of electroretinograms recorded with DTL electrodes.

    PubMed

    Hébert, M; Lachapelle, P; Dumont, M

    The purpose of this study was to examine whether the use of the DTL fiber electrode yields stable and reproducible electroretinographic recordings. To do so, a luminance-response function, derived from dark-adapted electroretinograms, was obtained from both eyes of 10 normal subjects at two recording sessions spaced by 7-14 days. The data thus generated were used to calculate the Naka-Rushton Vmax and k parameters, and the values obtained at the two recording sessions were compared. Our results showed that there was no significant difference in the values of Vmax and k calculated from the data generated at the two recording sessions. The above clearly demonstrates that the use of the DTL fiber electrode does not jeopardize, in any way, the stability and reproducibility of ERG responses.
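
    The Naka-Rushton fit referred to above models response amplitude as V = Vmax * I^n / (I^n + k^n), where Vmax is the saturated amplitude and k the semi-saturation luminance. A minimal sketch with hypothetical amplitudes and a crude grid search standing in for the nonlinear optimizer a real analysis would use:

```python
def naka_rushton(I, vmax, k, n=1.0):
    """Naka-Rushton luminance-response function: V = Vmax * I^n / (I^n + k^n)."""
    return vmax * I**n / (I**n + k**n)

# Hypothetical dark-adapted b-wave amplitudes (microvolts) vs. luminance,
# generated to be roughly consistent with Vmax = 400, k = 0.12
lum = [0.01, 0.1, 1.0, 10.0, 100.0]
amp = [31, 182, 357, 395, 400]

def fit_grid(lum, amp):
    """Least-squares grid search for Vmax and k (illustration only)."""
    best = None
    for vmax in range(300, 501, 5):
        for k_milli in range(10, 501, 5):
            k = k_milli / 1000.0
            sse = sum((a - naka_rushton(I, vmax, k)) ** 2
                      for I, a in zip(lum, amp))
            if best is None or sse < best[0]:
                best = (sse, vmax, k)
    return best[1], best[2]

vmax, k = fit_grid(lum, amp)
print(vmax, k)
```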

  2. A Precise Lunar Photometric Function

    NASA Astrophysics Data System (ADS)

    McEwen, A. S.

    1996-03-01

    The Clementine multispectral dataset will enable compositional mapping of the entire lunar surface at a resolution of ~100-200 m, but a highly accurate photometric normalization is needed to achieve challenging scientific objectives such as mapping petrographic or elemental compositions. The goal of this work is to normalize the Clementine data to an accuracy of 1% for the UVVIS images (0.415, 0.75, 0.9, 0.95, and 1.0 micrometers) and 2% for NIR images (1.1, 1.25, 1.5, 2.0, 2.6, and 2.78 micrometers), consistent with radiometric calibration goals. The data will be normalized to R30, the reflectance expected at an incidence angle (i) and phase angle (alpha) of 30 degrees and an emission angle (e) of 0 degrees, matching the photometric geometry of lunar samples measured at the reflectance laboratory (RELAB) at Brown University. The focus here is on the precision of the normalization, not the putative physical significance of the photometric function parameters. The 2% precision achieved is significantly better than the ~10% precision of a previous normalization.
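
    The normalization described above divides each observed reflectance by a photometric model evaluated at the observed geometry and multiplies by the model at the standard geometry (i = 30, e = 0, alpha = 30 degrees). A sketch using a Lommel-Seeliger limb-darkening term and a placeholder linear phase function; the function and coefficient actually fitted to the Clementine data are not reproduced here:

```python
import math

def lommel_seeliger(i_deg, e_deg):
    """Lommel-Seeliger limb-darkening term, cos(i) / (cos(i) + cos(e))."""
    ci = math.cos(math.radians(i_deg))
    ce = math.cos(math.radians(e_deg))
    return ci / (ci + ce)

def normalize_to_r30(r_obs, i_deg, e_deg, phase_deg, phase_coeff=0.004):
    """Normalize an observed reflectance to the i=30, e=0, alpha=30 geometry.
    The linear phase term (phase_coeff per degree) is a placeholder."""
    model_obs = lommel_seeliger(i_deg, e_deg) * (1.0 - phase_coeff * phase_deg)
    model_std = lommel_seeliger(30.0, 0.0) * (1.0 - phase_coeff * 30.0)
    return r_obs * model_std / model_obs

# At the standard geometry the normalization is the identity
print(normalize_to_r30(0.10, 30.0, 0.0, 30.0))
```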

  3. Reproducibility of liquid oxygen impact test results

    NASA Technical Reports Server (NTRS)

    Gayle, J. B.

    1975-01-01

    Results for 12,000 impacts on a wide range of materials were studied to determine the reproducibility of the liquid oxygen impact test method. Standard deviations representing the overall variability of results were in close agreement with the expected values for a binomial process. This indicates that the major source of variability is the go/no-go nature of the test method, and that variations due to sampling and testing operations were not significant.
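
    For a go/no-go test, the expected batch-to-batch scatter follows directly from the binomial distribution: with n impacts and a true reaction probability p, the observed reaction fraction has standard deviation sqrt(p(1-p)/n). A sketch with illustrative numbers (not the study's materials or rates):

```python
import math

def binomial_sd(p, n):
    """Standard deviation of the observed reaction fraction for n go/no-go
    impact trials with true reaction probability p."""
    return math.sqrt(p * (1.0 - p) / n)

# With 20 impacts per material and a true reaction probability of 0.1,
# the observed fraction scatters by roughly +/- 0.067 between batches.
print(round(binomial_sd(0.1, 20), 3))  # 0.067
```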

  4. A Framework for Reproducible Latent Fingerprint Enhancements

    PubMed Central

    Carasso, Alfred S.

    2014-01-01

    Photoshop processing of latent fingerprints is the preferred methodology among law enforcement forensic experts, but that approach is not fully reproducible and may lead to questionable enhancements. Alternative, independent, fully reproducible enhancements, using IDL Histogram Equalization and IDL Adaptive Histogram Equalization, can produce better-defined ridge structures, along with considerable background information. Applying a systematic slow motion smoothing procedure to such IDL enhancements, based on the rapid FFT solution of a Lévy stable fractional diffusion equation, can attenuate background detail while preserving ridge information. The resulting smoothed latent print enhancements are comparable to, but distinct from, forensic Photoshop images suitable for input into automated fingerprint identification systems (AFIS). In addition, this progressive smoothing procedure can be reexamined by displaying the suite of progressively smoother IDL images. That suite can be stored, providing an audit trail that allows monitoring for possible loss of useful information, in transit to the user-selected optimal image. Such independent and fully reproducible enhancements provide a valuable frame of reference that may be helpful in informing, complementing, and possibly validating the forensic Photoshop methodology. PMID:26601028

  5. A Framework for Reproducible Latent Fingerprint Enhancements.

    PubMed

    Carasso, Alfred S

    2014-01-01

    Photoshop processing of latent fingerprints is the preferred methodology among law enforcement forensic experts, but that approach is not fully reproducible and may lead to questionable enhancements. Alternative, independent, fully reproducible enhancements, using IDL Histogram Equalization and IDL Adaptive Histogram Equalization, can produce better-defined ridge structures, along with considerable background information. Applying a systematic slow motion smoothing procedure to such IDL enhancements, based on the rapid FFT solution of a Lévy stable fractional diffusion equation, can attenuate background detail while preserving ridge information. The resulting smoothed latent print enhancements are comparable to, but distinct from, forensic Photoshop images suitable for input into automated fingerprint identification systems (AFIS). In addition, this progressive smoothing procedure can be reexamined by displaying the suite of progressively smoother IDL images. That suite can be stored, providing an audit trail that allows monitoring for possible loss of useful information, in transit to the user-selected optimal image. Such independent and fully reproducible enhancements provide a valuable frame of reference that may be helpful in informing, complementing, and possibly validating the forensic Photoshop methodology.

  6. Tools and techniques for computational reproducibility.

    PubMed

    Piccolo, Stephen R; Frampton, Michael B

    2016-01-01

    When reporting research findings, scientists document the steps they followed so that others can verify and build upon the research. When those steps have been described in sufficient detail that others can retrace the steps and obtain similar results, the research is said to be reproducible. Computers play a vital role in many research disciplines and present both opportunities and challenges for reproducibility. Computers can be programmed to execute analysis tasks, and those programs can be repeated and shared with others. The deterministic nature of most computer programs means that the same analysis tasks, applied to the same data, will often produce the same outputs. However, in practice, computational findings often cannot be reproduced because of complexities in how software is packaged, installed, and executed-and because of limitations associated with how scientists document analysis steps. Many tools and techniques are available to help overcome these challenges; here we describe seven such strategies. With a broad scientific audience in mind, we describe the strengths and limitations of each approach, as well as the circumstances under which each might be applied. No single strategy is sufficient for every scenario; thus we emphasize that it is often useful to combine approaches. PMID:27401684

  7. Tools and techniques for computational reproducibility.

    PubMed

    Piccolo, Stephen R; Frampton, Michael B

    2016-07-11

    When reporting research findings, scientists document the steps they followed so that others can verify and build upon the research. When those steps have been described in sufficient detail that others can retrace the steps and obtain similar results, the research is said to be reproducible. Computers play a vital role in many research disciplines and present both opportunities and challenges for reproducibility. Computers can be programmed to execute analysis tasks, and those programs can be repeated and shared with others. The deterministic nature of most computer programs means that the same analysis tasks, applied to the same data, will often produce the same outputs. However, in practice, computational findings often cannot be reproduced because of complexities in how software is packaged, installed, and executed-and because of limitations associated with how scientists document analysis steps. Many tools and techniques are available to help overcome these challenges; here we describe seven such strategies. With a broad scientific audience in mind, we describe the strengths and limitations of each approach, as well as the circumstances under which each might be applied. No single strategy is sufficient for every scenario; thus we emphasize that it is often useful to combine approaches.
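
    One lightweight technique in the spirit of those surveyed above is fingerprinting analysis outputs so that independent reruns can be compared exactly. A sketch using a canonical JSON serialization (a common convention for this purpose, not a prescription from the article):

```python
import hashlib
import json

def result_fingerprint(result):
    """SHA-256 digest over a canonical JSON serialization of an analysis
    result, so independent reruns can be compared byte-for-byte."""
    canonical = json.dumps(result, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

run1 = {"mean": 2.5, "n": 4}
run2 = {"n": 4, "mean": 2.5}  # same result, different key order
print(result_fingerprint(run1) == result_fingerprint(run2))  # True
```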

  8. A Framework for Reproducible Latent Fingerprint Enhancements.

    PubMed

    Carasso, Alfred S

    2014-01-01

    Photoshop processing of latent fingerprints is the preferred methodology among law enforcement forensic experts, but that appproach is not fully reproducible and may lead to questionable enhancements. Alternative, independent, fully reproducible enhancements, using IDL Histogram Equalization and IDL Adaptive Histogram Equalization, can produce better-defined ridge structures, along with considerable background information. Applying a systematic slow motion smoothing procedure to such IDL enhancements, based on the rapid FFT solution of a Lévy stable fractional diffusion equation, can attenuate background detail while preserving ridge information. The resulting smoothed latent print enhancements are comparable to, but distinct from, forensic Photoshop images suitable for input into automated fingerprint identification systems, (AFIS). In addition, this progressive smoothing procedure can be reexamined by displaying the suite of progressively smoother IDL images. That suite can be stored, providing an audit trail that allows monitoring for possible loss of useful information, in transit to the user-selected optimal image. Such independent and fully reproducible enhancements provide a valuable frame of reference that may be helpful in informing, complementing, and possibly validating the forensic Photoshop methodology. PMID:26601028

  9. Precision volume measuring system

    SciTech Connect

    Klevgard, P.A.

    1984-11-01

    An engineering study was undertaken to calibrate and certify a precision volume measurement system that uses the ideal gas law and precise pressure measurements (of low-pressure helium) to ratio a known to an unknown volume. The constant-temperature, computer-controlled system was tested for thermodynamic instabilities, for precision (0.01%), and for bias (0.01%). Ratio scaling was used to optimize the quartz crystal pressure transducer calibration.
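
    The known-to-unknown volume ratio follows from Boyle's law for an isothermal expansion: helium at pressure P1 in a known volume Vk is expanded into the evacuated unknown volume Vu, and the final pressure P2 gives Vu = Vk * (P1/P2 - 1). A sketch with illustrative numbers (not the system's calibration data):

```python
def unknown_volume(v_known, p_initial, p_final):
    """Isothermal expansion of helium from a known volume into an evacuated
    unknown volume: Boyle's law gives V_unknown = V_known * (P1/P2 - 1)."""
    return v_known * (p_initial / p_final - 1.0)

# 1.000 L of helium at 100.00 kPa expands to fill both volumes at 80.00 kPa
print(unknown_volume(1.000, 100.00, 80.00))  # 0.25 L
```

    In practice the quoted 0.01% precision also depends on holding temperature constant and on the quartz pressure transducer calibration, as the abstract notes.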

  10. Precision goniometer equipped with a 22-bit absolute rotary encoder.

    PubMed

    Xiaowei, Z; Ando, M; Jidong, W

    1998-05-01

    The calibration of a compact precision goniometer equipped with a 22-bit absolute rotary encoder is presented. The goniometer is a modified Huber 410 goniometer: the diffraction angles can be coarsely generated by a stepping-motor-driven worm gear and precisely interpolated by a piezoactuator-driven tangent arm. The angular accuracy of the precision rotary stage was evaluated with an autocollimator. It was shown that the deviation from circularity of the rolling bearing utilized in the precision rotary stage restricts the angular positioning accuracy of the goniometer, and results in an angular accuracy ten times larger than the angular resolution of 0.01 arcsec. The 22-bit encoder was calibrated against an incremental rotary encoder. It became evident that the accuracy of the absolute encoder is approximately 18 bits due to systematic errors.
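
    The encoder figures quoted above follow from dividing a full revolution by the encoder's count: a 22-bit encoder resolves 360 degrees / 2^22, about 0.31 arcsec per count, and an effective 18-bit accuracy corresponds to about 4.9 arcsec:

```python
def encoder_resolution_arcsec(bits):
    """Smallest angular step of an absolute rotary encoder with the given
    bit count over a full 360-degree revolution, in arcseconds."""
    counts = 2 ** bits
    return 360.0 * 3600.0 / counts

print(round(encoder_resolution_arcsec(22), 3))  # one 22-bit count ~ 0.309 arcsec
print(round(encoder_resolution_arcsec(18), 3))  # 18-bit accuracy ~ 4.944 arcsec
```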

  11. Precision aerial application for site-specific rice crop management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture includes different technologies that allow agricultural professionals to use information management tools to optimize agricultural production. The new technologies allow aerial applicators to improve application accuracy and efficiency, which saves time and money for...

  12. Precision positioning device

    DOEpatents

    McInroy, John E.

    2005-01-18

    A precision positioning device is provided. The precision positioning device comprises a precision measuring/vibration isolation mechanism. A first plate is provided, with the precision measuring mechanism secured to the first plate. A second plate is secured to the first plate. A third plate is secured to the second plate, with the first plate positioned between the second plate and the third plate. A fourth plate is secured to the third plate, with the second plate positioned between the third plate and the fourth plate. An adjusting mechanism adjusts the positions of the first, second, third, and fourth plates relative to each other.

  13. Reproducibility study of TLD-100 micro-cubes at radiotherapy dose level.

    PubMed

    da Rosa, L A; Regulla, D F; Fill, U A

    1999-03-01

    The precision of the thermoluminescent response of Harshaw micro-cube dosimeters (TLD-100), evaluated in both the Harshaw 5500 and 3500 thermoluminescent readers at a 1 Gy dose value, was investigated. The mean reproducibility for micro-cubes pre-readout annealed at 100 degrees C for 15 min and evaluated with the manual planchet reader 3500 is 0.61% (1 standard deviation). When micro-cubes are evaluated with the automated hot-gas reader 5500, reproducibility values are undoubtedly worse, the mean reproducibility for numerically stabilised dosimeters being equal to 3.27% (1 standard deviation). These results indicate that the reader model 5500, or at least the instrument used for the present measurements, is not adequate for micro-cube evaluation if precise and accurate dosimetry is required. The difference in precision is apparently due to geometry inconsistencies in the orientation of the imperfect micro-cube faces during readout, requiring careful, reproducible manual arrangement of the selected micro-cube faces in contact with the manual reader planchet.

  14. System and method for high precision isotope ratio destructive analysis

    SciTech Connect

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  15. Precision Teaching: An Introduction.

    ERIC Educational Resources Information Center

    West, Richard P.; And Others

    1990-01-01

    Precision teaching is introduced as a method of helping students develop fluency or automaticity in the performance of academic skills. Precision teaching involves being aware of the relationship between teaching and learning, measuring student performance regularly and frequently, and analyzing the measurements to develop instructional and…

  16. Precision Optics Curriculum.

    ERIC Educational Resources Information Center

    Reid, Robert L.; And Others

    This guide outlines the competency-based, two-year precision optics curriculum that the American Precision Optics Manufacturers Association has proposed to fill the void that it suggests will soon exist as many of the master opticians currently employed retire. The model, which closely resembles the old European apprenticeship model, calls for 300…

  17. Towards reproducible, scalable lateral molecular electronic devices

    SciTech Connect

    Durkan, Colm; Zhang, Qian

    2014-08-25

    An approach to reproducibly fabricate molecular electronic devices is presented. Lateral nanometer-scale gaps with high yield are formed in Au/Pd nanowires by a combination of electromigration and Joule-heating-induced thermomechanical stress. The resulting nanogap devices are used to measure the electrical properties of small numbers of two different molecular species with different end-groups, namely 1,4-butane dithiol and 1,5-diamino-2-methylpentane. Fluctuations in the current reveal that in the case of the dithiol molecule devices, individual molecules conduct intermittently, with the fluctuations becoming more pronounced at larger biases.

  18. Nonlinear sequential laminates reproducing hollow sphere assemblages

    NASA Astrophysics Data System (ADS)

    Idiart, Martín I.

    2007-07-01

    A special class of nonlinear porous materials with isotropic 'sequentially laminated' microstructures is found to reproduce exactly the hydrostatic behavior of 'hollow sphere assemblages'. It is then argued that this result supports the conjecture that Gurson's approximate criterion for plastic porous materials, and its viscoplastic extension of Leblond et al. (1994), may actually yield rigorous upper bounds for the hydrostatic flow stress of porous materials containing an isotropic, but otherwise arbitrary, distribution of porosity. To cite this article: M.I. Idiart, C. R. Mecanique 335 (2007).

  19. Open and reproducible global land use classification

    NASA Astrophysics Data System (ADS)

    Nüst, Daniel; Václavík, Tomáš; Pross, Benjamin

    2015-04-01

    Researchers led by the Helmholtz Centre for Environmental Research (UFZ) developed a new world map of land use systems based on over 30 diverse indicators (http://geoportal.glues.geo.tu-dresden.de/stories/landsystemarchetypes.html) of land use intensity, climate and environmental and socioeconomic factors. They identified twelve land system archetypes (LSA) using a data-driven classification algorithm (self-organizing maps) to assess global impacts of land use on the environment, and found unexpected similarities across global regions. We present how the algorithm behind this analysis can be published as an executable web process using 52°North WPS4R (https://wiki.52north.org/bin/view/Geostatistics/WPS4R) within the GLUES project (http://modul-a.nachhaltiges-landmanagement.de/en/scientific-coordination-glues/). WPS4R is an open source collaboration platform for researchers, analysts and software developers to publish R scripts (http://www.r-project.org/) as geo-enabled OGC Web Processing Service (WPS) processes. The interoperable interface for calling the geoprocess allows both reproducibility of the analysis and integration of user data without knowledge of web services or classification algorithms. The open platform allows everybody to replicate the analysis in their own environments. The LSA WPS process has several input parameters, which can be changed via a simple web interface. The input parameters are used to configure both the WPS environment and the LSA algorithm itself. The encapsulation as a web process allows integration of non-public datasets, while at the same time the publication requires a well-defined documentation of the analysis. We demonstrate this platform specifically to domain scientists and show how reproducibility and open source publication of analyses can be enhanced. We also discuss future extensions of the reproducible land use classification, such as the possibility for users to enter their own areas of interest to the system and

  20. Queer nuclear families? Reproducing and transgressing heteronormativity.

    PubMed

    Folgerø, Tor

    2008-01-01

    During the past decade the public debate on gay and lesbian adoptive rights has been extensive in the Norwegian media. The debate illustrates how women and men planning to raise children in homosexual family constellations challenge prevailing cultural norms and existing concepts of kinship and family. The article discusses how lesbian mothers and gay fathers understand and redefine their own family practices. An essential point in this article is the fundamental ambiguity in these families' accounts of themselves: how they simultaneously transgress and reproduce heteronormative assumptions about childhood, fatherhood, motherhood, family and kinship.

  1. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  2. Classification accuracy improvement

    NASA Technical Reports Server (NTRS)

    Kistler, R.; Kriegler, F. J.

    1977-01-01

    Improvements made in the processing system designed for MIDAS (prototype multivariate interactive digital analysis system) effect higher accuracy in the classification of pixels, resulting in significantly reduced processing time. The improved system realizes a cost reduction factor of 20 or more.

  4. Response to Comment on "Estimating the reproducibility of psychological science".

    PubMed

    Anderson, Christopher J; Bahník, Štěpán; Barnett-Cowan, Michael; Bosco, Frank A; Chandler, Jesse; Chartier, Christopher R; Cheung, Felix; Christopherson, Cody D; Cordes, Andreas; Cremata, Edward J; Della Penna, Nicolas; Estel, Vivien; Fedor, Anna; Fitneva, Stanka A; Frank, Michael C; Grange, James A; Hartshorne, Joshua K; Hasselman, Fred; Henninger, Felix; van der Hulst, Marije; Jonas, Kai J; Lai, Calvin K; Levitan, Carmel A; Miller, Jeremy K; Moore, Katherine S; Meixner, Johannes M; Munafò, Marcus R; Neijenhuijs, Koen I; Nilsonne, Gustav; Nosek, Brian A; Plessow, Franziska; Prenoveau, Jason M; Ricker, Ashley A; Schmidt, Kathleen; Spies, Jeffrey R; Stieger, Stefan; Strohminger, Nina; Sullivan, Gavin B; van Aert, Robbie C M; van Assen, Marcel A L M; Vanpaemel, Wolf; Vianello, Michelangelo; Voracek, Martin; Zuni, Kellylynn

    2016-03-01

    Gilbert et al. conclude that evidence from the Open Science Collaboration's Reproducibility Project: Psychology indicates high reproducibility, given the study methodology. Their very optimistic assessment is limited by statistical misconceptions and by causal inferences from selectively interpreted, correlational data. Using the Reproducibility Project: Psychology data, both optimistic and pessimistic conclusions about reproducibility are possible, and neither are yet warranted. PMID:26941312

  5. A hydrogen gas-water equilibration method produces accurate and precise stable hydrogen isotope ratio measurements in nutrition studies.

    PubMed

    Wong, William W; Clarke, Lucinda L

    2012-11-01

    Stable hydrogen isotope methodology is used in nutrition studies to measure growth, breast milk intake, and energy requirement. Isotope ratio MS is the best instrumentation to measure the stable hydrogen isotope ratios in physiological fluids. Conventional methods to convert physiological fluids to hydrogen gas (H(2)) for mass spectrometric analysis are labor intensive, require special reagents, and involve memory effects and potential isotope fractionation. The objective of this study was to determine the accuracy and precision of a platinum-catalyzed H(2)-water equilibration method for stable hydrogen isotope ratio measurements. Time to reach isotopic equilibrium, day-to-day and week-to-week reproducibility, accuracy, and precision of stable hydrogen isotope ratio measurements by the H(2)-water equilibration method were assessed using a Thermo DELTA V Advantage continuous-flow isotope ratio mass spectrometer. It took 3 h to reach isotopic equilibrium. The day-to-day and week-to-week measurements on water and urine samples with natural abundance and enriched levels of deuterium were highly reproducible. The method was accurate to within 2.8‰ and reproducible to within 4.0‰ based on analysis of international references. All the outcome variables, whether in urine samples collected in 10 doubly labeled water studies or plasma samples collected in 26 body water studies, did not differ from those obtained using the reference zinc reduction method. The method produced highly accurate estimates of ad libitum energy intakes, body composition, and water turnover rates. The method greatly reduces the analytical cost and could easily be adopted by laboratories equipped with a continuous-flow isotope ratio mass spectrometer.
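
    For readers unfamiliar with the ‰ values quoted above: stable isotope results are conventionally reported in delta notation relative to a standard (VSMOW for hydrogen). A minimal sketch follows; the sample ratios in the helper functions' tests are invented, and only the VSMOW D/H ratio is a published constant.

```python
# Delta notation: delta = (R_sample / R_standard - 1) * 1000, in per mil (‰).
VSMOW_D_H = 155.76e-6  # D/H ratio of Vienna Standard Mean Ocean Water

def delta_permil(r_sample, r_standard=VSMOW_D_H):
    """Convert an isotope ratio to a delta value in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

def accuracy_permil(measured, reference):
    """Accuracy as the mean signed deviation from reference deltas, in ‰."""
    diffs = [m - r for m, r in zip(measured, reference)]
    return sum(diffs) / len(diffs)

def reproducibility_permil(replicates):
    """Reproducibility as the spread (max - min) of replicate deltas, in ‰."""
    return max(replicates) - min(replicates)
```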

  6. A Hydrogen Gas-Water Equilibration Method Produces Accurate and Precise Stable Hydrogen Isotope Ratio Measurements in Nutrition Studies12

    PubMed Central

    Wong, William W.; Clarke, Lucinda L.

    2012-01-01

    Stable hydrogen isotope methodology is used in nutrition studies to measure growth, breast milk intake, and energy requirement. Isotope ratio MS is the best instrumentation to measure the stable hydrogen isotope ratios in physiological fluids. Conventional methods to convert physiological fluids to hydrogen gas (H2) for mass spectrometric analysis are labor intensive, require special reagents, and involve memory effects and potential isotope fractionation. The objective of this study was to determine the accuracy and precision of a platinum-catalyzed H2-water equilibration method for stable hydrogen isotope ratio measurements. Time to reach isotopic equilibrium, day-to-day and week-to-week reproducibility, accuracy, and precision of stable hydrogen isotope ratio measurements by the H2-water equilibration method were assessed using a Thermo DELTA V Advantage continuous-flow isotope ratio mass spectrometer. It took 3 h to reach isotopic equilibrium. The day-to-day and week-to-week measurements on water and urine samples with natural abundance and enriched levels of deuterium were highly reproducible. The method was accurate to within 2.8‰ and reproducible to within 4.0‰ based on analysis of international references. All the outcome variables, whether in urine samples collected in 10 doubly labeled water studies or plasma samples collected in 26 body water studies, did not differ from those obtained using the reference zinc reduction method. The method produced highly accurate estimates of ad libitum energy intakes, body composition, and water turnover rates. The method greatly reduces the analytical cost and could easily be adopted by laboratories equipped with a continuous-flow isotope ratio mass spectrometer. PMID:23014490

  7. Precision and Power Grip Priming by Observed Grasping

    ERIC Educational Resources Information Center

    Vainio, Lari; Tucker, Mike; Ellis, Rob

    2007-01-01

    The coupling of hand grasping stimuli and the subsequent grasp execution was explored in normal participants. Participants were asked to respond with their right- or left-hand to the accuracy of an observed (dynamic) grasp while they were holding precision or power grasp response devices in their hands (e.g., precision device/right-hand; power…

  8. Reproducibility Data on SUMMiT

    SciTech Connect

    Irwin, Lloyd; Jakubczak, Jay; Limary, Siv; McBrayer, John; Montague, Stephen; Smith, James; Sniegowski, Jeffry; Stewart, Harold; de Boer, Maarten

    1999-07-16

    SUMMiT (Sandia Ultra-planar Multi-level MEMS Technology) at the Sandia National Laboratories' MDL (Microelectronics Development Laboratory) is a standardized MEMS (Microelectromechanical Systems) technology that allows designers to fabricate concept prototypes. This technology provides four polysilicon layers plus three sacrificial oxide layers (with the third oxide layer being planarized) to enable fabrication of complex mechanical systems-on-a-chip. Quantified reproducibility of the SUMMiT process is important for process engineers as well as designers. Summary statistics for critical MEMS technology parameters such as film thickness, line width, and sheet resistance will be reported for the SUMMiT process. Additionally, data from Van der Pauw test structures will be presented. Data on film thickness, film uniformity and critical dimensions of etched line widths are collected from both process and monitor wafers during manufacturing using film thickness metrology tools and SEM tools. A standardized diagnostic module is included in each SUMMiT run to obtain post-processing parametric data to monitor run-to-run reproducibility, such as Van der Pauw structures for measuring sheet resistance. This characterization of the SUMMiT process enables design for manufacturability in the SUMMiT technology.

  9. A 3-D Multilateration: A Precision Geodetic Measurement System

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Fliegel, H. F.; Jaffe, R. M.; Muller, P. M.; Ong, K. M.; Vonroos, O. H.

    1972-01-01

    A system was designed with the capability of determining 1-cm accuracy station positions in three dimensions using pulsed laser earth satellite tracking stations coupled with strictly geometric data reduction. With this high accuracy, several crucial geodetic applications become possible, including earthquake hazards assessment, precision surveying, plate tectonics, and orbital determination.
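
    The geometric data reduction behind multilateration can be sketched as a small least-squares problem: recover an unknown position from range measurements to points with known coordinates. The Gauss-Newton loop below is an illustrative sketch, not the JPL system's actual reduction, and the coordinates are invented.

```python
# Multilateration sketch: solve for a 3-D position given ranges to known
# anchor points, via a few Gauss-Newton iterations on the range residuals.
import numpy as np

def multilaterate(anchors, ranges, x0=None, iters=20):
    """Least-squares position estimate from ranges to known anchor points."""
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    x = anchors.mean(axis=0) if x0 is None else np.asarray(x0, float)
    for _ in range(iters):
        diff = x - anchors                   # (n, 3) vectors anchor -> estimate
        dist = np.linalg.norm(diff, axis=1)  # predicted ranges
        J = diff / dist[:, None]             # Jacobian d(range)/d(position)
        r = ranges - dist                    # range residuals
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x + dx
    return x
```

    With four or more well-spread anchors the three position components are overdetermined, which is what gives the purely geometric solution its strength.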

  10. Reproducible and deterministic production of aspheres

    NASA Astrophysics Data System (ADS)

    Leitz, Ernst Michael; Stroh, Carsten; Schwalb, Fabian

    2015-10-01

    Aspheric lenses are ground in a single point cutting mode. Subsequently different iterative polishing methods are applied, followed by aberration measurements on external metrology instruments. For an economical production, metrology and correction steps need to be reduced. More deterministic grinding and polishing is mandatory. Single point grinding is a path-controlled process. The quality of a ground asphere is mainly influenced by the accuracy of the machine. Machine improvements must focus on path accuracy and thermal expansion. Optimized design, materials and thermal management reduce thermal expansion. The path accuracy can be improved using ISO 230-2 standardized measurements. Repeated interferometric measurements over the total travel of all CNC axes in both directions are recorded. Position deviations evaluated in correction tables improve the path accuracy and that of the ground surface. Aspheric polishing using a sub-aperture flexible polishing tool is a dwell-time controlled process. For plano and spherical polishing the amount of material removal during polishing is proportional to pressure, relative velocity and time (Preston). For the use of flexible tools on aspheres or freeform surfaces additional non-linear components are necessary. Satisloh ADAPT calculates a predicted removal function from lens geometry, tool geometry and process parameters with FEM. Additionally the tool's local removal characteristic is determined in a simple test. By oscillating the tool on a plano or spherical sample of the same lens material, a trench is created. Its 3-D profile is measured to calibrate the removal simulation. Remaining aberrations of the desired lens shape can be predicted, reducing iteration and metrology steps.

  11. Development of a facility for high-precision irradiation of cells with carbon ions

    SciTech Connect

    Goethem, Marc-Jan van; Niemantsverdriet, Maarten; Brandenburg, Sytze; Langendijk, Johannes A.; Coppes, Robert P.; Luijk, Peter van

    2011-01-15

    the irradiation of cell samples with the specified accuracy. Measurements of the transverse and longitudinal dose distribution showed that the dose variation over the sample volume was ±0.8% and ±0.7% in the lateral and longitudinal directions, respectively. The track-averaged LET of 132±10 keV/μm and dose-averaged LET of 189±15 keV/μm at the position of the sample were obtained from a GEANT4 simulation, which was validated experimentally. Three separately measured cell-survival curves yielded nearly identical results. Conclusions: With the new facility, high-precision carbon-ion irradiations of biological samples can be performed with highly reproducible results.

  12. Accuracy of analyses of microelectronics nanostructures in atom probe tomography

    NASA Astrophysics Data System (ADS)

    Vurpillot, F.; Rolland, N.; Estivill, R.; Duguay, S.; Blavette, D.

    2016-07-01

    The routine use of atom probe tomography (APT) as a nano-analysis microscope in the semiconductor industry requires the precise evaluation of the metrological parameters of this instrument (spatial accuracy, spatial precision, composition accuracy or composition precision). The spatial accuracy of this microscope is evaluated in this paper in the analysis of planar structures such as high-k metal gate stacks. It is shown both experimentally and theoretically that the in-depth accuracy of reconstructed APT images is perturbed when analyzing this structure composed of an oxide layer of high electrical permittivity (higher-k dielectric constant) that separates the metal gate and the semiconductor channel of a field emitter transistor. Large differences in the evaporation field between these layers (resulting from large differences in material properties) are the main sources of image distortions. An analytic model is used to interpret inaccuracy in the depth reconstruction of these devices in APT.

  13. Overview of the national precision database for ozone

    SciTech Connect

    Mikel, D.K.

    1999-07-01

    One of the most important ambient air monitoring quality assurance indicators is the precision test. Code of Federal Regulation Title 40, Section 58 (40 CFR 58) Appendix A1 states that all automated analyzers must have precision tests performed at least once every two weeks. Precision tests can be the best indicator of data quality for the following reasons: Precision tests are performed once every two weeks, giving approximately 24 to 26 tests per year per instrument, whereas accuracy tests (audits) usually occur only 1-2 times per year. Precision tests and the subsequent statistical tests can be used to calculate the bias in a set of data. Precision tests are used to calculate 95% confidence (probability) limits for the data set. This is important because the confidence of any data point can be determined; if any exceedances or near exceedances of the ozone NAAQS are examined, the confidence limits must be examined as well. Precision tests are performed by the monitoring staff, and the precision standards are certified against the internal agency primary standards. Precision data are submitted by all state and local agencies that are required to submit criteria pollutant data to the Aerometric Information Retrieval System (AIRS) database. This subset of the AIRS database is named the Precision and Accuracy Reporting System (PARS). In essence, the precision test is an internal check performed by the agency collecting and reporting the data.
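
    The precision-check statistics described above reduce to percent differences between analyzer readings and the check standard, from which approximate 95% probability limits follow. This is a simplified sketch; the exact 40 CFR 58 Appendix A formulas differ in detail, and the ozone concentrations in the test are invented.

```python
# Biweekly precision check: compare analyzer readings with a known
# standard, then summarize the percent differences as a 95% band.
from statistics import mean, stdev

def percent_differences(measured, standard):
    """Percent difference of each analyzer reading from the check standard."""
    return [100.0 * (m - s) / s for m, s in zip(measured, standard)]

def probability_limits(diffs, z=1.96):
    """Approximate 95% probability limits (bias band) of the percent differences."""
    m, s = mean(diffs), stdev(diffs)
    return m - z * s, m + z * s
```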

  14. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    A precision liquid level sensor utilizes a balanced bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  15. Precision Measurement in Biology

    NASA Astrophysics Data System (ADS)

    Quake, Stephen

    Is biology a quantitative science like physics? I will discuss the role of precision measurement in both physics and biology, and argue that in fact both fields can be tied together by the use and consequences of precision measurement. The elementary quanta of biology are twofold: the macromolecule and the cell. Cells are the fundamental unit of life, and macromolecules are the fundamental elements of the cell. I will describe how precision measurements have been used to explore the basic properties of these quanta, and more generally how the quest for higher precision almost inevitably leads to the development of new technologies, which in turn catalyze further scientific discovery. In the 21st century, there are no remaining experimental barriers to biology becoming a truly quantitative and mathematical science.

  16. Precision Environmental Radiation Monitoring System

    SciTech Connect

    Vladimir Popov, Pavel Degtiarenko

    2010-07-01

    A new precision low-level environmental radiation monitoring system has been developed and tested at Jefferson Lab. This system provides environmental radiation measurements with accuracy and stability of the order of 1 nGy/h in an hour, roughly corresponding to approximately 1% of the natural cosmic background at sea level. An advanced electronic front-end has been designed and produced for use with the industry-standard High Pressure Ionization Chamber detector hardware. A new highly sensitive readout circuit was designed to measure charge from the virtually suspended ionization chamber ion-collecting electrode. The new signal processing technique and dedicated data acquisition were tested together with the new readout. The system enabled data collection on a remote Linux-operated workstation, connected to the detectors using a standard telephone cable line. The data acquisition algorithm is built around a continuously running 24-bit resolution, 192 kHz sampling analog-to-digital converter. The major features of the design include extremely low leakage current in the input circuit, true charge-integrating operation, and relatively fast response to intermediate radiation changes. These features allow the device to operate as an environmental radiation monitor at the perimeters of radiation-generating installations in densely populated areas, as well as in other monitoring and security applications requiring high precision and long-term stability. Initial system evaluation results are presented.

  17. PSYCHOLOGY. Estimating the reproducibility of psychological science.

    PubMed

    2015-08-28

    Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams. PMID:26315443
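
    One of the metrics quoted above (47% of original effect sizes inside the replication 95% confidence interval) reduces to a simple coverage count. A sketch with invented study pairs, not the project's actual data:

```python
# Coverage metric: what fraction of original effect sizes fall inside
# the 95% confidence interval of the matching replication effect?
def ci_coverage(original_effects, replication_cis):
    """Fraction of original effects inside their replication's 95% CI."""
    hits = sum(lo <= eff <= hi for eff, (lo, hi) in zip(original_effects, replication_cis))
    return hits / len(original_effects)
```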

  19. Precision displacement reference system

    DOEpatents

    Bieg, Lothar F.; Dubois, Robert R.; Strother, Jerry D.

    2000-02-22

    A precision displacement reference system is described, which enables real-time accountability over the displacement feedback systems applied to precision machine tools, positioning mechanisms, motion devices, and related operations. As independent measurements of tool location are taken by a displacement feedback system, a rotating reference disk compares feedback counts with performed motion. These measurements are compared to characterize and analyze real-time mechanical and control performance during operation.

  20. Test of CCD Precision Limits for Differential Photometry

    NASA Technical Reports Server (NTRS)

    Robinson, L. B.; Wei, M. Z.; Borucki, W. J.; Dunham, E. W.; Ford, C. H.; Granados, A. F.

    1995-01-01

    Results of tests to demonstrate the very high differential-photometric stability of CCD light sensors are presented. The measurements reported here demonstrate that in a controlled laboratory environment, a front-illuminated CCD can provide differential-photometric measurements with reproducible precision approaching one part in 10(exp 5). Practical limitations to the precision of differential-photometric measurements with CCDs and implications for spaceborne applications are discussed.

  1. Test of CCD Precision Limits for Differential Photometry

    NASA Technical Reports Server (NTRS)

    Borucki, W. J.; Dunham, E. W.; Wei, M. Z.; Robinson, L. B.; Ford, C. H.; Granados, A. F.

    1995-01-01

    Results of tests to demonstrate the very high differential-photometric stability of CCD light sensors are presented. The measurements reported here demonstrate that in a controlled laboratory environment, a front-illuminated CCD can provide differential-photometric measurements with reproducible precision approaching one part in 10(exp 5). Practical limitations to the precision of differential-photometric measurements with CCDs and implications for spaceborne applications are discussed.

  2. Is Grannum grading of the placenta reproducible?

    NASA Astrophysics Data System (ADS)

    Moran, Mary; Ryan, John; Brennan, Patrick C.; Higgins, Mary; McAuliffe, Fionnuala M.

    2009-02-01

    Current ultrasound assessment of placental calcification relies on Grannum grading. The aim of this study was to assess if this method is reproducible by measuring inter- and intra-observer variation in grading placental images, under strictly controlled viewing conditions. Thirty placental images were acquired and digitally saved. Five experienced sonographers independently graded the images on two separate occasions. In order to eliminate any technological factors which could affect data reliability and consistency all observers reviewed images at the same time. To optimise viewing conditions ambient lighting was maintained between 25-40 lux, with monitors calibrated to the GSDF standard to ensure consistent brightness and contrast. Kappa (κ) analysis of the grades assigned was used to measure inter- and intra-observer reliability. Intra-observer agreement had a moderate mean κ-value of 0.55, with individual comparisons ranging from 0.30 to 0.86. Two images saved from the same patient, during the same scan, were each graded as I, II and III by the same observer. A mean κ-value of 0.30 (range from 0.13 to 0.55) indicated fair inter-observer agreement over the two occasions and only one image was graded consistently the same by all five observers. The study findings confirmed the lack of reproducibility associated with Grannum grading of the placenta despite optimal viewing conditions and highlight the need for new methods of assessing placental health in order to improve neonatal outcomes. Alternative methods for quantifying placental calcification such as a software based technique and 3D ultrasound assessment need to be explored.
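
    The κ-values above are Cohen's kappa, a chance-corrected agreement statistic for two raters. A minimal sketch, with invented Grannum grades rather than the study's data:

```python
# Cohen's kappa: kappa = (p_observed - p_chance) / (1 - p_chance),
# where p_chance comes from each rater's marginal grade frequencies.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' labels."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_chance = sum(counts_a[l] * counts_b[l] for l in labels) / n**2
    return (p_obs - p_chance) / (1 - p_chance)
```

    A kappa of 1 is perfect agreement and 0 is chance-level agreement, which is why the study's intra-observer mean of 0.55 reads as only "moderate".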

  3. Datathons and Software to Promote Reproducible Research

    PubMed Central

    2016-01-01

    Background Datathons facilitate collaboration between clinicians, statisticians, and data scientists in order to answer important clinical questions. Previous datathons have resulted in numerous publications of interest to the critical care community and serve as a viable model for interdisciplinary collaboration. Objective We report on an open-source software called Chatto that was created by members of our group, in the context of the second international Critical Care Datathon, held in September 2015. Methods Datathon participants formed teams to discuss potential research questions and the methods required to address them. They were provided with the Chatto suite of tools to facilitate their teamwork. Each multidisciplinary team spent the next 2 days with clinicians working alongside data scientists to write code, extract and analyze data, and reformulate their queries in real time as needed. All projects were then presented on the last day of the datathon to a panel of judges that consisted of clinicians and scientists. Results Use of Chatto was particularly effective in the datathon setting, enabling teams to reduce the time spent configuring their research environments to just a few minutes—a process that would normally take hours to days. Chatto continued to serve as a useful research tool after the conclusion of the datathon. Conclusions This suite of tools fulfills two purposes: (1) facilitation of interdisciplinary teamwork through archiving and version control of datasets, analytical code, and team discussions, and (2) advancement of research reproducibility by functioning postpublication as an online environment in which independent investigators can rerun or modify analyses with relative ease. With the introduction of Chatto, we hope to solve a variety of challenges presented by collaborative data mining projects while improving research reproducibility. PMID:27558834

  4. Modeling the spectrum of the 2ν2 and ν4 states of ammonia to experimental accuracy

    NASA Astrophysics Data System (ADS)

    Pearson, John C.; Yu, Shanshan; Pirali, Olivier

    2016-09-01

    The vibrational spectrum of ammonia has received an enormous amount of attention due to its potential prevalence in hot exo-planet atmospheres and persistent challenges in assigning and modeling highly excited and often highly perturbed states. Effective Hamiltonian models face challenges due to strong coupling between the large amplitude inversion and the other small amplitude vibrations. To date, only the ground and ν2 positions could be modeled to experimental accuracy using effective Hamiltonians. Several previous attempts to analyze the 2ν2 and ν4 energy levels failed to model both the microwave and infrared transitions to experimental accuracy. In this work, we performed extensive experimental measurements and data analysis for the 2ν2 and ν4 inversion-rotation and vibrational transitions. We measured 159 new transition frequencies with microwave precision and assigned 1680 new ones from existing Fourier transform spectra recorded in Synchrotron SOLEIL. The newly assigned data significantly expand the range of assigned quantum numbers; combined with all the previously published high-resolution data, the 2ν2 and ν4 states are reproduced to experimental accuracy using a global model described here. Achieving experimental accuracy required inclusion of a number of terms in the effective Hamiltonian that were neglected in previous work. These terms have also been neglected in the analysis of states higher than 2ν2 and ν4 suggesting that the inversion-rotation-vibration spectrum of ammonia may be far more tractable to effective Hamiltonians than previously believed.

  5. Cosputtered composition-spread reproducibility established by high-throughput x-ray fluorescence

    SciTech Connect

    Gregoire, John M.; Dale, Darren; Kazimirov, Alexander; DiSalvo, Francis J.; Dover, R. Bruce van

    2010-09-15

    We describe the characterization of sputtered yttria-zirconia composition spread thin films by x-ray fluorescence (XRF). We also discuss our automated analysis of the XRF data, which was collected in a high throughput experiment at the Cornell High Energy Synchrotron Source. The results indicate that both the composition reproducibility of the library deposition and the composition measurements have a precision of better than 1 atomic percent.

  6. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h of data are requested for high-precision results; however, real-life situations do not always let us collect 24 h of data. Thus repeated GPS surveys of 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently some research groups turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now usually regional studies have been reported. In this study, we adopt a global approach and study the various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of NASA/JPL's GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data. Here, data from each month of a year, over two years in succession, are used in the analysis. Our major conclusion is that a reformulation of the GPS positioning accuracy is necessary when taking seasonal effects into account, and the typical one-term accuracy formulation is expanded to a two-term one.

  7. Effect of slice orientation on reproducibility of fMRI motor activation at 3 Tesla.

    PubMed

    Gustard, S; Fadili, J; Williams, E J; Hall, L D; Carpenter, T A; Brett, M; Bullmore, E T

    2001-12-01

    The effect of slice orientation on reproducibility and sensitivity of 3T fMRI activation using a motor task has been investigated in six normal volunteers. Four slice orientations were used: axial, oblique axial, coronal and sagittal. We applied analysis of variance (ANOVA) to suprathreshold voxel statistics to quantify variability in activation between orientations and between subjects. We also assessed signal detection accuracy in voxels across the whole brain by using a finite mixture model to fit receiver operating characteristic (ROC) curves to the data. Preliminary findings suggest that suprathreshold cluster characteristics demonstrate high motor reproducibility across subjects and orientations, although a significant difference between slice orientations in number of activated voxels was demonstrated in left motor cortex but not cerebellum. Subtle inter-orientation differences are highlighted in the ROC analyses, which are not obvious by ANOVA; the oblique axial slice orientation offers the highest signal detection accuracy, whereas coronal slices give the lowest.
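
    The ROC analysis mentioned above ranks voxels by a detection statistic; the area under the ROC curve (AUC) summarizes signal detection accuracy. A minimal sketch using the rank-sum formulation, with invented scores rather than fMRI data:

```python
# AUC via the Mann-Whitney formulation: the probability that a randomly
# chosen positive outscores a randomly chosen negative (ties count half).
def roc_auc(scores, labels):
    """Area under the ROC curve for binary labels (1 = signal, 0 = noise)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(p > n for p in pos for n in neg)
    ties = sum(p == n for p in pos for n in neg)
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```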

  8. Durations of extended mental rehearsals are remarkably reproducible in higher level human performances.

    PubMed

    Brothers, L; Shaw, G L; Wright, E L

    1993-12-01

    It has been extremely difficult to quantify temporal aspects of higher level human brain function. We have found that mental rehearsals of musical performances of several minutes' duration provide such a measure in that they can be highly reproducible, varying by less than 1%. These remarkable results pose fundamental neurophysiological problems. It is necessary to understand the underlying neuronal bases for this accuracy in the spatial-temporal activity of billions of neurons over minutes without sensory input. Further, they present a powerful constraint on neuronal models of brain function. Such highly reproducible (in duration) mental rehearsals might be used in conjunction with multielectrode EEG recordings to look for reproducible spatial-temporal patterns. Further, we suggest that our results may provide an extremely useful behavioural correlate for high level performance.

  9. Precise Orbit Determination of GPS Satellites Using Phase Observables

    NASA Astrophysics Data System (ADS)

    Jee, Myung-Kook; Choi, Kyu-Hong; Park, Pil-Ho

    1997-12-01

    The accuracy of user position by GPS is heavily dependent upon the accuracy of the satellite positions, which are usually transmitted to GPS users in radio signals. The real-time satellite position information directly obtained from broadcast ephemerides has an accuracy of 3-10 meters, which is unsatisfactory for measuring a 100 km baseline to an accuracy of less than a few millimeters. There are at present seven orbit analysis centers worldwide capable of generating precise GPS ephemerides, and their orbit quality is of the order of about 10 cm. Therefore, the precise orbit model and phase processing technique were reviewed, and consequently precise GPS ephemerides were produced after processing the phase observables of 28 global GPS stations for 1 day. Initial 6 orbit parameters and 2 solar radiation coefficients were estimated using a batch least-squares algorithm, and the final results were compared with the orbit of the IGS, the International GPS Service for Geodynamics.

  10. Consideration of shear modulus in biomechanical analysis of peri-implant jaw bone: accuracy verification using image-based multi-scale simulation.

    PubMed

    Matsunaga, Satoru; Naito, Hiroyoshi; Tamatsu, Yuichi; Takano, Naoki; Abe, Shinichi; Ide, Yoshinobu

    2013-01-01

    The aim of this study was to clarify the influence of shear modulus on the analytical accuracy in peri-implant jaw bone simulation. A 3D finite element (FE) model was prepared based on micro-CT data obtained from images of a jawbone containing implants. A precise model that closely reproduced the trabecular architecture, and equivalent models that gave shear modulus values taking the trabecular architecture into account, were prepared. Displacement norms during loading were calculated, and the displacement error was evaluated. The model that gave shear modulus values taking the trabecular architecture into account showed an analytical error of around 10-20% in the cancellous bone region, while in the model that used an incorrect shear modulus, the analytical error exceeded 40% in certain regions. The shear modulus should be evaluated precisely, in addition to Young's modulus, when considering the mechanics of peri-implant trabecular bone structure.

  11. Ion channel stochasticity may be critical in determining the reliability and precision of spike timing.

    PubMed

    Schneidman, E; Freedman, B; Segev, I

    1998-10-01

    The firing reliability and precision of an isopotential membrane patch consisting of a realistically large number of ion channels is investigated using a stochastic Hodgkin-Huxley (HH) model. In sharp contrast to the deterministic HH model, the biophysically inspired stochastic model reproduces qualitatively the different reliability and precision characteristics of spike firing in response to DC and fluctuating current input in neocortical neurons, as reported by Mainen & Sejnowski (1995). For DC inputs, spike timing is highly unreliable; the reliability and precision are significantly increased for fluctuating current input. This behavior is critically determined by the relatively small number of excitable channels that are opened near threshold for spike firing rather than by the total number of channels that exist in the membrane patch. Channel fluctuations, together with the inherent bistability in the HH equations, give rise to three additional experimentally observed phenomena: subthreshold oscillations in the membrane voltage for DC input, "spontaneous" spikes for subthreshold inputs, and "missing" spikes for suprathreshold inputs. We suggest that the noise inherent in the operation of ion channels enables neurons to act as "smart" encoders. Slowly varying, uncorrelated inputs are coded with low reliability and accuracy and, hence, the information about such inputs is encoded almost exclusively by the spike rate. On the other hand, correlated presynaptic activity produces sharp fluctuations in the input to the postsynaptic cell, which are then encoded with high reliability and accuracy. In this case, information about the input exists in the exact timing of the spikes. We conclude that channel stochasticity should be considered in realistic models of neurons.
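    The channel-number effect described above (fluctuations matter because relatively few channels are open near threshold) can be illustrated with a minimal stochastic gating sketch. This is not the authors' stochastic Hodgkin-Huxley model, only a hedged two-state (closed/open) channel population with assumed rates, updated by binomial sampling:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_channels(n_channels=1000, alpha=0.5, beta=0.2, dt=0.01, steps=2000):
        """Stochastic two-state channel gating via binomial updates.

        alpha, beta: assumed opening/closing rates (1/ms); dt: step (ms).
        Returns the open-channel count over time.
        """
        n_open = 0
        trace = np.empty(steps, dtype=int)
        for t in range(steps):
            opened = rng.binomial(n_channels - n_open, alpha * dt)  # closed -> open
            closed = rng.binomial(n_open, beta * dt)                # open -> closed
            n_open += opened - closed
            trace[t] = n_open
        return trace

    trace = simulate_channels()
    # The open fraction should settle near alpha / (alpha + beta), with
    # fluctuations whose relative size grows as n_channels shrinks.
    print(trace[-500:].mean() / 1000)
    ```

    Rerunning with a small `n_channels` makes the relative fluctuations much larger, which is the intuition behind unreliable firing near threshold.
    
    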

  12. Haptic Guidance Needs to Be Intuitive Not Just Informative to Improve Human Motor Accuracy

    PubMed Central

    Mugge, Winfred; Kuling, Irene A.; Brenner, Eli; Smeets, Jeroen B. J.

    2016-01-01

    Humans make both random and systematic errors when reproducing learned movements. Intuitive haptic guidance that assists one to make the movements reduces such errors. Our study examined whether any additional haptic information about the location of the target reduces errors in a position reproduction task, or whether the haptic guidance needs to be assistive to do so. Holding a haptic device, subjects made reaches to visible targets without time constraints. They did so in a no-guidance condition, and in guidance conditions in which the direction of the force with respect to the target differed, but the force scaled with the distance to the target in the same way. We examined whether guidance forces directed towards the target would reduce subjects’ errors in reproducing a prior position to the same extent as do forces rotated by 90 degrees or 180 degrees, as it might because the forces provide the same information in all three cases. Without vision of the arm, both the accuracy and precision were significantly better with guidance directed towards the target than in all other conditions. The errors with rotated guidance did not differ from those without guidance. Not surprisingly, the movements tended to be faster when guidance forces directed the reaches to the target. This study shows that haptic guidance significantly improved motor performance when using it was intuitive, while non-intuitively presented information did not lead to any improvements and seemed to be ignored even in our simple paradigm with static targets and no time constraints. PMID:26982481

  13. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√N_sim rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√N_sim limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
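    The benefit of exploiting sparsity can be seen in a toy NumPy sketch. This is not the paper's estimator; it simply zeroes the sample precision matrix outside a known sparse support (here a tridiagonal truth, an assumption for illustration) and shows the error drops relative to the raw sample precision matrix:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p, n_sim = 20, 500

    # True sparse (tridiagonal) precision matrix and its covariance
    prec_true = np.eye(p) * 2.0
    idx = np.arange(p - 1)
    prec_true[idx, idx + 1] = prec_true[idx + 1, idx] = -0.9
    cov_true = np.linalg.inv(prec_true)

    # Draw "simulations" and form the sample precision matrix
    samples = rng.multivariate_normal(np.zeros(p), cov_true, size=n_sim)
    cov_hat = np.cov(samples, rowvar=False)
    prec_hat = np.linalg.inv(cov_hat)

    # Exploit the known sparsity: keep only the true support
    prec_sparse = np.where(np.abs(prec_true) > 0, prec_hat, 0.0)

    err_full = np.linalg.norm(prec_hat - prec_true)
    err_sparse = np.linalg.norm(prec_sparse - prec_true)
    print(err_sparse < err_full)  # the sparsity-aware estimate is closer
    ```

    Zeroing entries that are truly zero removes their noise contribution outright, which is the mechanism behind the faster convergence the abstract describes; the paper's method additionally estimates the support and an error model rather than assuming them.
    
    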

  14. Precision-controlled elution of a 82Sr/82Rb generator for cardiac perfusion imaging with positron emission tomography

    NASA Astrophysics Data System (ADS)

    Klein, R.; Adler, A.; Beanlands, R. S.; de Kemp, R. A.

    2007-02-01

    A rubidium-82 (82Rb) elution system is described for use with positron emission tomography. Due to the short half-life of 82Rb (76 s), the system physics must be modelled precisely to account for transport delay and the associated activity decay and dispersion. Saline flow is switched between a 82Sr/82Rb generator and a bypass line to achieve a constant-activity elution of 82Rb. Pulse width modulation (PWM) of a solenoid valve is compared to simple threshold control as a means to simulate a proportional valve. A predictive-corrective control (PCC) algorithm is developed which produces a constant-activity elution within the constraints of long feedback delay and short elution time. The system model parameters are adjusted through a self-tuning algorithm to minimize error versus the requested time-activity profile. The system is self-calibrating with 2.5% repeatability, independent of generator activity and elution flow rate. Accurate 30 s constant-activity elutions of 10-70% of the total generator activity are achieved using both control methods. The combined PWM-PCC method provides significant improvement in precision and accuracy of the requested elution profiles. The 82Rb elution system produces accurate and reproducible constant-activity elution profiles of 82Rb activity, independent of parent 82Sr activity in the generator. More reproducible elution profiles may improve the quality of clinical and research PET perfusion studies using 82Rb.
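    The control problem described above (constant-activity elution despite transport delay and rapid decay) can be sketched as a toy feedback loop. This is not the paper's PWM-PCC controller; the gains, flow split, and delay below are assumed values for illustration only:

    ```python
    import numpy as np

    HALF_LIFE = 76.0              # s, 82Rb half-life (from the abstract)
    LAMBDA = np.log(2) / HALF_LIFE
    DELAY_STEPS = 5               # hypothetical transport/feedback delay, in steps
    DT = 0.5                      # control interval, s (assumed)

    def run_elution(target=100.0, steps=300, k_p=0.0002):
        """Toy constant-activity elution: a duty cycle in [0, 1] splits saline
        flow between the generator line and a bypass line; eluted activity
        reaches the detector after a delay, decaying in transit, and a simple
        proportional correction steers the measured activity to the target."""
        duty = 0.5
        supply = 400.0                 # activity delivered at duty = 1 (arbitrary units)
        in_transit = [0.0] * DELAY_STEPS
        out = []
        for _ in range(steps):
            in_transit.append(duty * supply)
            # Activity decays during the transport delay before measurement
            measured = in_transit.pop(0) * np.exp(-LAMBDA * DELAY_STEPS * DT)
            duty = min(1.0, max(0.0, duty + k_p * (target - measured)))
            out.append(measured)
        return np.array(out)

    activity = run_elution()
    ```

    Even this crude loop settles to the target; the paper's predictive-corrective and self-tuning elements address what a plain proportional law handles poorly, namely the long feedback delay relative to the short 30 s elution.
    
    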

  15. REPRODUCIBLE AND SHAREABLE QUANTIFICATIONS OF PATHOGENICITY

    PubMed Central

    Manrai, Arjun K; Wang, Brice L; Patel, Chirag J; Kohane, Isaac S

    2016-01-01

    There are now hundreds of thousands of pathogenicity assertions that relate genetic variation to disease, but most of this clinically utilized variation has no accepted quantitative disease risk estimate. Recent disease-specific studies have used control sequence data to reclassify large amounts of prior pathogenic variation, but there is a critical need to scale up both the pace and feasibility of such pathogenicity reassessments across human disease. In this manuscript we develop a shareable computational framework to quantify pathogenicity assertions. We release a reproducible “digital notebook” that integrates executable code, text annotations, and mathematical expressions in a freely accessible statistical environment. We extend previous disease-specific pathogenicity assessments to over 6,000 diseases and 160,000 assertions in the ClinVar database. Investigators can use this platform to prioritize variants for reassessment and tailor genetic model parameters (such as prevalence and heterogeneity) to expose the uncertainty underlying pathogenicity-based risk assessments. Finally, we release a website that links users to pathogenic variation for a queried disease, supporting literature, and implied disease risk calculations subject to user-defined and disease-specific genetic risk models in order to facilitate variant reassessments. PMID:26776189
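    The kind of quantitative risk estimate the framework exposes can be illustrated with a minimal Bayesian sketch. This is not the authors' released notebook; it is a hedged toy model in which a variant's frequency in cases and controls is combined with disease prevalence, the parameters the abstract says users can tailor:

    ```python
    def variant_risk(case_freq, control_freq, prevalence):
        """Toy Bayesian disease risk given a variant:
        case_freq    = P(variant | disease)
        control_freq = P(variant | no disease)
        prevalence   = P(disease) in the population."""
        num = case_freq * prevalence
        den = num + control_freq * (1.0 - prevalence)
        return num / den

    # A variant 100x enriched in cases still confers modest absolute risk
    # when the disease itself is rare:
    print(variant_risk(case_freq=0.01, control_freq=0.0001, prevalence=0.001))
    ```

    The example makes the abstract's point concrete: without control-population frequencies and a prevalence estimate, a pathogenicity assertion alone says little about absolute disease risk.
    
    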

  16. Laboratory 20-km cycle time trial reproducibility.

    PubMed

    Zavorsky, G S; Murias, J M; Gow, J; Kim, D J; Poulin-Harnois, C; Kubow, S; Lands, L C

    2007-09-01

    This study evaluated the reproducibility of laboratory-based 20-km time trials in well-trained versus recreational cyclists. Eighteen cyclists (age = 34 +/- 8 yrs; body mass index = 23.1 +/- 2.2 kg/m^2; VO2max = 4.19 +/- 0.65 L/min) completed three 20-km time trials over a month on a Velotron cycle ergometer. Average power output (PO) (W), speed, and heart rate (HR) were significantly lower in the first time trial compared to the second and third. The coefficients of variation (CV) between the second and third trials of the top eight performers for average PO, time to completion, and speed were 1.2%, 0.6%, and 0.5%, respectively, compared to 4.8%, 2.0%, and 2.3% for the bottom ten. In addition, the average HR, VO2, and percentage of VO2max were similar between trials. This study demonstrated that (1) a familiarization session improves the reliability of the measurements (i.e., average PO, time to completion, and speed), and (2) the CV was much smaller for the best performers.

  17. The reproducible radio outbursts of SS Cygni

    NASA Astrophysics Data System (ADS)

    Russell, T. D.; Miller-Jones, J. C. A.; Sivakoff, G. R.; Altamirano, D.; O'Brien, T. J.; Page, K. L.; Templeton, M. R.; Körding, E. G.; Knigge, C.; Rupen, M. P.; Fender, R. P.; Heinz, S.; Maitra, D.; Markoff, S.; Migliari, S.; Remillard, R. A.; Russell, D. M.; Sarazin, C. L.; Waagen, E. O.

    2016-08-01

    We present the results of our intensive radio observing campaign of the dwarf nova SS Cyg during its 2010 April outburst. We argue that the observed radio emission was produced by synchrotron emission from a transient radio jet. Comparing the radio light curves from previous and subsequent outbursts of this system (including high-resolution observations from outbursts in 2011 and 2012) shows that the typical long and short outbursts of this system exhibit reproducible radio outbursts that do not vary significantly between outbursts, which is consistent with the similarity of the observed optical, ultraviolet and X-ray light curves. Contemporaneous optical and X-ray observations show that the radio emission appears to have been triggered at the same time as the initial X-ray flare, which occurs as disc material first reaches the boundary layer. This raises the possibility that the boundary region may be involved in jet production in accreting white dwarf systems. Our high spatial resolution monitoring shows that the compact jet remained active throughout the outburst with no radio quenching.

  18. Precision gap particle separator

    DOEpatents

    Benett, William J.; Miles, Robin; Jones, II., Leslie M.; Stockton, Cheryl

    2004-06-08

    A system for separating particles entrained in a fluid includes a base with a first channel and a second channel. A precision gap connects the first channel and the second channel. The precision gap is of a size that allows small particles to pass from the first channel into the second channel and prevents large particles from passing from the first channel into the second channel. A cover is positioned over the base unit, the first channel, the precision gap, and the second channel. An input port directs the fluid containing the entrained particles into the first channel. An output port directs the large particles out of the first channel. A port connected to the second channel directs the small particles out of the second channel.

  19. How Physics Got Precise

    SciTech Connect

    Kleppner, Daniel

    2005-01-19

    Although the ancients knew the length of the year to about ten parts per million, it was not until the end of the 19th century that precision measurements came to play a defining role in physics. Eventually such measurements made it possible to replace human-made artifacts for the standards of length and time with natural standards. For a new generation of atomic clocks, time keeping could be so precise that the effects of the local gravitational potentials on the clock rates would be important. This would force us to re-introduce an artifact into the definition of the second - the location of the primary clock. I will describe some of the events in the history of precision measurements that have led us to this pleasing conundrum, and some of the unexpected uses of atomic clocks today.
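    The effect of the local gravitational potential on clock rates, the source of the "pleasing conundrum" above, is quantified by the standard near-Earth gravitational redshift formula delta_f/f = g*delta_h/c^2, which a few lines of Python make concrete (the formula is standard physics; only the example height is an arbitrary choice):

    ```python
    g = 9.80665        # m/s^2, standard gravity
    c = 299_792_458.0  # m/s, speed of light

    def fractional_shift(delta_h):
        """Fractional rate difference between two clocks separated by
        delta_h metres in height near Earth's surface: g * delta_h / c^2."""
        return g * delta_h / c**2

    # Raising a clock by 1 m changes its rate at the 1e-16 level,
    # which modern optical clocks can resolve.
    print(fractional_shift(1.0))
    ```

    At the ~1e-18 fractional stability of today's best clocks, even centimetre height differences matter, hence the need to specify the primary clock's location.
    
    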

  20. Precision Muonium Spectroscopy

    NASA Astrophysics Data System (ADS)

    Jungmann, Klaus P.

    2016-09-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In particular, ground state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s-2s transition in the hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements fundamental physical interactions, in particular quantum electrodynamics, can also be tested at the highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium-antimuonium conversion for testing charged lepton number conservation and searches for possible antigravity of muons and dark matter.

  1. Reproducibility and reliability of fetal cardiac time intervals using magnetocardiography.

    PubMed

    van Leeuwen, P; Lange, S; Klein, A; Geue, D; Zhang, Y; Krause, H J; Grönemeyer, D

    2004-04-01

    We investigated several factors which may affect the accuracy of fetal cardiac time intervals (CTI) determined in magnetocardiographic (MCG) recordings: observer differences, the number of available recording sites and the type of sensor used in acquisition. In 253 fetal MCG recordings, acquired using different biomagnetometer devices between the 15th and 42nd weeks of gestation, P-wave, QRS complex and T-wave onsets and ends were identified in signal averaged data sets independently by different observers. Using a defined procedure for setting signal events, interobserver reliability was high. Increasing the number of registration sites led to more accurate identification of the events. The differences in wave morphology between magnetometer and gradiometer configurations led to deviations in timing whereas the differences between low and high temperature devices seemed to be primarily due to noise. Signal-to-noise ratio played an important overall role in the accurate determination of CTI and changes in signal amplitude associated with fetal maturation may largely explain the effects of gestational age on reproducibility. As fetal CTI may be of value in the identification of pathologies such as intrauterine growth retardation or fetal cardiac hypertrophy, their reliable estimation will be enhanced by strategies which take these factors into account.

  2. Precision Heating Process

    NASA Technical Reports Server (NTRS)

    1992-01-01

    A heat sealing process was developed by SEBRA based on technology that originated in work with NASA's Jet Propulsion Laboratory. The project involved connecting and transferring blood and fluids between sterile plastic containers while maintaining a closed system. SEBRA markets the PIRF Process to manufacturers of medical catheters. It is a precisely controlled method of heating thermoplastic materials in a mold to form or weld catheters and other products. The process offers advantages in fast, precise welding or shape forming of catheters as well as applications in a variety of other industries.

  3. Precision manometer gauge

    DOEpatents

    McPherson, Malcolm J.; Bellman, Robert A.

    1984-01-01

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  4. Precision manometer gauge

    DOEpatents

    McPherson, M.J.; Bellman, R.A.

    1982-09-27

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  5. Experimental challenges to reproduce seismic fault motion

    NASA Astrophysics Data System (ADS)

    Shimamoto, T.

    2011-12-01

    This presentation briefly reviews scientific and technical development in the studies of intermediate to high-velocity frictional properties of faults and summarizes remaining technical challenges to reproduce nucleation to growth processes of large earthquakes in laboratory. Nearly 10 high-velocity or low to high-velocity friction apparatuses have been built in the last several years in the world and it has become possible now to produce sub-plate velocity to seismic slip rate in a single machine. Despite spreading of high-velocity friction studies, reproducing seismic fault motion at high P and T conditions to cover the entire seismogenic zone is still a big challenge. Previous studies focused on (1) frictional melting, (2) thermal pressurization, and (3) high-velocity gouge behavior without frictional melting. Frictional melting process was solved as a Stefan problem with very good agreement with experimental results. Thermal pressurization has been solved theoretically based on measured transport properties and has been included successfully in the modeling of earthquake generation. High-velocity gouge experiments in the last several years have revealed that a wide variety of gouges exhibit dramatic weakening at high velocities (e.g., Di Toro et al., 2011, Nature). Most gouge experiments were done under dry conditions partly to separate gouge friction from the involvement of thermal pressurization. However, recent studies demonstrated that dehydration or degassing due to mineral decomposition can occur during seismic fault motion. Those results not only provided a new view of looking at natural fault zones in search of geological evidence of seismic fault motion, but also indicated that thermal pressurization and gouge weakening can occur simultaneously even in initially dry gouge. Thus experiments with controlled pore pressure are needed. I have struggled to make a pressure vessel for wet high-velocity experiments in the last several years. A technical

  6. Asymptotic accuracy of two-class discrimination

    SciTech Connect

    Ho, T.K.; Baird, H.S.

    1994-12-31

    Poor-quality (e.g., sparse or unrepresentative) training data is widely suspected to be one cause of disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.

  7. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational orbit determination (OD) produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended missions are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.

  8. Reproducibility and utility of dune luminescence chronologies

    NASA Astrophysics Data System (ADS)

    Leighton, Carly L.; Thomas, David S. G.; Bailey, Richard M.

    2014-02-01

    Optically stimulated luminescence (OSL) dating of dune deposits has increasingly been used as a tool to investigate the response of aeolian systems to environmental change. Amalgamation of individual dune accumulation chronologies has been employed in order to distinguish regional from local geomorphic responses to change. However, advances in dating have produced chronologies of increasing complexity. In particular, questions regarding the interpretation of dune ages have been raised, including over the most appropriate method to evaluate the significance of suites of OSL ages when local 'noisy' and discontinuous records are combined. In this paper, these issues are reviewed and the reproducibility of dune chronologies is assessed. OSL ages from two cores sampled from the same dune in the northeast Rub' al Khali, United Arab Emirates, are presented and compared, alongside an analysis of previously published dune ages dated to within the last 30 ka. Distinct periods of aeolian activity and preservation are identified, which can be tied to regional climatic and environmental changes. This case study is used to address fundamental questions that are persistently asked of dune dating studies, including the appropriate spatial scale over which to infer environmental and climatic change based on dune chronologies, whether chronological hiatuses can be interpreted, how to most appropriately combine and display datasets, and the relationship between geomorphic and palaeoclimatic signals. Chronological profiles reflect localised responses to environmental variability and climatic forcing, and amalgamation of datasets, with consideration of sampling resolution, is required; otherwise local factors are always likely to dominate. Using net accumulation rates to display ages may provide an informative approach of analysing and presenting dune OSL chronologies less susceptible to biases resulting from insufficient sampling resolution.

  9. Within-patient reproducibility of the aldosterone: renin ratio in primary aldosteronism.

    PubMed

    Rossi, Gian Paolo; Seccia, Teresa Maria; Palumbo, Gaetana; Belfiore, Anna; Bernini, Giampaolo; Caridi, Graziella; Desideri, Giovambattista; Fabris, Bruno; Ferri, Claudio; Giacchetti, Gilberta; Letizia, Claudio; Maccario, Mauro; Mallamaci, Francesca; Mannelli, Massimo; Patalano, Anna; Rizzoni, Damiano; Rossi, Ermanno; Pessina, Achille Cesare; Mantero, Franco

    2010-01-01

    The plasma aldosterone concentration:renin ratio (ARR) is widely used for the screening of primary aldosteronism, but its reproducibility is unknown. We, therefore, investigated the within-patient reproducibility of the ARR in a prospective multicenter study of consecutive hypertensive patients referred to specialized centers for hypertension in Italy. After the patients were carefully prepared from the pharmacological standpoint, the ARR was determined at baseline in 1136 patients and repeated after, on average, 4 weeks in the patients who had initially an ARR > or =40 and in 1 of every 4 of those with an ARR <40. The reproducibility of the ARR was assessed with Passing and Bablok and Deming regression, coefficient of reproducibility, and Bland-Altman and Mountain plots. Within-patient ARR comparison was available in 268 patients, of whom 49 had an aldosterone-producing adenoma, on the basis of the "4-corner criteria." The ARR showed a highly significant within-patient correlation (r=0.69; P<0.0001) and reproducibility. Bland-Altman plot showed no proportional, magnitude-related, or absolute systematic error between the ARR; moreover, only 7% of the values, for example, slightly more than what could be expected by chance, fell out of the 95% CI for the between-test difference. The accuracy of each ARR for pinpointing aldosterone-producing adenoma patients was approximately 80%. Thus, although it was performed under different conditions in a multicenter study, the ARR showed a good within-patient reproducibility. Hence, contrary to previously claimed poor reproducibility of the ARR, these data support its use for the screening of primary aldosteronism. PMID:19933925
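    The Bland-Altman analysis used above to assess within-patient ARR agreement amounts to a mean bias plus 95% limits of agreement on paired differences, which a short NumPy sketch can reproduce (the patient values below are hypothetical, not the study's data):

    ```python
    import numpy as np

    def bland_altman(x, y):
        """Bland-Altman agreement between paired measurements (e.g. two ARR
        determinations per patient): mean bias and 95% limits of agreement."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        diff = x - y
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Hypothetical repeated ARR values for six patients
    first  = [12, 45, 80, 33, 55, 20]
    second = [14, 41, 76, 35, 57, 18]
    bias, (lo, hi) = bland_altman(first, second)
    ```

    A bias near zero with narrow limits of agreement, and few points outside them, is the pattern the study reports in support of the ARR's reproducibility.
    
    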

  10. Precision bolometer bridge

    NASA Technical Reports Server (NTRS)

    White, D. R.

    1968-01-01

    Prototype precision bolometer calibration bridge is manually balanced device for indicating dc bias and balance with either dc or ac power. An external galvanometer is used with the bridge for null indication, and the circuitry monitors voltage and current simultaneously without adapters in testing 100 and 200 ohm thin film bolometers.

  11. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    1985-01-29

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge. 2 figs.

  12. Precision physics at LHC

    SciTech Connect

    Hinchliffe, I.

    1997-05-01

    In this talk the author gives a brief survey of some physics topics that will be addressed by the Large Hadron Collider currently under construction at CERN. Instead of discussing the reach of this machine for new physics, the author gives examples of the types of precision measurements that might be made if new physics is discovered.

  13. Precision in Stereochemical Terminology

    ERIC Educational Resources Information Center

    Wade, Leroy G., Jr.

    2006-01-01

    An analysis of relatively new terminology that has given multiple definitions often resulting in students learning principles that are actually false is presented with an example of the new term stereogenic atom introduced by Mislow and Siegel. The Mislow terminology would be useful in some cases if it were used precisely and correctly, but it is…

  14. High Precision Astrometry

    NASA Astrophysics Data System (ADS)

    Riess, Adam

    2012-10-01

    This program uses the enhanced astrometric precision enabled by spatial scanning to calibrate remaining obstacles to reaching <40 microarcsecond astrometry (<1 millipixel) with WFC3/UVIS by 1) improving geometric distortion, 2) calibrating the effect of breathing on astrometry, 3) calibrating the effect of CTE on astrometry, and 4) characterizing the boundaries and orientations of the WFC3 lithograph cells.

  15. Precision liquid level sensor

    DOEpatents

    Field, Michael E.; Sullivan, William H.

    1985-01-01

    A precision liquid level sensor utilizes a balanced RF bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line unbalance the bridge and create a voltage which is directly measurable across the bridge.

  16. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but also the full orbital parameters determined, allowing study of system stability in multiple-planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  17. Precision Falling Body Experiment

    ERIC Educational Resources Information Center

    Blackburn, James A.; Koenig, R.

    1976-01-01

    Described is a simple apparatus to determine acceleration due to gravity. It utilizes direct contact switches in lieu of conventional photocells to time the fall of a ball bearing. Accuracies to better than one part in a thousand were obtained. (SL)
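
    The underlying computation is elementary: for a ball released from rest, d = (1/2) g t², so g = 2d/t². A minimal sketch with hypothetical drop data (not the apparatus's actual readings):

```python
def g_from_fall(distance_m, time_s):
    """For a ball released from rest, d = (1/2) g t^2, so g = 2 d / t^2."""
    return 2.0 * distance_m / time_s ** 2

# Hypothetical timing: a 1.000 m drop measured at 451.6 ms
g = g_from_fall(1.000, 0.4516)
```

    Reaching one part in a thousand then hinges entirely on how precisely the drop distance and fall time are measured, which is why the contact-switch timing matters.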

  18. Selection and use of TLDS for high precision NERVA shielding measurements

    NASA Technical Reports Server (NTRS)

    Woodsum, H. C.

    1972-01-01

    An experimental evaluation of thermoluminescent dosimeters was performed in order to select high precision dosimeters for a study whose purpose is to measure gamma streaming through the coolant passages of a simulated flight type internal NERVA reactor shield. Based on this study, the CaF2 chip TLDs are the most reproducible dosimeters with reproducibility generally within a few percent, but none of the TLDs tested met the reproducibility criterion of plus or minus 2%.

  19. Measurement accuracy and Cerenkov removal for high performance, high spatial resolution scintillation dosimetry

    SciTech Connect

    Archambault, Louis; Beddar, A. Sam; Gingras, Luc

    2006-01-15

    With highly conformal radiation therapy techniques such as intensity-modulated radiation therapy, radiosurgery, and tomotherapy becoming more common in clinical practice, the use of these narrow beams requires a higher level of precision in quality assurance and dosimetry. Plastic scintillators, with their water equivalence, energy independence, and dose rate linearity, have been shown to possess excellent qualities that suit the most complex and demanding radiation therapy treatment plans. The primary disadvantage of plastic scintillators is the presence of Cerenkov radiation generated in the light guide, which results in an undesired stem effect. Several techniques have been proposed to minimize this effect. In this study, we compared three such techniques--background subtraction, simple filtering, and chromatic removal--in terms of reproducibility and dose accuracy as gauges of their ability to remove the Cerenkov stem effect from the dose signal. The dosimeter used in this study comprised a 6-mm³ plastic scintillating fiber probe, an optical fiber, and a color charge-coupled device camera. The whole system was shown to be linear, and the total light collected by the camera was reproducible to within 0.31% for a 5-s integration time. Background subtraction and chromatic removal were both found to be suitable for precise dose evaluation, with average absolute dose discrepancies from ion chamber values of 0.52% and 0.67%, respectively. Background subtraction required two optical fibers, but chromatic removal used only one, thereby preventing possible measurement artifacts when a strong dose gradient is perpendicular to the optical fiber. Our findings showed that a plastic scintillation dosimeter can be made free of the effect of Cerenkov radiation.
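
    The chromatic-removal idea can be sketched as a small unmixing problem: each color channel records a different linear mixture of the scintillation and Cerenkov contributions, so two channels suffice to separate them. The mixing coefficients below are hypothetical placeholders for calibration values, not those of the study:

```python
import numpy as np

# Hypothetical 2x2 mixing matrix, which in practice would be obtained
# from calibration shots with known scintillation/Cerenkov content.
M = np.array([[1.00, 0.35],    # green = 1.00*S + 0.35*C
              [0.20, 0.90]])   # blue  = 0.20*S + 0.90*C

def unmix(green, blue):
    """Recover (scintillation S, Cerenkov C) from one fiber's two
    color-channel readings by inverting the calibrated mixture."""
    return np.linalg.solve(M, np.array([green, blue]))

S, C = unmix(green=1.35, blue=1.10)   # readings consistent with S=1, C=1
```

    Because the separation uses color rather than a second reference fiber, a single fiber suffices, which is the advantage noted above for steep dose gradients.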

  20. Aiming for benchmark accuracy with the many-body expansion.

    PubMed

    Richard, Ryan M; Lao, Ka Un; Herbert, John M

    2014-09-16

    Conspectus: The past 15 years have witnessed an explosion of activity in the field of fragment-based quantum chemistry, whereby ab initio electronic structure calculations are performed on very large systems by decomposing them into a large number of relatively small subsystem calculations and then reassembling the subsystem data in order to approximate supersystem properties. Most of these methods are based, at some level, on the so-called many-body (or "n-body") expansion, which ultimately requires calculations on monomers, dimers, ..., n-mers of fragments. To the extent that a low-order n-body expansion can reproduce supersystem properties, such methods replace an intractable supersystem calculation with a large number of easily distributable subsystem calculations. This holds great promise for performing, for example, "gold standard" CCSD(T) calculations on large molecules, clusters, and condensed-phase systems. The literature is awash in a litany of fragment-based methods, each with their own working equations and terminology, which presents a formidable language barrier to the uninitiated reader. We have sought to unify these methods under a common formalism, by means of a generalized many-body expansion that provides a universal energy formula encompassing not only traditional n-body cluster expansions but also methods designed for macromolecules, in which the supersystem is decomposed into overlapping fragments. This formalism allows various fragment-based methods to be systematically classified, primarily according to how the fragments are constructed and how higher-order n-body interactions are approximated. This classification furthermore suggests systematic ways to improve the accuracy. Whereas n-body approaches have been thoroughly tested at low levels of theory in small noncovalent clusters, we have begun to explore the efficacy of these methods for large systems, with the goal of reproducing benchmark-quality calculations, ideally meaning complete
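
    The truncated two-body expansion described above can be sketched as follows. The fragment energies here come from a toy pairwise-additive model (for which the two-body expansion is exact by construction), not from an actual electronic structure code:

```python
from itertools import combinations

def mbe2(fragments, energy):
    """Two-body many-body expansion:
    E ~ sum_i E_i + sum_{i<j} [E_ij - E_i - E_j],
    where energy(frozenset) returns the (model) energy of a sub-cluster.
    """
    E1 = {i: energy(frozenset([i])) for i in fragments}
    total = sum(E1.values())
    for i, j in combinations(fragments, 2):
        total += energy(frozenset([i, j])) - E1[i] - E1[j]
    return total

# Toy model: monomer energies plus strictly pairwise interactions.
mono_e = {0: -10.0, 1: -12.0, 2: -11.0}
pair_e = {frozenset([0, 1]): -1.0,
          frozenset([0, 2]): -0.5,
          frozenset([1, 2]): -0.2}

def toy_energy(cluster):
    e = sum(mono_e[i] for i in cluster)
    e += sum(v for k, v in pair_e.items() if k <= cluster)
    return e

E = mbe2([0, 1, 2], toy_energy)  # equals toy_energy of the full trimer
```

    For real systems, three-body and higher terms are nonzero, and how they are approximated is precisely what distinguishes the fragment-based methods classified above.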

  1. High-precision positioning of radar scatterers

    NASA Astrophysics Data System (ADS)

    Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.

    2016-05-01

    Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is in the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ˜ 66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in azimuth and range dimensions, and elongated in cross-range dimension with a precision in the order of meters (the ratio of the ellipsoid axis lengths is 1/3/213, respectively). The CR absolute 3D position, along with the associated error ellipsoid, is found to be accurate and agree with the ground truth position at a 99 % confidence level. For other non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers to real-world objects.
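
    The error-ellipsoid construction from a variance-covariance matrix can be sketched as an eigendecomposition. The covariance values below are hypothetical, chosen only to mimic the cigar shape described (centimeter precision in azimuth and range, meters in cross-range):

```python
import numpy as np

# Hypothetical 3x3 position covariance (units: m^2), diagonal for
# simplicity: azimuth, range, cross-range.
cov = np.diag([0.02**2, 0.03**2, 4.0**2])

def error_ellipsoid(cov, k=np.sqrt(7.815)):
    """Semi-axis lengths and directions of the confidence ellipsoid.
    Semi-axes are k * sqrt(eigenvalues); k^2 = 7.815 is the 95%
    quantile of a chi-square distribution with 3 degrees of freedom."""
    w, v = np.linalg.eigh(cov)      # eigenvalues in ascending order
    return k * np.sqrt(w), v        # v's columns are the axis directions

semi_axes, directions = error_ellipsoid(cov)
```

    Intersecting such an ellipsoid, centered at the estimated position, with a 3D building model is then a purely geometric test for linking a scatterer to a real-world object.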

  2. Research Reproducibility in Geosciences: Current Landscape, Practices and Perspectives

    NASA Astrophysics Data System (ADS)

    Yan, An

    2016-04-01

    Reproducibility of research can gauge the validity of its findings. Yet we currently lack an understanding of how much of a problem research reproducibility is in the geosciences. We developed an online survey of faculty and graduate students in geosciences and received 136 responses from research institutions and universities in the Americas, Asia, Europe, and other parts of the world. The survey examined (1) the current state of research reproducibility in geosciences, by asking about researchers' experiences with unsuccessful replication work and which obstacles led to their replication failures; (2) the current reproducibility practices in the community, by asking what efforts researchers made to reproduce others' work and to make their own work reproducible, and what underlying factors contribute to irreproducibility; and (3) perspectives on reproducibility, by collecting researchers' thoughts and opinions on the issue. The survey results indicated that nearly 80% of respondents who had ever reproduced a published study had failed at least once. Only one third of the respondents received helpful feedback when they contacted the authors of a published study for data, code, or other information. The primary factors behind unsuccessful replication attempts were insufficiently detailed instructions in the published literature and inaccessibility of the data, code, and tools needed in the study. Our findings suggest a remarkable lack of research reproducibility in the geosciences. Changing the incentive mechanism in academia, as well as developing policies and tools that facilitate open data and code sharing, are promising ways for the geosciences community to alleviate this reproducibility problem.

  3. Semiautomated, Reproducible Batch Processing of Soy

    NASA Technical Reports Server (NTRS)

    Thoerne, Mary; Byford, Ivan W.; Chastain, Jack W.; Swango, Beverly E.

    2005-01-01

    A computer-controlled apparatus processes batches of soybeans into one or more of a variety of food products, under conditions that can be chosen by the user and reproduced from batch to batch. Examples of products include soy milk, tofu, okara (an insoluble protein and fiber byproduct of soy milk), and whey. Most processing steps take place without intervention by the user. This apparatus was developed for use in research on processing of soy. It is also a prototype of other soy-processing apparatuses for research, industrial, and home use. Prior soy-processing equipment includes household devices that automatically produce soy milk but do not automatically produce tofu. The designs of prior soy-processing equipment require users to manually transfer intermediate solid soy products and to press them manually and, hence, under conditions that are not consistent from batch to batch. Prior designs do not afford choices of processing conditions: users cannot use previously developed soy-processing equipment to investigate the effects of variations of techniques used to produce soy milk (e.g., cold grinding, hot grinding, and pre-cook blanching) and of such process parameters as cooking times and temperatures, grinding times, soaking times and temperatures, rinsing conditions, and sizes of particles generated by grinding. In contrast, the present apparatus is amenable to such investigations. The apparatus (see figure) includes a processing tank and a jacketed holding or coagulation tank. The processing tank can be capped by either of two different heads and can contain either of two different insertable mesh baskets. The first head includes a grinding blade and heating elements. The second head includes an automated press piston. One mesh basket, designated the okara basket, has oblong holes with a size equivalent to about 40 mesh [40 openings per inch (16 openings per centimeter)]. The second mesh basket, designated the tofu basket, has holes of 70 mesh [70 openings

  4. Towards Reproducible Descriptions of Neuronal Network Models

    PubMed Central

    Nordlie, Eilen; Gewaltig, Marc-Oliver; Plesser, Hans Ekkehard

    2009-01-01

    Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing—and thinking about—complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain. PMID:19662159

  5. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  6. The Precision Field Lysimeter Concept

    NASA Astrophysics Data System (ADS)

    Fank, J.

    2009-04-01

    The understanding and interpretation of leaching processes have improved significantly during the past decades. Unlike laboratory experiments, which are mostly performed under very controlled conditions (e.g., homogeneous, uniform packing of pre-treated test material, saturated steady-state flow conditions, and controlled uniform hydraulic conditions), lysimeter experiments generally simulate actual field conditions. Lysimeters may be classified according to different criteria, such as the type of soil block used (monolithic or reconstructed), the drainage (by gravity, by vacuum, or with a maintained water table), or weighing versus non-weighing designs. In 2004, experimental investigations were set up to assess the impact of different farming systems on the groundwater quality of the shallow floodplain aquifer of the river Mur in Wagna (Styria, Austria). The sediment is characterized by a thin layer (30 - 100 cm) of sandy Dystric Cambisol and underlying gravel and sand. Three precisely weighing equilibrium-tension block lysimeters have been installed in agricultural test fields to compare water flow and solute transport under (i) organic farming, (ii) conventional low-input farming, and (iii) extensification by mulching grass. Specific monitoring equipment is used to reduce the well-known shortcomings of lysimeter investigations: the lysimeter core is excavated as an undisturbed monolithic block (circular, 1 m2 surface area, 2 m depth) to prevent destruction of the natural soil structure and pore system. Tracer experiments were carried out to investigate the occurrence of artificial preferential flow and transport along the walls of the lysimeters; the results show that such effects can be neglected. Precisely weighing load cells are used to continuously determine the weight loss of the lysimeter due to evaporation and transpiration and to measure different forms of precipitation. The accuracy of the weighing apparatus is 0.05 kg, or 0.05 mm water equivalent.
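
    The quoted equivalence (0.05 kg corresponding to 0.05 mm of water on a 1 m2 lysimeter) follows directly from the density of water; a minimal sketch:

```python
def weight_to_mm_water(delta_kg, area_m2=1.0):
    """Convert a lysimeter weight change to mm of water equivalent.
    1 kg of water spread over 1 m^2 forms a 1 mm layer, since water's
    density is 1000 kg/m^3: depth = mass / (density * area)."""
    depth_m = delta_kg / (1000.0 * area_m2)
    return depth_m * 1000.0  # metres -> millimetres

# The 0.05 kg load-cell resolution on a 1 m^2 surface is 0.05 mm of water:
resolution_mm = weight_to_mm_water(0.05)
```

    The same conversion turns every recorded weight change into an evapotranspiration or precipitation depth directly comparable between the three lysimeters.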

  7. Toward plasmonics with nanometer precision: nonlinear optics of helium-ion milled gold nanoantennas.

    PubMed

    Kollmann, Heiko; Piao, Xianji; Esmann, Martin; Becker, Simon F; Hou, Dongchao; Huynh, Chuong; Kautschor, Lars-Oliver; Bösker, Guido; Vieker, Henning; Beyer, André; Gölzhäuser, Armin; Park, Namkyoo; Vogelgesang, Ralf; Silies, Martin; Lienau, Christoph

    2014-08-13

    Plasmonic nanoantennas are versatile tools for coherently controlling and directing light on the nanoscale. For these antennas, current fabrication techniques such as electron beam lithography (EBL) or focused ion beam (FIB) milling with Ga(+)-ions routinely achieve feature sizes in the 10 nm range. However, they suffer increasingly from inherent limitations when a precision of single nanometers down to atomic length scales is required, where exciting quantum mechanical effects are expected to affect the nanoantenna optics. Here, we demonstrate that a combined approach of Ga(+)-FIB and milling-based He(+)-ion lithography (HIL) for the fabrication of nanoantennas readily overcomes some of these limitations. Gold bowtie antennas with 6 nm gap size were fabricated with single-nanometer accuracy and high reproducibility. Using third harmonic (TH) spectroscopy, we find a substantial enhancement of the nonlinear emission intensity of single HIL-antennas compared to those produced by state-of-the-art gallium-based milling. Moreover, HIL-antennas show a vastly improved polarization contrast. This superior nonlinear performance of HIL-derived plasmonic structures is an excellent testimonial to the application of He(+)-ion beam milling for ultrahigh precision nanofabrication, which in turn can be viewed as a stepping stone to mastering quantum optical investigations in the near-field.

  8. A passion for precision

    ScienceCinema

    None

    2016-07-12

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science. Organiser(s): L. Alvarez-Gaume / PH-TH. Note: Tea & coffee will be served at 16:00.

  9. A passion for precision

    SciTech Connect

    2010-05-19

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science. Organiser(s): L. Alvarez-Gaume / PH-TH. Note: Tea & coffee will be served at 16:00.

  10. Towards precision medicine.

    PubMed

    Ashley, Euan A

    2016-08-16

    There is great potential for genome sequencing to enhance patient care through improved diagnostic sensitivity and more precise therapeutic targeting. To maximize this potential, genomics strategies that have been developed for genetic discovery - including DNA-sequencing technologies and analysis algorithms - need to be adapted to fit clinical needs. This will require the optimization of alignment algorithms, attention to quality-coverage metrics, tailored solutions for paralogous or low-complexity areas of the genome, and the adoption of consensus standards for variant calling and interpretation. Global sharing of this more accurate genotypic and phenotypic data will accelerate the determination of causality for novel genes or variants. Thus, a deeper understanding of disease will be realized that will allow its targeting with much greater therapeutic precision. PMID:27528417

  11. Principles and techniques for designing precision machines

    SciTech Connect

    Hale, L C

    1999-02-01

    This thesis is written to advance the reader's knowledge of precision-engineering principles and their application to designing machines that achieve both sufficient precision and minimum cost. It provides the concepts and tools necessary for the engineer to create new precision machine designs. Four case studies demonstrate the principles and showcase approaches and solutions to specific problems that generally have wider applications. These come from projects at the Lawrence Livermore National Laboratory in which the author participated: the Large Optics Diamond Turning Machine, Accuracy Enhancement of High- Productivity Machine Tools, the National Ignition Facility, and Extreme Ultraviolet Lithography. Although broad in scope, the topics go into sufficient depth to be useful to practicing precision engineers and often fulfill more academic ambitions. The thesis begins with a chapter that presents significant principles and fundamental knowledge from the Precision Engineering literature. Following this is a chapter that presents engineering design techniques that are general and not specific to precision machines. All subsequent chapters cover specific aspects of precision machine design. The first of these is Structural Design, guidelines and analysis techniques for achieving independently stiff machine structures. The next chapter addresses dynamic stiffness by presenting several techniques for Deterministic Damping, damping designs that can be analyzed and optimized with predictive results. Several chapters present a main thrust of the thesis, Exact-Constraint Design. A main contribution is a generalized modeling approach developed through the course of creating several unique designs. The final chapter is the primary case study of the thesis, the Conceptual Design of a Horizontal Machining Center.

  12. Precision orbit determination of altimetric satellites

    NASA Astrophysics Data System (ADS)

    Shum, C. K.; Ries, John C.; Tapley, Byron D.

    1994-11-01

    The ability to determine accurate global sea level variations is important to both the detection and the understanding of changes in climate patterns. Sea level variability occurs over a wide spectrum of temporal and spatial scales, and precise global measurements have only recently become possible with the advent of spaceborne satellite radar altimetry missions. One of the inherent requirements for accurate determination of absolute sea surface topography is that the altimetric satellite orbits be computed with sub-decimeter accuracy within a well defined terrestrial reference frame. SLR tracking in support of precision orbit determination of altimetric satellites is significant. Recent examples are the use of SLR as the primary tracking system for TOPEX/Poseidon and for ERS-1 precision orbit determination. The current radial orbit accuracy for TOPEX/Poseidon is estimated to be around 3-4 cm, with geographically correlated orbit errors around 2 cm. The significance of the SLR tracking system is its ability to allow altimetric satellites to obtain absolute sea level measurements and thereby provide a link to other altimetry measurement systems for long-term sea level studies. SLR tracking allows the production of precise orbits which are well centered in an accurate terrestrial reference frame. With proper calibration of the radar altimeter, these precise orbits, along with the altimeter measurements, provide long-term absolute sea level measurements. The U.S. Navy's Geosat mission was equipped with only Doppler beacons and lacked laser retroreflectors. Its orbits, even those computed using the full 40-station Tranet tracking network, exhibit significant north-south shifts with respect to the IERS terrestrial reference frame. The resulting Geosat sea surface topography will be tilted accordingly, making interpretation of long-term sea level variability studies difficult.

  13. Ultra-Precision Optics

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Under a Joint Sponsored Research Agreement with Goddard Space Flight Center, SEMATECH, Inc., the Silicon Valley Group, Inc. and Tinsley Laboratories, known as SVG-Tinsley, developed an Ultra-Precision Optics Manufacturing System for space and microlithographic applications. Continuing improvements in optics manufacture will be able to meet unique NASA requirements and the production needs of the lithography industry for many years to come.

  14. Precise clock synchronization protocol

    NASA Astrophysics Data System (ADS)

    Luit, E. J.; Martin, J. M. M.

    1993-12-01

    A distributed clock synchronization protocol is presented which achieves a very high precision without the need for very frequent resynchronizations. The protocol tolerates failures of the clocks: clocks may be too slow or too fast, exhibit omission failures and report inconsistent values. Synchronization takes place in synchronization rounds as in many other synchronization protocols. At the end of each round, clock times are exchanged between the clocks. Each clock applies a convergence function (CF) to the values obtained. This function estimates the difference between its clock and an average clock and corrects its clock accordingly. Clocks are corrected for drift relative to this average clock during the next synchronization round. The protocol is based on the assumption that clock reading errors are small with respect to the required precision of synchronization. It is shown that the CF resynchronizes the clocks with high precision even when relatively large clock drifts are possible. It is also shown that the drift-corrected clocks remain synchronized until the end of the next synchronization round. The stability of the protocol is proven.
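
    One classical choice of convergence function, shown here as a generic sketch rather than the protocol's exact CF, is the fault-tolerant average: discard the f lowest and f highest readings, which may come from faulty clocks, and average the rest:

```python
def fault_tolerant_average(readings, f):
    """Trimmed-mean convergence function (a sketch): with more than 2f
    readings, sort them, drop the f smallest and f largest (possibly
    from faulty clocks), and average the remainder.  Each clock would
    step toward this value and correct its rate for the next round."""
    if len(readings) <= 2 * f:
        raise ValueError("need more than 2f readings to tolerate f faults")
    trimmed = sorted(readings)[f:len(readings) - f]
    return sum(trimmed) / len(trimmed)

# Five clocks, one wildly fast (faulty); tolerate f=1 fault:
target = fault_tolerant_average([100.2, 99.8, 100.0, 100.1, 250.0], f=1)
```

    The trimming guarantees that a bounded number of inconsistent or runaway clocks cannot drag the computed average arbitrarily far from the correct clocks.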

  15. Precision Experiments at LEP

    NASA Astrophysics Data System (ADS)

    de Boer, W.

    2015-07-01

    The Large Electron-Positron Collider (LEP) established the Standard Model (SM) of particle physics with unprecedented precision, including all its radiative corrections. These led to predictions for the masses of the top quark and Higgs boson, which were beautifully confirmed later on. After these precision measurements the Nobel Prize in Physics was awarded in 1999 jointly to 't Hooft and Veltman "for elucidating the quantum structure of electroweak interactions in physics". Another hallmark of the LEP results were the precise measurements of the gauge coupling constants, which excluded unification of the forces within the SM, but allowed unification within the supersymmetric extension of the SM. This increased the interest in Supersymmetry (SUSY) and Grand Unified Theories, especially since the SM has no candidate for the elusive dark matter, while SUSY provides an excellent candidate for dark matter. In addition, SUSY removes the quadratic divergencies of the SM and predicts the Higgs mechanism from radiative electroweak symmetry breaking with a SM-like Higgs boson having a mass below 130 GeV in agreement with the Higgs boson discovery at the LHC. However, the predicted SUSY particles have not been found either because they are too heavy for the present LHC energy and luminosity or Nature has found alternative ways to circumvent the shortcomings of the SM.

  16. Precision Experiments at LEP

    NASA Astrophysics Data System (ADS)

    de Boer, W.

    2015-09-01

    The Large Electron Positron Collider (LEP) established the Standard Model (SM) of particle physics with unprecedented precision, including all its radiative corrections. These led to predictions for the masses of the top quark and Higgs boson, which were beautifully confirmed later on. After these precision measurements the Nobel Prize in Physics was awarded in 1999 jointly to 't Hooft and Veltman "for elucidating the quantum structure of electroweak interactions in physics". Another hallmark of the LEP results were the precise measurements of the gauge coupling constants, which excluded unification of the forces within the SM, but allowed unification within the supersymmetric extension of the SM. This increased the interest in Supersymmetry (SUSY) and Grand Unified Theories, especially since the SM has no candidate for the elusive dark matter, while Supersymmetry provides an excellent candidate for dark matter. In addition, Supersymmetry removes the quadratic divergencies of the SM and predicts the Higgs mechanism from radiative electroweak symmetry breaking with a SM-like Higgs boson having a mass below 130 GeV in agreement with the Higgs boson discovery at the LHC. However, the predicted SUSY particles have not been found either because they are too heavy for the present LHC energy and luminosity or Nature has found alternative ways to circumvent the shortcomings of the SM.

  17. Standardization of radon measurements. 2. Accuracy and proficiency testing

    SciTech Connect

    Matuszek, J.M.

    1990-01-01

    The accuracy of in situ environmental radon measurement techniques is reviewed, and new data for charcoal-canister, alpha-track (track-etch), and electret detectors are presented. Deficiencies reported at the 1987 meeting in Wurenlingen, Federal Republic of Germany, for measurements using charcoal detectors are confirmed by the new results. Accuracy and precision of the alpha-track measurements were better than in 1987. Electret detectors appear to provide a convenient, accurate, and precise system for measuring radon concentration. The need for a comprehensive, blind proficiency-testing program is discussed.

  18. A novel methodology to reproduce previously recorded six-degree of freedom kinematics on the same diarthrodial joint.

    PubMed

    Moore, Susan M; Thomas, Maribeth; Woo, Savio L-Y; Gabriel, Mary T; Kilger, Robert; Debski, Richard E

    2006-01-01

    The objective of this study was to develop a novel method to more accurately reproduce previously recorded 6-DOF kinematics of the tibia with respect to the femur using robotic technology. Furthermore, the effects of performing only a single registration versus multiple registrations, and of the robot joint configuration, were investigated. A single registration consisted of registering the tibia and femur with respect to the robot at full extension and reproducing all kinematics, while multiple registrations consisted of registering the bones at each flexion angle and reproducing only the kinematics of the corresponding flexion angle. Kinematics of the knee in response to anterior (134 N), combined internal/external (±10 N m), and varus/valgus (±5 N m) loads were collected at 0, 15, 30, 60, and 90 degrees of flexion. A six-axis, serial-articulated robotic manipulator (PUMA Model 762) was calibrated, and the working volume was reduced to improve the robot's accuracy. The effect of the robot joint configuration was determined by performing single and multiple registrations for three selected configurations. For each robot joint configuration, the accuracy in position of the reproduced kinematics improved after multiple registrations (0.7±0.3, 1.2±0.5, and 0.9±0.2 mm, respectively) when compared to only a single registration (1.3±0.9, 2.0±1.0, and 1.5±0.7 mm, respectively) (p<0.05). The accuracy in position of each robot joint configuration was unique, as significant differences were detected between each of the configurations. These data demonstrate that the number of registrations and the robot joint configuration both affect the accuracy of the reproduced kinematics. Therefore, when using robotic technology to reproduce previously recorded kinematics, it may be necessary to perform these analyses for each individual robotic system and for each diarthrodial joint, as different joints will require the robot to be placed in

  19. Reproducing American Sign Language sentences: cognitive scaffolding in working memory.

    PubMed

    Supalla, Ted; Hauser, Peter C; Bavelier, Daphne

    2014-01-01

    The American Sign Language Sentence Reproduction Test (ASL-SRT) requires the precise reproduction of a series of ASL sentences increasing in complexity and length. Error analyses of such tasks provide insight into working memory and scaffolding processes. Data were collected from three groups expected to differ in fluency: deaf children, deaf adults, and hearing adults, all users of ASL. Quantitative (correct/incorrect recall) and qualitative error analyses were performed. Percent correct on the reproduction task supports its sensitivity to fluency, as test performance clearly differed across the three groups studied. A linguistic analysis of errors further documented differing strategies and biases across groups. Subjects' recall projected the affordances and constraints of deep linguistic representations to differing degrees, with subjects resorting to alternate processing strategies when they failed to recall a sentence correctly. A qualitative error analysis allows us to capture generalizations about the relationship between error patterns and the cognitive scaffolding that governs the sentence reproduction process. Highly fluent signers and less-fluent signers share common chokepoints on particular words in sentences. However, they diverge in heuristic strategy. Fluent signers, when they make an error, tend to preserve semantic details while altering morpho-syntactic domains. They produce syntactically correct sentences with meaning equivalent to the to-be-reproduced one, but these are not verbatim reproductions of the original sentence. In contrast, less-fluent signers tend to use a more linear strategy, preserving lexical status and word ordering while omitting local inflections, and occasionally resorting to visuo-motoric imitation. Thus, whereas fluent signers readily use top-down scaffolding in their working memory, less fluent signers fail to do so. Implications for current models of working memory across spoken and signed modalities are considered.

  20. Enhancing reproducibility of ultrasonic measurements by new users

    NASA Astrophysics Data System (ADS)

    Pramanik, Manojit; Gupta, Madhumita; Krishnan, Kajoli Banerjee

    2013-03-01

    An operator's perception influences ultrasound image acquisition and processing. Lower costs are attracting new users to medical ultrasound. Anticipating an increase in this trend, we conducted a study to quantify the variability in ultrasonic measurements made by novice users and identify methods to reduce it. We designed a protocol with four presets and trained four new users to scan and manually measure the head circumference of a fetal phantom with an ultrasound scanner. In the first phase, the users followed this protocol in seven distinct sessions. They then received feedback on the quality of the scans from an expert. In the second phase, two of the users repeated the entire protocol aided by visual cues provided to them during scanning. We performed off-line measurements on all the images using a fully automated algorithm capable of measuring the head circumference from fetal phantom images. The ground truth (198.1±1.6 mm) was based on sixteen scans and measurements made by an expert. Our analysis shows that: (1) the inter-observer variability of manual measurements was 5.5 mm, whereas that of automated measurements was only 0.6 mm in the first phase; (2) consistency of image appearance improved and mean manual measurements were 4-5 mm closer to the ground truth in the second phase; (3) automated measurements were more precise, more accurate, and less sensitive to different presets than manual measurements in both phases. Our results show that visual aids and automation can bring more reproducibility to ultrasonic measurements made by new users.

  1. Highly Parallel, High-Precision Numerical Integration

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2005-04-22

    This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
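
    The tanh-sinh ("double-exponential") transformation underlying such schemes can be sketched in a few lines. The following is an illustrative serial re-implementation at ordinary machine precision, not the authors' parallel high-precision code; the variable change clusters quadrature nodes double-exponentially toward the endpoints, which is why endpoint singularities are tolerated.

```python
import math

def tanh_sinh(f, h=0.05, t_max=4.0):
    """Tanh-sinh (double-exponential) quadrature of f over (0, 1).

    The substitution x = 1 / (1 + exp(-pi*sinh(t))) maps the real line
    onto (0, 1) and makes the trapezoidal rule in t converge extremely
    fast, even when f blows up at an endpoint.
    """
    total = 0.0
    n = int(round(t_max / h))
    for k in range(-n, n + 1):
        t = k * h
        u = math.pi * math.sinh(t)
        x = 1.0 / (1.0 + math.exp(-u))        # node, accurate near 0
        xc = 1.0 / (1.0 + math.exp(u))        # 1 - x, accurate near 1
        w = math.pi * math.cosh(t) * x * xc   # weight dx/dt
        total += w * f(x)
    return h * total

# Integrable endpoint singularity: the integral of x**-0.5 over (0, 1) is 2.
print(tanh_sinh(lambda x: 1.0 / math.sqrt(x)))
```

    With this modest step size the sketch already recovers the singular integral to roughly double precision; the paper's scheme carries the same idea to thousands of digits using arbitrary-precision arithmetic.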

  2. Modelling soil erosion at European scale: towards harmonization and reproducibility

    NASA Astrophysics Data System (ADS)

    Bosco, C.; de Rigo, D.; Dewitte, O.; Poesen, J.; Panagos, P.

    2015-02-01

    Soil erosion by water is one of the most widespread forms of soil degradation. The loss of soil as a result of erosion can lead to decline in organic matter and nutrient contents, breakdown of soil structure and reduction of the water-holding capacity. Measuring soil loss across the whole landscape is impractical and thus research is needed to improve methods of estimating soil erosion with computational modelling, upon which integrated assessment and mitigation strategies may be based. Despite these efforts, the predictive value of existing models is still limited, especially at regional and continental scale, because systematic knowledge of local climatological and soil parameters is often unavailable. A new approach for modelling soil erosion at regional scale is proposed here. It is based on the joint use of low-data-demanding models and innovative techniques for better estimating model inputs. The proposed modelling architecture has at its basis the semantic array programming paradigm and a strong effort towards computational reproducibility. An extended version of the Revised Universal Soil Loss Equation (RUSLE) has been implemented, merging different empirical rainfall-erosivity equations within a climatic ensemble model and adding a new factor for better consideration of soil stoniness within the model. Pan-European soil erosion rates by water have been estimated through the use of publicly available data sets and locally reliable empirical relationships. The accuracy of the results is corroborated by a visual plausibility check (63% of a random sample of grid cells are accurate, 83% at least moderately accurate, bootstrap p ≤ 0.05). A comparison with country-level statistics of pre-existing European soil erosion maps is also provided.
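
    The RUSLE family of models at the core of this architecture is a product of empirical factors; as a minimal sketch (the factor values below are illustrative placeholders, not results of the study):

```python
def rusle_soil_loss(r, k, ls, c, p):
    """Annual soil loss A = R * K * LS * C * P (t ha^-1 yr^-1), where
    R is rainfall-runoff erosivity, K soil erodibility, LS the combined
    slope length and steepness factor, C cover-management, and P support
    practice (the last three are dimensionless)."""
    return r * k * ls * c * p

# Illustrative (hypothetical) factor values only:
print(rusle_soil_loss(r=700.0, k=0.035, ls=1.2, c=0.2, p=1.0))
```

    The extended version described above additionally blends several empirical erosivity equations into an ensemble R factor and adjusts for soil stoniness.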

  3. Modelling soil erosion at European scale: towards harmonization and reproducibility

    NASA Astrophysics Data System (ADS)

    Bosco, C.; de Rigo, D.; Dewitte, O.; Poesen, J.; Panagos, P.

    2014-04-01

    Soil erosion by water is one of the most widespread forms of soil degradation. The loss of soil as a result of erosion can lead to decline in organic matter and nutrient contents, breakdown of soil structure and reduction of the water-holding capacity. Measuring soil loss across the whole landscape is impractical and thus research is needed to improve methods of estimating soil erosion with computational modelling, upon which integrated assessment and mitigation strategies may be based. Despite these efforts, the predictive value of existing models is still limited, especially at regional and continental scale. A new approach for modelling soil erosion at large spatial scale is proposed here. It is based on the joint use of low-data-demanding models and innovative techniques for better estimating model inputs. The proposed modelling architecture has at its basis the semantic array programming paradigm and a strong effort towards computational reproducibility. An extended version of the Revised Universal Soil Loss Equation (RUSLE) has been implemented, merging different empirical rainfall-erosivity equations within a climatic ensemble model and adding a new factor for better consideration of soil stoniness within the model. Pan-European soil erosion rates by water have been estimated through the use of publicly available datasets and locally reliable empirical relationships. The accuracy of the results is corroborated by a visual plausibility check (63% of a random sample of grid cells are accurate, 83% at least moderately accurate, bootstrap p ≤ 0.05). A comparison with country-level statistics of pre-existing European maps of soil erosion by water is also provided.

  4. Precisely Patterned Growth of Ultra-Long Single-Crystalline Organic Microwire Arrays for Near-Infrared Photodetectors.

    PubMed

    Wang, Hui; Deng, Wei; Huang, Liming; Zhang, Xiujuan; Jie, Jiansheng

    2016-03-01

    Owing to their extraordinary properties, small-molecule organic micro/nanocrystals have been identified as a promising system for constructing new-generation organic electronic and optoelectronic devices. Alignment and patterning of organic micro/nanocrystals at desired locations are prerequisites for their practical device applications. Though various methods have been developed to control their directional growth and alignment, high-throughput, precise positioning and patterning of organic micro/nanocrystals at desired locations remain a challenge. Here, we report a photoresist-assisted evaporation method for large-area growth of precisely positioned ultralong methyl-squarylium (MeSq) microwire (MW) arrays. Positions as well as alignment densities of the MWs can be precisely controlled with the aid of a photoresist template fabricated by a photolithography process. This strategy enables large-scale fabrication of organic MW arrays with nearly the same accuracy, uniformity, and reliability as photolithography. Near-infrared (NIR) photodetectors based on the MeSq MW arrays show excellent photoresponse behavior and are capable of detecting 808 nm light with high stability and reproducibility. The high on/off ratio of 1600 is significantly better than that of other organic nanostructure-based optical switches. More importantly, this strategy can be readily extended to other organic molecules, revealing the great potential of the photoresist-assisted evaporation method for future high-performance organic optoelectronic devices.

  5. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences their clinical viability in implant restorations. The aim of this study is to compare, by three-dimensional analysis, the accuracy of gypsum models acquired from conventional implant impressions to digitally milled models created by direct digitalization. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported into an inspection software. The datasets were aligned to the reference dataset by a repeated best-fit algorithm and 10 specified contact locations of interest were measured as mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, and the horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model, with mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had accuracy comparable to gypsum models from conventional impressions. However, differences in the fossae and in vertical displacement of the implant position between the gypsum and digitally milled models and the reference model were statistically significant (p < 0.001 and p = 0.020, respectively). PMID:24720423

  6. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
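
    The phase-II regression refinement of the phase-I estimates described above is the classical double-sampling regression estimator; a minimal sketch with made-up numbers (not the AVRI data):

```python
def double_sampling_regression(x_phase1, x_phase2, y_phase2):
    """Two-phase (double) sampling regression estimator of the mean of y.

    x_phase1:           cheap auxiliary values on the large phase-I sample
    x_phase2, y_phase2: paired auxiliary and ground-truth values on the
                        smaller phase-II subsample
    """
    n1, n2 = len(x_phase1), len(x_phase2)
    xbar1 = sum(x_phase1) / n1
    xbar2 = sum(x_phase2) / n2
    ybar2 = sum(y_phase2) / n2
    # ordinary least-squares slope of y on x within phase II
    sxy = sum((x - xbar2) * (y - ybar2) for x, y in zip(x_phase2, y_phase2))
    sxx = sum((x - xbar2) ** 2 for x in x_phase2)
    b = sxy / sxx
    # adjust the phase-II mean using the better-known phase-I mean of x
    return ybar2 + b * (xbar1 - xbar2)
```

    The adjustment term b * (xbar1 - xbar2) is what lets the cheap photointerpreted sample sharpen the ground-based estimate.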

  7. Reproducibility of cerebral tissue oxygen saturation measurements by near-infrared spectroscopy in newborn infants

    NASA Astrophysics Data System (ADS)

    Jenny, Carmen; Biallas, Martin; Trajkovic, Ivo; Fauchère, Jean-Claude; Bucher, Hans Ulrich; Wolf, Martin

    2011-09-01

    Early detection of cerebral hypoxemia is an important aim in neonatology. A relevant parameter to assess brain oxygenation may be the cerebral tissue oxygen saturation (StO2) measured by near-infrared spectroscopy (NIRS). So far the reproducibility of StO2 measurements was too low for clinical application, probably due to inhomogeneities. The aim of this study was to test a novel sensor geometry which reduces the influence of inhomogeneities. Thirty clinically stable newborn infants, with a gestational age of median 33.9 (range 26.9 to 41.9) weeks, birth weight of 2220 (820 to 4230) g, postnatal age of 5 (1 to 71) days were studied. At least four StO2 measurements of 1 min duration were carried out using NIRS on the lateral head. The sensor was repositioned between measurements. Reproducibility was calculated by a linear mixed effects model. The mean StO2 was 79.99 +/- 4.47% with a reproducibility of 2.76% and a between-infant variability of 4.20%. Thus, the error of measurement only accounts for 30.1% of the variability. The novel sensor geometry leads to considerably more precise measurements compared to previous studies with, e.g., ~5% reproducibility for the NIRO 300. The novel StO2 values hence have a higher clinical relevance.
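
    The split between measurement error and between-infant variability can be reproduced from variance components: squaring the quoted values gives 2.76^2 / (2.76^2 + 4.20^2) ≈ 0.30, consistent with the 30.1% share reported. A minimal balanced one-way variance-components sketch (illustrative only, not the paper's linear mixed-effects code):

```python
def variance_components(groups):
    """Within- and between-group standard deviations from repeated
    measurements (balanced one-way random-effects ANOVA).

    groups: list of lists, one inner list of repeated measurements per
    subject, all of equal length. Returns (within_sd, between_sd).
    """
    k = len(groups)                  # number of subjects
    n = len(groups[0])               # repeats per subject (balanced design)
    means = [sum(g) / n for g in groups]
    grand = sum(means) / k
    # mean squares within and between subjects
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    between_var = max((msb - msw) / n, 0.0)
    return msw ** 0.5, between_var ** 0.5
```

    The within-group standard deviation plays the role of the reproducibility figure; the between-group one, of the between-infant variability.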

  8. Galvanometer deflection: a precision high-speed system.

    PubMed

    Jablonowski, D P; Raamot, J

    1976-06-01

    An X-Y galvanometer deflection system capable of high precision in a random access mode of operation is described. Beam positional information in digitized form is obtained by employing a Ronchi grating with a sophisticated optical detection scheme. This information is used in a control interface to locate the beam to the required precision. The system is characterized by high accuracy at maximum speed and is designed for operation in a variable environment, with particular attention placed on thermal insensitivity.

  9. Galvanometer deflection: a precision high-speed system.

    PubMed

    Jablonowski, D P; Raamot, J

    1976-06-01

    An X-Y galvanometer deflection system capable of high precision in a random access mode of operation is described. Beam positional information in digitized form is obtained by employing a Ronchi grating with a sophisticated optical detection scheme. This information is used in a control interface to locate the beam to the required precision. The system is characterized by high accuracy at maximum speed and is designed for operation in a variable environment, with particular attention placed on thermal insensitivity. PMID:20165203

  10. Precision electroweak measurements

    SciTech Connect

    Demarteau, M.

    1996-11-01

    Recent electroweak precision measurements from e+e- and p-pbar colliders are presented. Some emphasis is placed on recent developments in the heavy-flavor sector. The measurements are compared to predictions from the Standard Model of electroweak interactions. All results are found to be consistent with the Standard Model. The indirect constraint on the top quark mass from all measurements is in excellent agreement with the direct measurements of the top quark mass. Using the world's electroweak data in conjunction with the current measurement of the top quark mass, the constraints on the Higgs mass are discussed.

  11. Precision Robotic Assembly Machine

    ScienceCinema

    None

    2016-07-12

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  12. Instrument Attitude Precision Control

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2004-01-01

    A novel approach is presented in this paper to analyze attitude precision and control for an instrument gimbaled to a spacecraft subject to an internal disturbance caused by a moving component inside the instrument. Nonlinear differential equations of motion for some sample cases are derived and solved analytically to gain insight into the influence of the disturbance on the attitude pointing error. A simple control law is developed to eliminate the instrument pointing error caused by the internal disturbance. Several cases are presented to demonstrate and verify the concept presented in this paper.

  13. Precision Robotic Assembly Machine

    SciTech Connect

    2009-08-14

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  14. Precision mass measurements

    NASA Astrophysics Data System (ADS)

    Gläser, M.; Borys, M.

    2009-12-01

    Mass as a physical quantity and its measurement are described. After some historical remarks, a short summary of the concept of mass in classical and modern physics is given. Principles and methods of mass measurement, for example as energy measurement or as measurement of weight forces and of forces caused by acceleration, are discussed. Precision mass measurement by comparing mass standards using balances is described in detail. The measurement of atomic masses relative to 12C is briefly reviewed, as are experiments and recent discussions concerning a future new definition of the kilogram, the SI unit of mass.

  15. On the Validation of ENSEMBLES Regional Climate Simulations in Terms of Reproducing Annual Cycle

    NASA Astrophysics Data System (ADS)

    Halenka, T.; Skalak, P.; Huszar, P.; Belda, M.

    2009-09-01

    There are many aspects to the validation of climate models. In addition to standard statistical characteristics, a more in-depth analysis of annual-cycle performance can provide more information on the ability of the models to properly reproduce the physical processes that strongly affect the behavior of climate parameters during the year. Global Circulation Models (GCMs) can reproduce climate features on large scales, but their accuracy decreases when proceeding from continental to regional and local scales because of their limited resolution, and thus on the regional scale they are often rather poor at reproducing the annual cycle. A more detailed analysis of the 15 RCMs used in the EC FP6 IP ENSEMBLES ERA-40-driven experiment at 25 km resolution for the period 1961-2000 in different PRUDENCE regions provides a comparison of the models and their validation in terms of annual-cycle reproduction. While for temperature the performance of the models is mostly very good and quite consistent, some models have rather significant problems in reproducing the annual cycle of precipitation in some regions.
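
    The annual-cycle validation itself reduces to comparing monthly climatologies; a minimal sketch (function names are ours, purely illustrative):

```python
def annual_cycle(monthly_series):
    """Mean annual cycle (12 monthly climatology values) from a series
    of monthly values covering whole years, e.g. 480 months of
    1961-2000 model output."""
    assert len(monthly_series) % 12 == 0
    years = len(monthly_series) // 12
    # average every 12th value: all Januaries, all Februaries, ...
    return [sum(monthly_series[m::12]) / years for m in range(12)]

def cycle_bias(model_series, reference_series):
    """Month-by-month difference between a model's annual cycle and a
    reference (e.g. reanalysis-driven or observed) annual cycle."""
    mc = annual_cycle(model_series)
    rc = annual_cycle(reference_series)
    return [m - r for m, r in zip(mc, rc)]
```

    Summaries of such month-by-month biases are what separate the models that capture the seasonal precipitation cycle from those that do not.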

  16. Anatomical Reproducibility of a Head Model Molded by a Three-dimensional Printer

    PubMed Central

    KONDO, Kosuke; NEMOTO, Masaaki; MASUDA, Hiroyuki; OKONOGI, Shinichi; NOMOTO, Jun; HARADA, Naoyuki; SUGO, Nobuo; MIYAZAKI, Chikao

    We prepared rapid prototyping models of heads with unruptured cerebral aneurysm based on image data of computed tomography angiography (CTA) using a three-dimensional (3D) printer. The objective of this study was to evaluate the anatomical reproducibility and accuracy of these models by comparison with the CTA images on a monitor. The subjects were 22 patients with unruptured cerebral aneurysm who underwent preoperative CTA. Reproducibility of the microsurgical anatomy of skull bone and arteries, the length and thickness of the main arteries, and the size of cerebral aneurysm were compared between the CTA image and rapid prototyping model. The microsurgical anatomy and arteries were favorably reproduced, apart from a few minute regions, in the rapid prototyping models. No significant difference was noted in the measured lengths of the main arteries between the CTA image and rapid prototyping model, but errors were noted in their thickness (p < 0.001). A significant difference was also noted in the longitudinal diameter of the cerebral aneurysm (p < 0.01). Regarding the CTA image as the gold standard, reproducibility of the microsurgical anatomy of skull bone and main arteries was favorable in the rapid prototyping models prepared using a 3D printer. It was concluded that these models are useful tools for neurosurgical simulation. The thickness of the main arteries and size of cerebral aneurysm should be comprehensively judged including other neuroimaging in consideration of errors. PMID:26119896

  17. Anatomical Reproducibility of a Head Model Molded by a Three-dimensional Printer.

    PubMed

    Kondo, Kosuke; Nemoto, Masaaki; Masuda, Hiroyuki; Okonogi, Shinichi; Nomoto, Jun; Harada, Naoyuki; Sugo, Nobuo; Miyazaki, Chikao

    2015-01-01

    We prepared rapid prototyping models of heads with unruptured cerebral aneurysm based on image data of computed tomography angiography (CTA) using a three-dimensional (3D) printer. The objective of this study was to evaluate the anatomical reproducibility and accuracy of these models by comparison with the CTA images on a monitor. The subjects were 22 patients with unruptured cerebral aneurysm who underwent preoperative CTA. Reproducibility of the microsurgical anatomy of skull bone and arteries, the length and thickness of the main arteries, and the size of cerebral aneurysm were compared between the CTA image and rapid prototyping model. The microsurgical anatomy and arteries were favorably reproduced, apart from a few minute regions, in the rapid prototyping models. No significant difference was noted in the measured lengths of the main arteries between the CTA image and rapid prototyping model, but errors were noted in their thickness (p < 0.001). A significant difference was also noted in the longitudinal diameter of the cerebral aneurysm (p < 0.01). Regarding the CTA image as the gold standard, reproducibility of the microsurgical anatomy of skull bone and main arteries was favorable in the rapid prototyping models prepared using a 3D printer. It was concluded that these models are useful tools for neurosurgical simulation. The thickness of the main arteries and size of cerebral aneurysm should be comprehensively judged including other neuroimaging in consideration of errors.

  18. Indirect monitoring shot-to-shot shock waves strength reproducibility during pump-probe experiments

    NASA Astrophysics Data System (ADS)

    Pikuz, T. A.; Faenov, A. Ya.; Ozaki, N.; Hartley, N. J.; Albertazzi, B.; Matsuoka, T.; Takahashi, K.; Habara, H.; Tange, Y.; Matsuyama, S.; Yamauchi, K.; Ochante, R.; Sueda, K.; Sakata, O.; Sekine, T.; Sato, T.; Umeda, Y.; Inubushi, Y.; Yabuuchi, T.; Togashi, T.; Katayama, T.; Yabashi, M.; Harmand, M.; Morard, G.; Koenig, M.; Zhakhovsky, V.; Inogamov, N.; Safronova, A. S.; Stafford, A.; Skobelev, I. Yu.; Pikuz, S. A.; Okuchi, T.; Seto, Y.; Tanaka, K. A.; Ishikawa, T.; Kodama, R.

    2016-07-01

    We present an indirect method of estimating the strength of a shock wave, allowing online monitoring of its reproducibility in each laser shot. This method is based on a shot-to-shot measurement of the X-ray emission from the ablated plasma by a high-resolution, spatially resolved focusing spectrometer. An optical pump laser with an energy of 1.0 J and a pulse duration of ~660 ps was used to irradiate solid targets or foils of various thicknesses containing oxygen, aluminum, iron, and tantalum. The high sensitivity and resolving power of the X-ray spectrometer allowed spectra to be obtained on each laser shot and fluctuations of the spectral intensity emitted by different plasmas to be controlled with an accuracy of ~2%, implying an accuracy in the derived electron plasma temperature of 5%-10% in pump-probe high-energy-density science experiments. At nano- and sub-nanosecond laser pulse durations with relatively low laser intensities and a ratio Z/A ~ 0.5, the electron temperature follows Te ~ I_las^(2/3). Thus, measurements of the electron plasma temperature allow indirect estimation of the laser flux on the target and control of its shot-to-shot fluctuation. Knowing the laser flux intensity and its fluctuation makes it possible to monitor the shot-to-shot reproducibility of shock-wave strength with high accuracy.
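
    Because Te ~ I_las^(2/3), relative errors propagate as dI/I = (3/2) dTe/Te, which is how a temperature measurement bounds the flux fluctuation; a one-line sketch using the quoted 5%-10% temperature accuracy (our illustration of the stated scaling, not the authors' analysis code):

```python
def intensity_fluctuation(rel_te_error):
    """For Te proportional to I**(2/3), a relative temperature error
    dTe/Te maps to a relative intensity error dI/I = (3/2) * dTe/Te."""
    return 1.5 * rel_te_error

# A 5%-10% temperature uncertainty bounds the laser-flux fluctuation:
print(intensity_fluctuation(0.05), intensity_fluctuation(0.10))
```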

  19. Precision and power grip priming by observed grasping.

    PubMed

    Vainio, Lari; Tucker, Mike; Ellis, Rob

    2007-11-01

    The coupling of observed grasping stimuli and subsequent grasp execution was explored in normal participants. Participants were asked to respond with their right or left hand to the accuracy of an observed (dynamic) grasp while holding precision or power grasp response devices in their hands (e.g., precision device/right hand; power device/left hand). The observed hand made either accurate or inaccurate precision or power grasps, and participants signalled the accuracy of the observed grip by making one or the other response depending on instructions. Responses were faster when they matched the observed grip type. The two grasp types differed in their sensitivity to the end-state (i.e., accuracy) of the observed grip. The end-state influenced the power grasp congruency effect more than the precision grasp effect when the observed hand performed the grasp without any goal object (Experiments 1 and 2). However, the end-state also influenced the precision grip congruency effect (Experiment 3) when the action was object-directed. The data are interpreted as behavioural evidence of automatic imitation coding of observed actions. The study suggests that, in goal-oriented imitation coding, the context of an action (e.g., being object-directed) is a more important factor in coding precision grips than power grips.

  20. Precision flyer initiator

    DOEpatents

    Frank, Alan M.; Lee, Ronald S.

    1998-01-01

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel so that it does not predetonate the HE pellet before the flyer arrives; the shock wave traveling through the barrel thus fails to reach the HE pellet before the flyer strikes it, and a substantially spherical detonation wave is formed in the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices.

  1. Precision flyer initiator

    DOEpatents

    Frank, A.M.; Lee, R.S.

    1998-05-26

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel so that it does not predetonate the HE pellet before the flyer arrives; the shock wave traveling through the barrel thus fails to reach the HE pellet before the flyer strikes it, and a substantially spherical detonation wave is formed in the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices. 10 figs.

  2. Precision Joining Center

    SciTech Connect

    Powell, J.W.; Westphal, D.A.

    1991-08-01

    A workshop to obtain input from industry on the establishment of the Precision Joining Center (PJC) was held on July 10--12, 1991. The PJC is a center for training Joining Technologists in advanced joining techniques and concepts in order to promote the competitiveness of US industry. The center will be established as part of the DOE Defense Programs Technology Commercialization Initiative, and operated by EG&G Rocky Flats in cooperation with the American Welding Society and the Colorado School of Mines Center for Welding and Joining Research. The overall objectives of the workshop were to validate the need for a Joining Technologist to fill the gap between the welding operator and the welding engineer, and to assure that the PJC will train individuals to satisfy that need. The consensus of the workshop participants was that the Joining Technologist is a necessary position in industry, and is currently used, with some variation, by many companies. It was agreed that the PJC core curriculum, as presented, would produce a Joining Technologist of value to industries that use precision joining techniques. The advantage of the PJC would be to train the Joining Technologist much more quickly and more completely. The proposed emphasis of the PJC curriculum on equipment-intensive and hands-on training was judged to be essential.

  3. Precision measurements in supersymmetry

    SciTech Connect

    Feng, J.L.

    1995-05-01

    Supersymmetry is a promising framework in which to explore extensions of the standard model. If candidates for supersymmetric particles are found, precision measurements of their properties will then be of paramount importance. The prospects for such measurements and their implications are the subject of this thesis. If charginos are produced at the LEP II collider, they are likely to be one of the few available supersymmetric signals for many years. The author considers the possibility of determining fundamental supersymmetry parameters in such a scenario. The study is complicated by the dependence of observables on a large number of these parameters. He proposes a straightforward procedure for disentangling these dependences and demonstrates its effectiveness by presenting a number of case studies at representative points in parameter space. In addition to determining the properties of supersymmetric particles, precision measurements may also be used to establish that newly-discovered particles are, in fact, supersymmetric. Supersymmetry predicts quantitative relations among the couplings and masses of superparticles. The author discusses tests of such relations at a future e{sup +}e{sup {minus}} linear collider, using measurements that exploit the availability of polarizable beams. Stringent tests of supersymmetry from chargino production are demonstrated in two representative cases, and fermion and neutralino processes are also discussed.

  4. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy.

    PubMed

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy.
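
    The precision benefit of informative priors described above can be illustrated with a toy conjugate example. A sketch under assumed, hypothetical numbers (a Beta-binomial mortality model, not the study's actual models or data):

```python
# Hypothetical sketch of the precision effect of informative priors:
# the posterior for a binomial mortality probability under a vague
# Beta(1, 1) prior versus an informative Beta(2, 38) prior (prior mean 5%).

def beta_posterior_sd(a: float, b: float, deaths: int, survivors: int) -> float:
    """Standard deviation of the Beta(a + deaths, b + survivors) posterior."""
    a, b = a + deaths, b + survivors
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return var ** 0.5

deaths, survivors = 3, 47  # small hypothetical sample
sd_vague = beta_posterior_sd(1, 1, deaths, survivors)
sd_informative = beta_posterior_sd(2, 38, deaths, survivors)
print(sd_vague > sd_informative)  # the informative prior gives a tighter posterior
```

    With sparse data the informative prior roughly halves the posterior standard deviation; whether it also improves accuracy depends on how well the prior matches the truth, which is the study's point.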

  5. New High Precision Linelist of H_3^+

    NASA Astrophysics Data System (ADS)

    Hodges, James N.; Perry, Adam J.; Markus, Charles; Jenkins, Paul A., II; Kocheril, G. Stephen; McCall, Benjamin J.

    2014-06-01

    As the simplest polyatomic molecule, H_3^+ serves as an ideal benchmark for theoretical predictions of rovibrational energy levels. By strictly ab initio methods, the current accuracy of theoretical predictions is limited to an impressive one hundredth of a wavenumber, which has been accomplished by consideration of relativistic, adiabatic, and non-adiabatic corrections to the Born-Oppenheimer PES. More accurate predictions rely on a treatment of quantum electrodynamic effects, which have improved the accuracies of vibrational transitions in molecular hydrogen to a few MHz. High precision spectroscopy is of the utmost importance for extending the frontiers of ab initio calculations, as improved precision and accuracy enable more rigorous testing of calculations. Additionally, measuring rovibrational transitions of H_3^+ can be used to predict its forbidden rotational spectrum. Though the existing data can be used to determine rotational transition frequencies, the uncertainties are prohibitively large. Acquisition of rovibrational spectra with smaller experimental uncertainty would enable a spectroscopic search for the rotational transitions. The technique Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy, or NICE-OHVMS, has previously been used to precisely and accurately measure transitions of H_3^+, CH_5^+, and HCO^+ to sub-MHz uncertainty. A second module for our optical parametric oscillator has extended our instrument's frequency coverage from 3.2-3.9 μm to 2.5-3.9 μm. With extended coverage, we have improved our previous linelist by measuring additional transitions. O. L. Polyansky, et al. Phil. Trans. R. Soc. A (2012), 370, 5014--5027. J. Komasa, et al. J. Chem. Theor. Comp. (2011), 7, 3105--3115. C. M. Lindsay, B. J. McCall, J. Mol. Spectrosc. (2001), 210, 66--83. J. N. Hodges, et al. J. Chem. Phys. (2013), 139, 164201.

  6. Accuracy and versatility of the NIST M48 coordinate measuring machine

    NASA Astrophysics Data System (ADS)

    Stoup, John R.; Doiron, Theodore D.

    2001-10-01

    NIST is continuing to develop the ability to perform accurate, traceable measurements on a wide range of artifacts using a very precise, error-mapped coordinate measuring machine (CMM). The NIST M48 CMM has promised accuracy and versatility for many years. Recently, these promises have been realized in a reliable, reproducible way for many types of 1D, 2D, and 3D engineering metrology artifacts. The versatility of the machine has permitted state-of-the-art, accurate measurements of one-meter step gages and precision ball plates as well as 500 micrometer holes and small precision parts made of aluminum or glass. To accomplish this wide range of measurements, the CMM has required extensive assessment of machine positioning and straightness errors, probe response, machine motion control and speed, environmental stability, and measurement procedures. The CMM has been used as an absolute instrument and as a very complicated comparator. The data collection techniques have been designed to acquire statistical information on the machine and probe performance and to evaluate and remove any potential thermal drift in the machine coordinate system during operation. This paper will present the data collection and measurement techniques used by NIST to achieve excellent measurement results for gage blocks, long end standards, step gages, ring and plug gages, small holes, ball plates, and angular artifacts. Comparison data with existing independent primary measuring instruments will also be presented to show agreement and correlation with those historical methods. Current plans for incorporating the CMM into existing measurement services, such as plain ring gages, large plug gages, and long end standards, will be presented along with other proposed development of this CMM.

  7. Accuracy of laser beam center and width calculations.

    PubMed

    Mana, G; Massa, E; Rovera, A

    2001-03-20

    The application of lasers in high-precision measurements and the demand for accuracy make the plane-wave model of laser beams unsatisfactory. Measurements of the variance of the transverse components of the photon impulse are essential for wavelength determination. Accuracy evaluation of the relevant calculations is thus an integral part of the assessment of the wavelength of stabilized-laser radiation. We present a propagation-of-error analysis on variance calculations when digitized intensity profiles are obtained by means of silicon video cameras. Image clipping criteria are obtained that maximize the accuracy of the computed result.
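
    The variance-based beam parameters discussed above reduce to first and second moments of the digitized intensity profile. A minimal one-dimensional sketch with made-up pixel values (the function name and pixel pitch are hypothetical, and no clipping criteria are applied):

```python
# Sketch: centroid (first moment) and variance (second central moment) of a
# digitized one-dimensional laser intensity profile, as recorded by a camera.

def centroid_and_variance(intensity, pixel_pitch=1.0):
    """Return (centroid, variance) of an intensity-weighted position."""
    total = sum(intensity)
    xs = [i * pixel_pitch for i in range(len(intensity))]
    mean = sum(x * v for x, v in zip(xs, intensity)) / total
    var = sum((x - mean) ** 2 * v for x, v in zip(xs, intensity)) / total
    return mean, var

profile = [0, 1, 4, 9, 12, 9, 4, 1, 0]  # symmetric test profile
c, v = centroid_and_variance(profile)
print(c)  # centroid falls at the symmetry point, index 4
```

    The paper's propagation-of-error analysis concerns how digitization, noise, and image clipping bias exactly these moment estimates.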

  8. Development of a precision large deployable antenna

    NASA Astrophysics Data System (ADS)

    Iwata, Yoji; Yamamoto, Kazuo; Noda, Takahiko; Tamai, Yasuo; Ebisui, Takashi; Miura, Koryo; Takano, Tadashi

    This paper describes the results of a study of a precision large deployable antenna for the space VLBI satellite 'MUSES-B'. An antenna with high gain and pointing accuracy is required for the mission objective. The required frequency bands are 22, 5 and 1.6 GHz. The required aperture diameter of the reflector is 10 meters. A displaced-axis Cassegrain antenna is adopted with a mesh reflector formed in a tension truss concept. Analysis shows the possibility of achieving an aperture efficiency of 60 percent at 22.15 GHz and a surface accuracy of 0.5 mm rms. A one-fourth scale model of the reflector has been assembled in order to verify the design and clarify problems in the manufacturing and assembly processes.

  9. Precise autofocusing microscope with rapid response

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Sheng; Jiang, Sheng-Hong

    2015-03-01

    Rapid on-line or off-line automated vision inspection is a critical operation in manufacturing. Accordingly, this study designs and characterizes a novel precise optics-based autofocusing microscope with a rapid response and no reduction in focusing accuracy. In contrast to conventional optics-based autofocusing microscopes using the centroid method, the proposed microscope comprises a high-speed rotating optical diffuser, by which the variation of the image centroid position is reduced and the focusing response is consequently improved. The proposed microscope is characterized and verified experimentally using a laboratory-built prototype. The experimental results show that, compared to conventional optics-based autofocusing microscopes, the proposed microscope achieves a more rapid response with no reduction in focusing accuracy. Consequently, the proposed microscope represents another solution for both existing and emerging industrial applications of automated vision inspection.

  10. Accuracy of grading of urothelial carcinoma on urine cytology: an analysis of interobserver and intraobserver agreement

    PubMed Central

    Reid, Michelle D; Osunkoya, Adeboye O; Siddiqui, Momin T; Looney, Stephen W

    2012-01-01

    Background: Urine samples of known urothelial carcinoma were independently graded by 3 pathologists with (MS, MR) and without (AO) fellowship training in cytopathology using a modified version of the 2004 2-tiered World Health Organization classification system. By measuring interobserver and intraobserver agreement among pathologists, compared with the gold standard of the biopsy/resection specimen, the accuracy and reproducibility of grading in urine was determined. Methods: 44 urine cytology samples were graded as low- or high-grade by 3 pathologists with a 2-3 week interval between gradings. Pathologists were blinded to their own and others’ grades and to the histologic diagnoses. The kappa coefficient was used to measure interobserver and intraobserver agreement among pathologists. Accuracy was measured by percentage agreement with the biopsy/resection separately for each pathologist, and for all pathologists and occasions combined. Results: The overall accuracy was 77% (95% C.I., 72% - 82%). Pathologist AO was significantly more accurate than MR on occasions 1 (p = 0.006) and 2 (p = 0.039). No other significant differences were found among the observers. Interobserver agreement as measured by the kappa coefficient was unacceptably low, with all but one of the kappa values being less than 0.40, the cutoff for a “fair” degree of agreement. Intraobserver agreement, as measured by the kappa coefficient, was adequate. Conclusions: Our study underscores the lack of precision and the subjective nature of grading urothelial carcinoma on urine samples. There was poor inter- and intraobserver agreement among pathologists despite fellowship training in cytopathology. Clinicians and cytopathologists should be mindful of this pitfall and avoid grading urothelial carcinoma on urine samples, especially since grading may impact patient management. PMID:23119105
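
    The agreement statistic used in studies like this, Cohen's kappa, can be sketched for a 2x2 low/high-grade table. The counts below are hypothetical, not the study's data:

```python
# Illustrative computation of Cohen's kappa for two raters grading samples
# as low- or high-grade (chance-corrected agreement).

def cohens_kappa(table):
    """table[i][j] = number of samples rater A graded i and rater B graded j."""
    n = sum(sum(row) for row in table)
    p_observed = sum(table[i][i] for i in range(len(table))) / n
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    p_chance = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical: 44 samples; raters agree on 15 low-grade and 16 high-grade calls.
kappa = cohens_kappa([[15, 5], [8, 16]])
print(round(kappa, 2))
```

    Despite ~70% raw agreement, this hypothetical pair lands just above the 0.40 "fair" cutoff once chance agreement is removed, which is why percentage agreement alone overstates reliability.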

  11. Visual inspection reliability for precision manufactured parts

    SciTech Connect

    See, Judi E.

    2015-09-04

    Sandia National Laboratories conducted an experiment for the National Nuclear Security Administration to determine the reliability of visual inspection of precision manufactured parts used in nuclear weapons. Visual inspection has been extensively researched since the early 20th century; however, the reliability of visual inspection for nuclear weapons parts has not been addressed. In addition, the efficacy of using inspector confidence ratings to guide multiple inspections in an effort to improve overall performance accuracy is unknown. Further, the workload associated with inspection has not been documented, and newer measures of stress have not been applied.

  12. Precision Electroforming For Optical Disk Manufacturing

    NASA Astrophysics Data System (ADS)

    Rodia, Carl M.

    1985-04-01

    Precision electroforming in the replication of optical discs is discussed, with an overview of electroforming technology capabilities, limitations, and tolerance criteria. The use of expendable and reusable mandrels is treated, along with techniques for resist master preparation and processing. A review of applications and common reasons for success and failure is offered. Problems such as tensile/compressive stress, roughness, and flatness are discussed. Advice is given on approaches, classic and novel, for remedying and avoiding specific problems. An abridged process description of optical memory disk mold electroforming is presented, from resist master through metallization and electroforming. Emphasis is placed on methods of achieving accuracy and quality assurance.

  13. The GBT precision telescope control system

    NASA Astrophysics Data System (ADS)

    Prestage, Richard M.; Constantikes, Kim T.; Balser, Dana S.; Condon, James J.

    2004-10-01

    The NRAO Robert C. Byrd Green Bank Telescope (GBT) is a 100m diameter advanced single dish radio telescope designed for a wide range of astronomical projects with special emphasis on precision imaging. Open-loop adjustments of the active surface, and real-time corrections to pointing and focus on the basis of structural temperatures, already allow observations at frequencies up to 50GHz. Our ultimate goal is to extend the observing frequency limit up to 115GHz; this will require a two-dimensional tracking error better than 1.3", and an rms surface accuracy better than 210μm. The Precision Telescope Control System project has two main components. One aspect is the continued deployment of appropriate metrology systems, including temperature sensors, inclinometers, laser rangefinders and other devices. An improved control system architecture will harness this measurement capability with the existing servo systems, to deliver the precision operation required. The second aspect is the execution of a series of experiments to identify, understand and correct the residual pointing and surface accuracy errors. These can have multiple causes, many of which depend on variable environmental conditions. A particularly novel approach is to solve simultaneously for gravitational, thermal and wind effects in the development of the telescope pointing and focus tracking models. Our precision temperature sensor system has already allowed us to compensate for thermal gradients in the antenna, which were previously responsible for the largest "non-repeatable" pointing and focus tracking errors. We are currently targeting the effects of wind as the next, currently uncompensated, source of error.

  14. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions miss an important point: that reticence is a matter not only of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate-related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  15. Truss Assembly and Welding by Intelligent Precision Jigging Robots

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2014-01-01

    This paper describes an Intelligent Precision Jigging Robot (IPJR) prototype that enables the precise alignment and welding of titanium space telescope optical benches. The IPJR, equipped with micron-accuracy sensors and actuators, worked in tandem with a lower-precision remote-controlled manipulator. The combined system assembled and welded a 2 m truss from stock titanium components. The calibration of the IPJR, and the difference between the predicted and as-built truss dimensions, identified additional sources of error that should be addressed in the next generation of IPJRs in 2D and 3D.

  16. An Open Science and Reproducible Research Primer for Landscape Ecologists

    EPA Science Inventory

    In recent years many funding agencies, some publishers, and even the United States government have enacted policies that encourage open science and strive for reproducibility; however, the knowledge and skills to implement open science and enable reproducible research are not yet...

  17. 46 CFR 56.30-3 - Piping joints (reproduces 110).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 2 2012-10-01 2012-10-01 false Piping joints (reproduces 110). 56.30-3 Section 56.30-3... APPURTENANCES Selection and Limitations of Piping Joints § 56.30-3 Piping joints (reproduces 110). The type of piping joint used shall be suitable for the design conditions and shall be selected with consideration...

  18. 46 CFR 56.30-3 - Piping joints (reproduces 110).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 2 2011-10-01 2011-10-01 false Piping joints (reproduces 110). 56.30-3 Section 56.30-3... APPURTENANCES Selection and Limitations of Piping Joints § 56.30-3 Piping joints (reproduces 110). The type of piping joint used shall be suitable for the design conditions and shall be selected with consideration...

  19. 46 CFR 56.30-3 - Piping joints (reproduces 110).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 2 2014-10-01 2014-10-01 false Piping joints (reproduces 110). 56.30-3 Section 56.30-3... APPURTENANCES Selection and Limitations of Piping Joints § 56.30-3 Piping joints (reproduces 110). The type of piping joint used shall be suitable for the design conditions and shall be selected with consideration...

  20. 46 CFR 56.30-3 - Piping joints (reproduces 110).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 2 2013-10-01 2013-10-01 false Piping joints (reproduces 110). 56.30-3 Section 56.30-3... APPURTENANCES Selection and Limitations of Piping Joints § 56.30-3 Piping joints (reproduces 110). The type of piping joint used shall be suitable for the design conditions and shall be selected with consideration...

  1. 10 CFR 95.43 - Authority to reproduce.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Authority to reproduce. 95.43 Section 95.43 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) FACILITY SECURITY CLEARANCE AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION AND RESTRICTED DATA Control of Information § 95.43 Authority to reproduce. (a) Each...

  2. Precision Joining Center

    NASA Technical Reports Server (NTRS)

    Powell, John W.

    1991-01-01

    The establishment of a Precision Joining Center (PJC) is proposed. The PJC will be a cooperatively operated center with participation from U.S. private industry, the Colorado School of Mines, and various government agencies, including the Department of Energy's Nuclear Weapons Complex (NWC). The PJC's primary mission will be as a training center for advanced joining technologies. This will accomplish the following objectives: (1) it will provide an effective mechanism to transfer joining technology from the NWC to private industry; (2) it will provide a center for testing new joining processes for the NWC and private industry; and (3) it will provide highly trained personnel to support advanced joining processes for the NWC and private industry.

  3. Precision laser cutting

    SciTech Connect

    Kautz, D.D.; Anglin, C.D.; Ramos, T.J.

    1990-01-19

    Many materials that are otherwise difficult to fabricate can be cut precisely with lasers. This presentation discusses the advantages and limitations of laser cutting for refractory metals, ceramics, and composites. Cutting in these materials was performed with a 400-W, pulsed Nd:YAG laser. Important cutting parameters such as beam power, pulse waveforms, cutting gases, travel speed, and laser coupling are outlined. The effects of process parameters on cut quality are evaluated. Three variables are used to determine the cut quality: kerf width, slag adherence, and metallurgical characteristics of recast layers and heat-affected zones around the cuts. Results indicate that ductile materials with good coupling characteristics (such as stainless steel alloys and tantalum) cut well. Materials lacking one or both of these properties (such as tungsten and ceramics) are difficult to cut without proper part design, stress relief, or coupling aids. 3 refs., 2 figs., 1 tab.

  4. 40 CFR 91.314 - Analyzer accuracy and specifications.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... 91.314 Section 91.314 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Provisions § 91.314 Analyzer accuracy and specifications. (a) Measurement accuracy—general. The analyzers... precision is defined as 2.5 times the standard deviation(s) of 10 repetitive responses to a...
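
    The precision figure quoted in this regulation, 2.5 times the standard deviation of 10 repetitive responses, can be sketched directly. The readings below are illustrative, and the use of the sample standard deviation is an assumption, since the truncated record elides the full definition:

```python
# Sketch of the analyzer precision statistic described in the regulation:
# 2.5 times the standard deviation of 10 repetitive responses to a
# calibration or span gas.
import statistics

def analyzer_precision(responses):
    """Precision = 2.5 * sample standard deviation of repeated responses."""
    assert len(responses) == 10, "the definition uses 10 repetitive responses"
    return 2.5 * statistics.stdev(responses)

# Hypothetical repeated readings of the same span gas (arbitrary units):
readings = [100.1, 99.8, 100.0, 100.3, 99.9, 100.2, 100.0, 99.7, 100.1, 99.9]
print(analyzer_precision(readings))
```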

  5. 40 CFR 91.314 - Analyzer accuracy and specifications.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... 91.314 Section 91.314 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Provisions § 91.314 Analyzer accuracy and specifications. (a) Measurement accuracy—general. The analyzers... precision is defined as 2.5 times the standard deviation(s) of 10 repetitive responses to a...

  6. Precision Spectroscopy of Tellurium

    NASA Astrophysics Data System (ADS)

    Coker, J.; Furneaux, J. E.

    2013-06-01

    Tellurium (Te_2) is widely used as a frequency reference, largely due to the fact that it has an optical transition roughly every 2-3 GHz throughout a large portion of the visible spectrum. Although a standard atlas encompassing over 5200 cm^{-1} already exists [1], Doppler broadening present in that work buries a significant portion of the features [2]. More recent studies of Te_2 exist which do not exhibit Doppler broadening, such as Refs. [3-5], and each covers different parts of the spectrum. This work adds to that knowledge a few hundred transitions in the vicinity of 444 nm, measured with high precision in order to improve measurement of the spectroscopic constants of Te_2's excited states. Using a Fabry-Perot cavity in a shock-absorbing, temperature- and pressure-regulated chamber, locked to a Zeeman-stabilized HeNe laser, we measure changes in frequency of our diode laser to ˜1 MHz precision. This diode laser is scanned over 1000 GHz for use in a saturated-absorption spectroscopy cell filled with Te_2 vapor. Details of the cavity and its short- and long-term stability are discussed, as well as spectroscopic properties of Te_2. References: J. Cariou and P. Luc, Atlas du spectre d'absorption de la molecule de tellure, Laboratoire Aime-Cotton (1980). J. Coker et al., J. Opt. Soc. Am. B {28}, 2934 (2011). J. Verges et al., Physica Scripta {25}, 338 (1982). Ph. Courteille et al., Appl. Phys. B {59}, 187 (1994). T.J. Scholl et al., J. Opt. Soc. Am. B {22}, 1128 (2005).

  7. An open investigation of the reproducibility of cancer biology research.

    PubMed

    Errington, Timothy M; Iorns, Elizabeth; Gunn, William; Tan, Fraser Elisabeth; Lomax, Joelle; Nosek, Brian A

    2014-12-10

    It is widely believed that research that builds upon previously published findings has reproduced the original work. However, it is rare for researchers to perform or publish direct replications of existing results. The Reproducibility Project: Cancer Biology is an open investigation of reproducibility in preclinical cancer biology research. We have identified 50 high impact cancer biology articles published in the period 2010-2012, and plan to replicate a subset of experimental results from each article. A Registered Report detailing the proposed experimental designs and protocols for each subset of experiments will be peer reviewed and published prior to data collection. The results of these experiments will then be published in a Replication Study. The resulting open methodology and dataset will provide evidence about the reproducibility of high-impact results, and an opportunity to identify predictors of reproducibility.

  8. Operating a real time high accuracy positioning system

    NASA Astrophysics Data System (ADS)

    Johnston, G.; Hanley, J.; Russell, D.; Vooght, A.

    2003-04-01

    The paper shall review the history and development of real-time DGPS services before describing the design of a high-accuracy GPS commercial augmentation system and service currently delivering over a wide area to users of precise positioning products. The infrastructure and system shall be explained in relation to the need for high accuracy and high integrity of positioning for users. A comparison of the different techniques for the delivery of data shall be provided to outline the technical approach taken. Examples of the performance of the real-time system shall be shown in various regions and modes to outline the currently achievable accuracies. Having described and established the current GPS-based situation, a review of the potential of the Galileo system shall be presented. Following brief contextual information relating to the Galileo project, core system and services, the paper will identify possible key applications and the main user communities for sub-decimetre-level precise positioning. The paper will address the Galileo and modernised GPS signals in space that are relevant to commercial precise positioning for the future and will discuss the implications for precise positioning performance. An outline of the proposed architecture shall be described and associated with pointers towards a successful implementation. Central to this discussion will be an assessment of the likely evolution of system infrastructure and user equipment implementation, prospects for new applications and their effect upon the business case for precise positioning services.

  9. Reproducibility of Fluorescent Expression from Engineered Biological Constructs in E. coli

    PubMed Central

    Beal, Jacob; Haddock-Angelli, Traci; Gershater, Markus; de Mora, Kim; Lizarazo, Meagan; Hollenhorst, Jim; Rettberg, Randy

    2016-01-01

    We present results of the first large-scale interlaboratory study carried out in synthetic biology, as part of the 2014 and 2015 International Genetically Engineered Machine (iGEM) competitions. Participants at 88 institutions around the world measured fluorescence from three engineered constitutive constructs in E. coli. Few participants were able to measure absolute fluorescence, so data was analyzed in terms of ratios. Precision was strongly related to fluorescent strength, ranging from 1.54-fold standard deviation for the ratio between strong promoters to 5.75-fold for the ratio between the strongest and weakest promoter, and while host strain did not affect expression ratios, choice of instrument did. This result shows that high quantitative precision and reproducibility of results is possible, while at the same time indicating areas needing improved laboratory practices. PMID:26937966
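
    A "fold standard deviation" of ratios, as reported above, corresponds to a geometric standard deviation: the exponential of the standard deviation of the log-ratios. A sketch with hypothetical lab measurements (not the iGEM study's data):

```python
# Sketch: fold standard deviation (geometric SD) of expression ratios
# measured by several labs, exp(sd(log(ratios))).
import math
import statistics

def fold_sd(ratios):
    """Geometric standard deviation of a set of positive ratios."""
    logs = [math.log(r) for r in ratios]
    return math.exp(statistics.stdev(logs))

# One promoter ratio as measured by six hypothetical labs:
labs = [2.1, 1.8, 2.5, 1.6, 2.9, 2.2]
print(fold_sd(labs))  # roughly a 1.2-fold spread across labs
```

    A fold-SD of 1.54 means a typical lab's ratio differs from the geometric mean by a factor of about 1.54 in either direction.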

  10. Reproducibility of Fluorescent Expression from Engineered Biological Constructs in E. coli.

    PubMed

    Beal, Jacob; Haddock-Angelli, Traci; Gershater, Markus; de Mora, Kim; Lizarazo, Meagan; Hollenhorst, Jim; Rettberg, Randy

    2016-01-01

    We present results of the first large-scale interlaboratory study carried out in synthetic biology, as part of the 2014 and 2015 International Genetically Engineered Machine (iGEM) competitions. Participants at 88 institutions around the world measured fluorescence from three engineered constitutive constructs in E. coli. Few participants were able to measure absolute fluorescence, so data was analyzed in terms of ratios. Precision was strongly related to fluorescent strength, ranging from 1.54-fold standard deviation for the ratio between strong promoters to 5.75-fold for the ratio between the strongest and weakest promoter, and while host strain did not affect expression ratios, choice of instrument did. This result shows that high quantitative precision and reproducibility of results is possible, while at the same time indicating areas needing improved laboratory practices. PMID:26937966

  11. Reproducibility and calibration of MMC-based high-resolution gamma detectors

    DOE PAGES

    Bates, C. R.; Pies, C.; Kempf, S.; Hengstler, D.; Fleischmann, A.; Gastaldo, L.; Enss, C.; Friedrich, S.

    2016-07-15

    Here, we describe a prototype γ-ray detector based on a metallic magnetic calorimeter with an energy resolution of 46 eV at 60 keV and a reproducible response function that follows a simple second-order polynomial. The simple detector calibration allows adding high-resolution spectra from different pixels and different cool-downs without loss in energy resolution to determine γ-ray centroids with high accuracy. As an example of an application in nuclear safeguards enabled by such a γ-ray detector, we discuss the non-destructive assay of 242Pu in a mixed-isotope Pu sample.
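    The calibration idea above, that the response function follows a simple second-order polynomial fixed by a few reference lines, can be sketched as follows. The pulse heights and the quadratic relation are invented for illustration (the synthetic data lie exactly on E = 250*p + 30*p**2):

    ```python
    import numpy as np

    # Invented detector pulse heights (arb. units) and the "known" line
    # energies they correspond to, constructed to lie on a quadratic.
    pulse_heights = np.array([0.10, 0.25, 0.40, 0.60])
    known_energies = 250.0 * pulse_heights + 30.0 * pulse_heights**2  # keV

    # Fit the second-order calibration polynomial described in the abstract.
    coeffs = np.polyfit(pulse_heights, known_energies, deg=2)
    calibrate = np.poly1d(coeffs)

    # Spectra from different pixels/cool-downs sharing this response shape can
    # now be mapped onto one energy axis and summed without losing resolution.
    print("calibrated 0.25 ->", float(calibrate(0.25)), "keV")
    ```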

  12. Reproducibility and calibration of MMC-based high-resolution gamma detectors

    NASA Astrophysics Data System (ADS)

    Bates, C. R.; Pies, C.; Kempf, S.; Hengstler, D.; Fleischmann, A.; Gastaldo, L.; Enss, C.; Friedrich, S.

    2016-07-01

    We describe a prototype γ-ray detector based on a metallic magnetic calorimeter with an energy resolution of 46 eV at 60 keV and a reproducible response function that follows a simple second-order polynomial. The simple detector calibration allows adding high-resolution spectra from different pixels and different cool-downs without loss in energy resolution to determine γ-ray centroids with high accuracy. As an example of an application in nuclear safeguards enabled by such a γ-ray detector, we discuss the non-destructive assay of 242Pu in a mixed-isotope Pu sample.

  13. High precision anatomy for MEG.

    PubMed

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-02-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within session and between session head movements. Systematic errors in matching to MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were < 1.5 mm between sessions. We corroborated these scalp based estimates by looking at the MEG data recorded over a 6-month period. We found that the between session sensor variability of the subject's evoked response was of the order of the within session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. PMID:23911673
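    The bootstrap error estimates mentioned above can be sketched minimally (this is not the authors' pipeline): resample per-fiducial co-registration residuals with replacement to estimate the uncertainty of the mean residual. The residual values below are invented.

    ```python
    import random

    random.seed(0)
    residuals_mm = [0.8, 1.2, 0.9, 1.1, 0.7, 1.3, 1.0, 0.95]  # invented, in mm

    def mean(xs):
        return sum(xs) / len(xs)

    # Bootstrap: resample the residuals with replacement many times and
    # collect the mean of each resample.
    boot_means = []
    for _ in range(2000):
        sample = [random.choice(residuals_mm) for _ in residuals_mm]
        boot_means.append(mean(sample))

    boot_means.sort()
    lo, hi = boot_means[int(0.025 * 2000)], boot_means[int(0.975 * 2000)]
    print(f"mean residual {mean(residuals_mm):.2f} mm, 95% CI [{lo:.2f}, {hi:.2f}] mm")
    ```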

  14. Precision laser automatic tracking system.

    PubMed

    Lucy, R F; Peters, C J; McGann, E J; Lang, K T

    1966-04-01

    A precision laser tracker has been constructed and tested that is capable of tracking a low-acceleration target to an accuracy of about 25 microrad root mean square. In tracking high-acceleration targets, the error is directly proportional to the angular acceleration. For an angular acceleration of 0.6 rad/sec^2, the measured tracking error was about 0.1 mrad. The basic components in this tracker, similar in configuration to a heliostat, are a laser and an image dissector, which are mounted on a stationary frame, and a servocontrolled tracking mirror. The daytime sensitivity of this system is approximately 3 x 10^-10 W/m^2; the ultimate nighttime sensitivity is approximately 3 x 10^-14 W/m^2. Experimental tests were performed to evaluate both the dynamic characteristics of the system and its sensitivity. Dynamic performance was measured using a small rocket covered with retroreflective material, launched at an acceleration of about 13 g at a point 204 m from the tracker. The daytime sensitivity was checked using an efficient retroreflector mounted on a light aircraft; the aircraft was tracked out to a maximum range of 15 km, confirming the daytime sensitivity measured by other means. The system has also been used to passively track stars and the Echo I satellite. It passively tracked a +7.5 magnitude star, and the signal-to-noise ratio in that experiment indicates that it should be possible to track a +12.5 magnitude star.
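    The stated proportionality between tracking error and angular acceleration can be checked and extrapolated with a line of arithmetic (assuming strict proportionality, as the abstract states; the 2 rad/s^2 case is hypothetical):

    ```python
    # From the abstract: 0.6 rad/s^2 of target angular acceleration produced
    # about 0.1 mrad of tracking error, and error is proportional to
    # acceleration.
    measured_error_rad = 0.1e-3      # 0.1 mrad
    measured_accel = 0.6             # rad/s^2

    k = measured_error_rad / measured_accel   # lag constant, units of s^2

    # Predicted error for a hypothetical 2 rad/s^2 maneuver:
    predicted = k * 2.0
    print(f"k = {k:.2e} s^2, predicted error at 2 rad/s^2 = {predicted * 1e3:.2f} mrad")
    ```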

  15. High precision anatomy for MEG☆

    PubMed Central

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-01-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within session and between session head movements. Systematic errors in matching to MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were < 1.5 mm between sessions. We corroborated these scalp based estimates by looking at the MEG data recorded over a 6 month period. We found that the between session sensor variability of the subject's evoked response was of the order of the within session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. PMID:23911673

  16. High accuracy wavelength calibration for a scanning visible spectrometer

    SciTech Connect

    Scotti, Filippo; Bell, Ronald E.

    2010-10-15

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ~0.025 Å has been demonstrated. With the addition of a high-resolution (0.075 arcsec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements within ~0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
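    Why a sine drive linearizes the scan (a sketch, not the authors' code): for a Czerny-Turner-style scanning monochromator the grating equation gives m·λ = 2d·cos(φ)·sin(θ), with θ the grating rotation angle and 2φ the fixed included angle, so wavelength is linear in sin(θ), which is exactly the quantity a sine drive advances linearly. The groove density and angles below are illustrative assumptions:

    ```python
    import math

    groove_density_mm = 1200.0               # grooves/mm (assumed)
    d_angstrom = 1e7 / groove_density_mm     # groove spacing in Angstrom
    order = 1
    phi = math.radians(10.0)                 # half the included angle (assumed)

    def wavelength(theta_deg):
        """Wavelength selected at grating rotation theta (degrees)."""
        return 2.0 * d_angstrom * math.cos(phi) * math.sin(math.radians(theta_deg)) / order

    print(f"theta = 20 deg -> {wavelength(20.0):.1f} A")
    ```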

  17. High Accuracy Wavelength Calibration For A Scanning Visible Spectrometer

    SciTech Connect

    Filippo Scotti and Ronald Bell

    2010-07-29

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration for a scanning spectrometer has been developed to achieve high wavelength accuracy over the visible spectrum, stable over time and environmental conditions, without the need to recalibrate after each grating movement. The method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, accuracies of ~0.025 Å have been demonstrated. With the addition of a high-resolution (0.075 arcsec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements within ~0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.

  18. Precision cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Fendt, William Ashton, Jr.

    2009-09-01

    Experimental efforts of the last few decades have brought a golden age to mankind's endeavor to understand the physical properties of the Universe throughout its history. Recent measurements of the cosmic microwave background (CMB) provide strong confirmation of the standard big bang paradigm, as well as introducing new mysteries yet unexplained by current physical models. In the following decades, even more ambitious scientific endeavors will begin to shed light on the new physics by looking at the detailed structure of the Universe at both very early and recent times. Modern data have allowed us to begin to test inflationary models of the early Universe, and the near future will bring higher precision data and much stronger tests. Cracking the codes hidden in these cosmological observables is a difficult and computationally intensive problem. The challenges will continue to increase as future experiments bring larger and more precise data sets. Because of the complexity of the problem, we are forced to use approximate techniques and make simplifying assumptions to ease the computational workload. While this has been reasonably sufficient until now, hints of the limitations of our techniques have begun to come to light. For example, the likelihood approximation used for analysis of CMB data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite was shown to have shortfalls, leading to pre-emptive conclusions drawn about current cosmological theories. It can also be shown that an approximate method used by all current analysis codes to describe the recombination history of the Universe will not be sufficiently accurate for future experiments. With a new CMB satellite scheduled for launch in the coming months, it is vital that we develop techniques to improve the analysis of cosmological data.
This work develops a novel technique that both avoids the use of approximate computational codes and allows the application of new, more precise analysis

  19. Ground Truth Accuracy Tests of GPS Seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Oberlander, D. J.; Davis, J. L.; Baena, R.; Ekstrom, G.

    2005-12-01

    As the precision of GPS determinations of site position continues to improve, the detection of smaller and faster geophysical signals becomes possible. However, a lack of independent measurements of these signals often precludes an assessment of the accuracy of such GPS position determinations. This may be particularly true for high-rate GPS applications. We have built an apparatus to assess the accuracy of GPS position determinations for high-rate applications, in particular the application known as "GPS seismology." The apparatus consists of a bidirectional, single-axis positioning table coupled to a digitally controlled stepping motor. The motor, in turn, is connected to a Field Programmable Gate Array (FPGA) chip that synchronously sequences through real historical earthquake profiles stored in Erasable Programmable Read-Only Memories (EPROMs). A GPS antenna attached to this positioning table undergoes the simulated seismic motions of the Earth's surface while collecting high-rate GPS data. The time-dependent position estimates can then be compared to the "ground truth," and the resultant GPS error spectrum can be measured. We have made extensive measurements with this system while inducing simulated seismic motions in either the horizontal plane or the vertical axis. A second, stationary GPS antenna at a distance of several meters simultaneously collected high-rate (5 Hz) GPS data. We will present the calibration of this system, describe the GPS observations and data analysis, and assess the accuracy of GPS for high-rate geophysical applications and natural hazards mitigation.
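    The comparison described above can be sketched minimally: difference the GPS position series against the commanded "ground truth" motion, then look at the RMS and the spectrum of the residual. All signals below are synthetic; only the 5 Hz rate comes from the abstract.

    ```python
    import numpy as np

    fs = 5.0                                   # Hz, matching the 5 Hz GPS rate
    t = np.arange(0, 60, 1 / fs)               # 60 s record
    truth = 0.02 * np.sin(2 * np.pi * 0.5 * t)         # 2 cm, 0.5 Hz "seismic" motion
    rng = np.random.default_rng(1)
    gps = truth + 0.003 * rng.standard_normal(t.size)  # 3 mm white measurement noise

    residual = gps - truth
    rms = np.sqrt(np.mean(residual**2))

    # One-sided amplitude spectrum of the residual = the GPS error spectrum.
    spec = np.abs(np.fft.rfft(residual)) / t.size
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)

    print(f"residual RMS: {rms * 1000:.1f} mm over {freqs[-1]:.1f} Hz bandwidth")
    ```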

  20. A Precision Variable, Double Prism Attenuator for CO(2) Lasers.

    PubMed

    Oseki, T; Saito, S

    1971-01-01

    A precision, double prism attenuator for CO(2) lasers, calibrated by its gap capacitance, was constructed to evaluate its possible use as a standard for attenuation measurements. It was found that the accuracy was about 0.1 dB with a dynamic range of about 40 dB.

  1. EVALUATION OF METRIC PRECISION FOR A RIPARIAN FOREST SURVEY

    EPA Science Inventory

    This paper evaluates the performance of a protocol to monitor riparian forests in western Oregon based on the quality of the data obtained from a recent field survey. Precision and accuracy are the criteria used to determine the quality of 19 field metrics. The field survey con...

  2. Soviet precision timekeeping research and technology

    SciTech Connect

    Vessot, R.F.C.; Allan, D.W.; Crampton, S.J.B.; Cutler, L.S.; Kern, R.H.; McCoubrey, A.O.; White, J.D.

    1991-08-01

    This report is the result of a study of Soviet progress in precision timekeeping research and timekeeping capability during the last two decades. The study was conducted by a panel of seven US scientists who have expertise in timekeeping, frequency control, time dissemination, and the direct applications of these disciplines to scientific investigation. The following topics are addressed in this report: generation of time by atomic clocks at the present level of their technology, new and emerging technologies related to atomic clocks, time and frequency transfer technology, statistical processes involving metrological applications of time and frequency, applications of precise time and frequency to scientific investigations, supporting timekeeping technology, and a comparison of Soviet research efforts with those of the United States and the West. The number of Soviet professionals working in this field is roughly 10 times that in the United States. The Soviet Union has facilities for large-scale production of frequency standards and has concentrated its efforts on developing and producing rubidium gas cell devices (relatively compact, low-cost frequency standards of modest accuracy and stability) and atomic hydrogen masers (relatively large, high-cost standards of modest accuracy and high stability). 203 refs., 45 figs., 9 tabs.
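    The report's topic of "statistical processes involving metrological applications of time and frequency" centers on the Allan variance, the standard statistic for clock stability (the sketch below is illustrative, not from the report; the data are invented):

    ```python
    import math

    def allan_deviation(y):
        """Non-overlapping Allan deviation at the basic sampling interval:
        sigma_y(tau0) = sqrt(0.5 * mean((y[i+1] - y[i])**2)),
        where y are fractional-frequency samples."""
        diffs = [(b - a) ** 2 for a, b in zip(y, y[1:])]
        return math.sqrt(0.5 * sum(diffs) / len(diffs))

    # Toy fractional-frequency data: white frequency noise around zero.
    y = [1e-12, -2e-12, 0.5e-12, 1.5e-12, -1e-12, 0.2e-12]
    print(f"sigma_y(tau0) = {allan_deviation(y):.2e}")
    ```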

  3. Glass ceramic ZERODUR enabling nanometer precision

    NASA Astrophysics Data System (ADS)

    Jedamzik, Ralf; Kunisch, Clemens; Nieder, Johannes; Westerhoff, Thomas

    2014-03-01

    The IC lithography roadmap foresees manufacturing of devices with critical dimensions of < 20 nm. Overlay specifications of single-digit nanometers call for nanometer positioning accuracy, which in turn requires sub-nanometer position measurement accuracy. The glass ceramic ZERODUR® is a well-established material for critical components of microlithography wafer steppers and is offered with an extremely low coefficient of thermal expansion (CTE), the tightest tolerance available on the market. SCHOTT is continuously improving its manufacturing processes and its methods to measure and characterize the CTE behavior of ZERODUR® to fulfill the ever tighter CTE specifications for wafer stepper components. In this paper we present the ZERODUR® lithography roadmap on CTE metrology and tolerance. Additionally, simulation calculations based on a physical model are presented that predict the long-term CTE behavior of ZERODUR® components to optimize the dimensional stability of precision positioning devices. CTE data of several low thermal expansion materials are compared regarding their temperature dependence between -50°C and +100°C. ZERODUR® TAILORED 22°C fulfills the tight CTE tolerance of ±10 ppb/K within the broadest temperature interval of all materials in this investigation. The data presented in this paper explicitly demonstrate the capability of ZERODUR® to enable the nanometer precision required for future generations of lithography equipment and processes.

  4. Prompt and Precise Prototyping

    NASA Technical Reports Server (NTRS)

    2003-01-01

    For Sanders Design International, Inc., of Wilton, New Hampshire, every passing second between the concept and realization of a product is essential to succeed in the rapid prototyping industry where amongst heavy competition, faster time-to-market means more business. To separate itself from its rivals, Sanders Design aligned with NASA's Marshall Space Flight Center to develop what it considers to be the most accurate rapid prototyping machine for fabrication of extremely precise tooling prototypes. The company's Rapid ToolMaker System has revolutionized production of high quality, small-to-medium sized prototype patterns and tooling molds with an exactness that surpasses that of computer numerically-controlled (CNC) machining devices. Created with funding and support from Marshall under a Small Business Innovation Research (SBIR) contract, the Rapid ToolMaker is a dual-use technology with applications in both commercial and military aerospace fields. The advanced technology provides cost savings in the design and manufacturing of automotive, electronic, and medical parts, as well as in other areas of consumer interest, such as jewelry and toys. For aerospace applications, the Rapid ToolMaker enables fabrication of high-quality turbine and compressor blades for jet engines on unmanned air vehicles, aircraft, and missiles.

  5. Environment Assisted Precision Magnetometry

    NASA Astrophysics Data System (ADS)

    Cappellaro, P.; Goldstein, G.; Maze, J. R.; Jiang, L.; Hodges, J. S.; Sorensen, A. S.; Lukin, M. D.

    2010-03-01

    We describe a method to enhance the sensitivity of magnetometry and achieve nearly Heisenberg-limited precision measurement using a novel class of entangled states. An individual qubit is used to sense the dynamics of surrounding ancillary qubits, which are in turn affected by the external field to be measured. The resulting sensitivity enhancement is determined by the number of ancillas strongly coupled to the sensor qubit; it does not depend on the exact values of the couplings (allowing the use of disordered systems) and is resilient to decoherence. As a specific example we consider electronic spins in the solid state, where the ancillary system is associated with the surrounding spin bath. The conventional approach has been to consider these spins only as a source of decoherence and to adopt decoupling schemes to mitigate their effects. Here we describe novel control techniques that transform the environment spins into a resource used to amplify the sensor spin's response to weak external perturbations, while maintaining the beneficial effects of dynamical decoupling sequences. We discuss specific applications to improved magnetic sensing with diamond nanocrystals, using one nitrogen-vacancy center spin coupled to nitrogen electronic spins.

  6. Accuracy Assessment of Coastal Topography Derived from Uav Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

    To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with similar accuracy. To evaluate DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and the photogrammetry process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy was estimated by comparison with GNSS surveys. Two parameters were tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with high vertical accuracy (< 10 cm). Georeferencing the DSM requires a homogeneous distribution and a large number of GCPs; the accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm), and the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank does not improve the accuracy when the spatial resolution of the images is decreased.
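    The accuracy assessment described above reduces to comparing DSM elevations at GNSS check points and reporting bias and RMSE; a minimal sketch with invented heights:

    ```python
    import math

    # Invented check-point heights (m): GNSS-surveyed vs. read off the DSM.
    gnss_z = [2.10, 2.45, 1.98, 3.02, 2.77, 2.31]
    dsm_z  = [2.16, 2.41, 2.05, 3.10, 2.70, 2.38]

    diffs = [d - g for d, g in zip(dsm_z, gnss_z)]
    bias = sum(diffs) / len(diffs)                           # mean vertical offset
    rmse = math.sqrt(sum(e * e for e in diffs) / len(diffs)) # vertical RMSE

    print(f"vertical bias {bias * 100:+.1f} cm, RMSE {rmse * 100:.1f} cm")
    ```

    With these invented numbers the RMSE comes out under 10 cm, the same order as the vertical accuracy the abstract reports.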

  7. Reproducible and controllable induction voltage adder for scaled beam experiments

    NASA Astrophysics Data System (ADS)

    Sakai, Yasuo; Nakajima, Mitsuo; Horioka, Kazuhiko

    2016-08-01

    A reproducible and controllable induction adder was developed using solid-state switching devices and Finemet cores for scaled beam compression experiments. A gate controlled MOSFET circuit was developed for the controllable voltage driver. The MOSFET circuit drove the induction adder at low magnetization levels of the cores which enabled us to form reproducible modulation voltages with jitter less than 0.3 ns. Preliminary beam compression experiments indicated that the induction adder can improve the reproducibility of modulation voltages and advance the beam physics experiments.

  8. Reproducible and controllable induction voltage adder for scaled beam experiments.

    PubMed

    Sakai, Yasuo; Nakajima, Mitsuo; Horioka, Kazuhiko

    2016-08-01

    A reproducible and controllable induction adder was developed using solid-state switching devices and Finemet cores for scaled beam compression experiments. A gate controlled MOSFET circuit was developed for the controllable voltage driver. The MOSFET circuit drove the induction adder at low magnetization levels of the cores which enabled us to form reproducible modulation voltages with jitter less than 0.3 ns. Preliminary beam compression experiments indicated that the induction adder can improve the reproducibility of modulation voltages and advance the beam physics experiments. PMID:27587112

  9. Improving the precision of astrometry for space debris

    SciTech Connect

    Sun, Rongyu; Zhao, Changyin; Zhang, Xiaoxiang

    2014-03-01

    The data reduction method for optical space debris observations has many similarities with the one adopted for surveying near-Earth objects; however, due to several specific issues, the image degradation is particularly critical, which makes it difficult to obtain precise astrometry. An automatic image reconstruction method was developed to improve the astrometric precision for space debris, based on mathematical morphology operators. Variable structuring elements along multiple directions are adopted for image transformation, and all the resultant images are then stacked to obtain a final result. To investigate its efficiency, trial observations were made with Global Positioning System satellites, and the astrometric accuracy improvement was obtained by comparison with the reference positions. The results of our experiments indicate that the influence of degradation in astrometric CCD images is reduced, and the position accuracy of both objects and reference stars is improved markedly. Our technique will contribute significantly to optical data reduction and high-order precision astrometry for space debris.
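    The multi-directional morphology idea described above can be sketched as follows (a simplified illustration, not the authors' code): grey-scale opening with line structuring elements along several directions, then stacking the openings by pointwise maximum, so that linear features such as debris trails survive in at least one direction while isotropic noise is suppressed.

    ```python
    import numpy as np

    def shifted(a, dy, dx, fill):
        """Copy of `a` with shifted(a)[y, x] = a[y + dy, x + dx], padded with `fill`."""
        out = np.full_like(a, fill)
        h, w = a.shape
        out[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)] = \
            a[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
        return out

    def open_along(a, direction, length=5):
        """Grey opening (erosion then dilation) with a line element along `direction`."""
        dy, dx = direction
        offs = [(dy * k, dx * k) for k in range(-(length // 2), length // 2 + 1)]
        eroded = np.minimum.reduce([shifted(a, oy, ox, a.max()) for oy, ox in offs])
        return np.maximum.reduce([shifted(eroded, oy, ox, a.min()) for oy, ox in offs])

    def directional_stack(a, length=5):
        """Stack openings along four directions by pointwise maximum."""
        dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
        return np.maximum.reduce([open_along(a, d, length) for d in dirs])

    # Synthetic frame: flat background plus a short horizontal streak (a "trail").
    img = np.zeros((20, 20))
    img[10, 5:12] = 1.0
    out = directional_stack(img)
    print("streak preserved:", bool(out[10, 8] == 1.0))
    ```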

  10. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November, 1980 dealing with Landsat classification Accuracy Assessment Procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.

  11. Precision positioning of earth orbiting remote sensing systems

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.; Yunck, T. P.; Wu, S. C.

    1987-01-01

    Decimeter tracking accuracy is sought for a number of precise earth sensing satellites to be flown in the 1990's. This accuracy can be achieved with techniques which use the Global Positioning System (GPS) in a differential mode. A precisely located global network of GPS ground receivers and a receiver aboard the user satellite are needed, and all techniques simultaneously estimate the user and GPS satellite states. Three basic navigation approaches include classical dynamic, wholly nondynamic, and reduced dynamic or hybrid formulations. The first two are simply special cases of the third, which promises to deliver subdecimeter accuracy for dynamically unpredictable vehicles down to the lowest orbit altitudes. The potential of these techniques for tracking and gravity field recovery will be demonstrated on NASA's Topex satellite beginning in 1991. Applications to the Shuttle, Space Station, and dedicated remote sensing platforms are being pursued.

  12. Precision medicine in myasthenia gravis: begin from the data precision

    PubMed Central

    Hong, Yu; Xie, Yanchen; Hao, Hong-Jun; Sun, Ren-Cheng

    2016-01-01

    Myasthenia gravis (MG) is a prototypic autoimmune disease with overt clinical and immunological heterogeneity. MG data are far from individually precise at present, partly due to the rarity and heterogeneity of this disease. In this review, we provide basic insights into MG data precision, including onset age, presenting symptoms, generalization, thymus status, pathogenic autoantibodies, muscle involvement, severity, and response to treatment, based on the literature and our previous studies. Subgroups and quantitative traits of MG are discussed in the sense of data precision. The role of disease registries and the scientific basis of precise analysis are also discussed to ensure better collection and analysis of MG data. PMID:27127759

  13. Precise Truss Assembly using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2013-01-01

    We describe an Intelligent Precision Jigging Robot (IPJR), which allows high precision assembly of commodity parts with low-precision bonding. We present preliminary experiments in 2D that are motivated by the problem of assembling a space telescope optical bench on orbit using inexpensive, stock hardware and low-precision welding. An IPJR is a robot that acts as the precise "jigging", holding parts of a local assembly site in place while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (in this case, a human using only low precision tools), to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. We report the challenges of designing the IPJR hardware and software, analyze the error in assembly, document the test results over several experiments including a large-scale ring structure, and describe future work to implement the IPJR in 3D and with micron precision.

  14. Precise Truss Assembly Using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, W. R.; Correll, Nikolaus

    2014-01-01

    Hardware and software design and system integration for an intelligent precision jigging robot (IPJR), which allows high precision assembly using commodity parts and low-precision bonding, is described. Preliminary 2D experiments that are motivated by the problem of assembling space telescope optical benches and very large manipulators on orbit using inexpensive, stock hardware and low-precision welding are also described. An IPJR is a robot that acts as the precise "jigging", holding parts of a local structure assembly site in place, while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (for this prototype, a human using only low precision tools), to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. The analysis of the assembly error and the results of building a square structure and a ring structure are discussed. Options for future work, to extend the IPJR paradigm to building in 3D structures at micron precision are also summarized.

  15. Setup Reproducibility for Thoracic and Upper Gastrointestinal Radiation Therapy: Influence of Immobilization Method and On-Line Cone-Beam CT Guidance

    SciTech Connect

    Li, Winnie; Moseley, Douglas J.; Bissonnette, Jean-Pierre; Purdie, Thomas G.; Bezjak, Andrea; Jaffray, David A.

    2010-01-01

    We report the setup reproducibility of thoracic and upper gastrointestinal (UGI) radiotherapy (RT) patients for 2 immobilization methods evaluated through cone-beam computed tomography (CBCT) image guidance, and present planning target volume (PTV) margin calculations made on the basis of these observations. Daily CBCT images from 65 patients immobilized in a chestboard (CB) or evacuated cushion (EC) were registered to the planning CT using automatic bony anatomy registration. The standardized region-of-interest for matching was focused around vertebral bodies adjacent to tumor location. Discrepancies >3 mm between the CBCT and CT datasets were corrected before initiation of RT and verified with a second CBCT to assess residual error (usually taken after 90 s of the initial CBCT). Positional data were analyzed to evaluate the magnitude and frequencies of setup errors before and after correction. The setup distributions were slightly different for the CB (797 scans) and EC (757 scans) methods, and the probability of adjustment at a 3-mm action threshold was not significantly different (p = 0.47). Setup displacements >10 mm in any direction were observed in 10% of CB fractions and 16% of EC fractions (p = 0.0008). Residual error distributions after CBCT guidance were equivalent regardless of immobilization method. Using a published formula, the PTV margins for the CB were L/R, 3.3 mm; S/I, 3.5 mm; and A/P, 4.6 mm), and for EC they were L/R, 3.7 mm; S/I, 3.3 mm; and A/P, 4.6 mm. In the absence of image guidance, the CB slightly outperformed the EC in precision. CBCT allows reduction to a single immobilization system that can be chosen for efficiency, logistics, and cost. Image guidance allows for increased geometric precision and accuracy and supports a corresponding reduction in PTV margin.
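    The PTV margins above come from "a published formula"; a widely used recipe of this kind is the van Herk margin M = 2.5·Σ + 0.7·σ, where Σ is the systematic and σ the random setup error per axis. Whether this is the exact formula the authors used is an assumption, and the error components below are invented for illustration:

    ```python
    # Van Herk-style per-axis PTV margin (mm). Sigma = systematic setup error,
    # sigma = random setup error; both are illustrative values, not from the study.
    def ptv_margin(systematic_mm, random_mm):
        return 2.5 * systematic_mm + 0.7 * random_mm

    # Example: 1.0 mm systematic, 1.5 mm random in one axis.
    print(f"margin = {ptv_margin(1.0, 1.5):.2f} mm")  # 2.5 + 1.05 = 3.55 mm
    ```

    The result is the same order as the 3.3 to 4.6 mm margins reported above.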

  16. Repeatability and Reproducibility of Compression Strength Measurements Conducted According to ASTM E9

    NASA Technical Reports Server (NTRS)

    Luecke, William E.; Ma, Li; Graham, Stephen M.; Adler, Matthew A.

    2010-01-01

    Ten commercial laboratories participated in an interlaboratory study to establish the repeatability and reproducibility of compression strength tests conducted according to ASTM International Standard Test Method E9. The test employed a cylindrical aluminum AA2024-T351 test specimen. Participants measured elastic modulus and 0.2 % offset yield strength, YS(0.2 % offset), using an extensometer attached to the specimen. The repeatability and reproducibility of the yield strength measurement, expressed as coefficients of variation, were cv(sub r) = 0.011 and cv(sub R) = 0.020. The reproducibility of the test across the laboratories was among the best that has been reported for uniaxial tests. The reported data indicated that using diametrically opposed extensometers, instead of a single extensometer, doubled the precision of the test method. Laboratories that did not lubricate the ends of the specimen measured yield stresses and elastic moduli that were smaller than those measured in laboratories that lubricated the specimen ends. A finite element analysis of the test specimen deformation for frictionless and perfect friction could not explain the discrepancy, however. The moduli measured from stress-strain data were reanalyzed using a technique that finds the optimal fit range and applies several quality checks to the data. The error in modulus measurements from stress-strain curves generally increased as the fit range decreased to less than 40 % of the stress range.
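
    The repeatability and reproducibility coefficients of variation quoted above can be computed from a balanced interlaboratory dataset with an ASTM E691-style variance decomposition. A sketch assuming an equal number of replicates in every laboratory; the data in the usage example are illustrative, not the study's:

```python
import statistics

def precision_cvs(results_by_lab):
    """Repeatability (cv_r) and reproducibility (cv_R) coefficients of variation.

    results_by_lab: list of lists, replicate measurements per laboratory
    (balanced design: same replicate count n in every lab).
    """
    n = len(results_by_lab[0])
    cell_means = [statistics.mean(lab) for lab in results_by_lab]
    grand_mean = statistics.mean(cell_means)
    # Repeatability variance: pooled within-lab (cell) variance.
    s_r2 = statistics.mean(statistics.variance(lab) for lab in results_by_lab)
    # Between-lab variance component, floored at zero.
    s_L2 = max(statistics.variance(cell_means) - s_r2 / n, 0.0)
    # Reproducibility variance: within-lab plus between-lab components.
    s_R2 = s_L2 + s_r2
    return (s_r2 ** 0.5) / grand_mean, (s_R2 ** 0.5) / grand_mean

cv_r, cv_R = precision_cvs([[100.0, 102.0], [104.0, 106.0]])
```

    By construction cv_R ≥ cv_r, matching the reported 0.020 versus 0.011: between-laboratory scatter adds to, and can never reduce, the within-laboratory scatter.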

  17. Reproducibility of parameters of postocclusive reactive hyperemia measured by diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Vidal-Rosas, Ernesto E.; Billings, Stephen A.; Chico, Timothy; Coca, Daniel

    2016-06-01

    The application of near-infrared spectroscopy (NIRS) to assess microvascular function has shown promising results. An important limitation when using a single source-detector pair, however, is the lack of depth sensitivity. Diffuse optical tomography (DOT) overcomes this limitation using an array of sources and detectors that allow the reconstruction of volumetric hemodynamic changes. This study compares the key parameters of postocclusive reactive hyperemia measured in the forearm using standard NIRS and DOT. We show that while the mean parameter values are similar for the two techniques, DOT achieves much better reproducibility, as measured by the intraclass correlation coefficient (ICC). We show that DOT achieves high reproducibility for muscle oxygen consumption (ICC: 0.99), time to maximal HbO2 (ICC: 0.94), maximal HbO2 (ICC: 0.99), and time to maximal HbT (ICC: 0.99). Absolute reproducibility as measured by the standard error of measurement is consistently smaller and close to zero (ideal value) across all parameters measured by DOT compared to NIRS. We conclude that DOT provides a more robust characterization of the reactive hyperemic response and show how the availability of volumetric hemodynamic changes allows the identification of areas of temporal consistency, which could help characterize more precisely the microvasculature.
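
    The abstract reports ICCs and standard errors of measurement without specifying which ICC form was used; a common choice for test-retest reproducibility is the one-way random-effects ICC(1,1), sketched here on hypothetical session data:

```python
import statistics

def icc_oneway(subjects):
    """One-way random-effects ICC(1,1) and SEM for balanced test-retest data.

    subjects: list of per-subject lists, one measurement per session.
    A sketch; the paper's exact ICC variant may differ.
    """
    n = len(subjects)        # number of subjects
    k = len(subjects[0])     # sessions per subject (balanced design)
    means = [statistics.mean(s) for s in subjects]
    grand = statistics.mean(x for s in subjects for x in s)
    # Between-subject and within-subject mean squares.
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for s, m in zip(subjects, means) for x in s) / (n * (k - 1))
    icc = (msb - msw) / (msb + (k - 1) * msw)
    sem = msw ** 0.5         # standard error of measurement
    return icc, sem
```

    An ICC near 1 with an SEM near 0, as DOT achieved for muscle oxygen consumption and maximal HbO2, means session-to-session noise is negligible relative to between-subject differences.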

  18. Standardization of Hemagglutination Inhibition Assay for Influenza Serology Allows for High Reproducibility between Laboratories.

    PubMed

    Zacour, Mary; Ward, Brian J; Brewer, Angela; Tang, Patrick; Boivin, Guy; Li, Yan; Warhuus, Michelle; McNeil, Shelly A; LeBlanc, Jason J; Hatchette, Todd F

    2016-03-01

    Standardization of the hemagglutination inhibition (HAI) assay for influenza serology is challenging. Poor reproducibility of HAI results from one laboratory to another is widely cited, limiting comparisons between candidate vaccines in different clinical trials and posing challenges for licensing authorities. In this study, we standardized HAI assay materials, methods, and interpretive criteria across five geographically dispersed laboratories of a multidisciplinary influenza research network and then evaluated intralaboratory and interlaboratory variations in HAI titers by repeatedly testing standardized panels of human serum samples. Duplicate precision and reproducibility from comparisons between assays within laboratories were 99.8% (99.2% to 100%) and 98.0% (93.3% to 100%), respectively. The results for 98.9% (95% to 100%) of the samples were within 2-fold of all-laboratory consensus titers, and the results for 94.3% (85% to 100%) of the samples were within 2-fold of our reference laboratory data. Low-titer samples showed the greatest variability in comparisons between assays and between sites. Classification of seroprotection (titer ≥ 40) was accurate in 93.6% or 89.5% of cases in comparison to the consensus or reference laboratory classification, respectively. This study showed that with carefully chosen standardization processes, high reproducibility of HAI results between laboratories is indeed achievable. PMID:26818953
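
    The agreement criteria above (titers within 2-fold of a consensus value, seroprotection at titer ≥ 40) are simple to make explicit. The helper names and example titers below are illustrative, not taken from the study:

```python
def within_twofold(titer: float, consensus: float) -> bool:
    """True if an HAI titer is within one 2-fold dilution of the consensus."""
    return consensus / 2 <= titer <= consensus * 2

def seroprotected(titer: float, threshold: float = 40) -> bool:
    """Conventional HAI seroprotection cutoff: titer >= 40."""
    return titer >= threshold

# Fraction of hypothetical titers within 2-fold of a consensus titer of 40:
titers = [20, 40, 80, 160]
agreement = sum(within_twofold(t, 40) for t in titers) / len(titers)  # 0.75
```

    Because HAI titers fall on a discrete 2-fold dilution series, a one-dilution-step disagreement between laboratories still counts as within 2-fold, which is why standardized protocols can reach the high concordance reported.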

  19. Reproducibility of ERG responses obtained with the DTL electrode.

    PubMed

    Hébert, M; Vaegan; Lachapelle, P

    1999-03-01

    Previous investigators have suggested that the DTL fibre electrode might not be suitable for recording replicable electroretinograms. We present experimental evidence that, when used properly, this electrode does permit the recording of highly reproducible retinal potentials.

  20. 15. REPRODUCED FROM 'GRIST WIND MILLS AT EAST HAMPTON,' PICTURESQUE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. REPRODUCED FROM 'GRIST WIND MILLS AT EAST HAMPTON,' PICTURESQUE AMERICA NEW YORK, 1872. THE HOOD WINDMILL IS IN THE FOREGROUND AND THE PANTIGO WINDMILL IS IN THE BACKGROUND - Pantigo Windmill, James Lane, East Hampton, Suffolk County, NY