Science.gov

Sample records for accuracy precision reproducibility

  1. Community-based Approaches to Improving Accuracy, Precision, and Reproducibility in U-Pb and U-Th Geochronology

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Condon, D. J.; Bowring, S. A.; Schoene, B.; Dutton, A.; Rubin, K. H.

    2015-12-01

The last two decades have seen a grassroots effort by the international geochronology community to "calibrate Earth history through teamwork and cooperation," both as part of the EARTHTIME initiative and through several daughter projects with similar goals. Its mission originally challenged laboratories "to produce temporal constraints with uncertainties approaching 0.1% of the radioisotopic ages," but EARTHTIME has since exceeded its charge in many ways. Both the U-Pb and Ar-Ar chronometers first considered for high-precision timescale calibration now regularly produce dates at the sub-per-mil level thanks to instrumentation, laboratory, and software advances. At the same time new isotope systems, including U-Th dating of carbonates, have developed comparable precision. But the larger, inter-related scientific challenges envisioned at EARTHTIME's inception remain - for instance, precisely calibrating the global geologic timescale, estimating rates of change around major climatic perturbations, and understanding evolutionary rates through time - and increasingly require that data from multiple geochronometers be combined. To solve these problems, the next two decades of uranium-daughter geochronology will require further advances in accuracy, precision, and reproducibility. The U-Th system has much in common with U-Pb, in that both parent and daughter isotopes are solids that can easily be weighed and dissolved in acid, and both have well-characterized reference materials certified for isotopic composition and/or purity. For U-Pb, improving lab-to-lab reproducibility has entailed dissolving precisely weighed U and Pb metals of known purity and isotopic composition together to make gravimetric solutions, then using these to calibrate widely distributed tracers composed of artificial U and Pb isotopes. To mimic laboratory measurements, naturally occurring U and Pb isotopes were also mixed in proportions corresponding to samples of three different ages, to be run as internal

  2. Accuracy, Precision, and Reproducibility of Four T1 Mapping Sequences: A Head-to-Head Comparison of MOLLI, ShMOLLI, SASHA, and SAPPHIRE

    PubMed Central

    Roujol, Sébastien; Weingärtner, Sebastian; Foppa, Murilo; Chow, Kelvin; Kawaji, Keigo; Ngo, Long H.; Kellman, Peter; Manning, Warren J.; Thompson, Richard B.

    2014-01-01

    Purpose To compare accuracy, precision, and reproducibility of four commonly used myocardial T1 mapping sequences: modified Look-Locker inversion recovery (MOLLI), shortened MOLLI (ShMOLLI), saturation recovery single-shot acquisition (SASHA), and saturation pulse prepared heart rate independent inversion recovery (SAPPHIRE). Materials and Methods This HIPAA-compliant study was approved by the institutional review board. All subjects provided written informed consent. Accuracy, precision, and reproducibility of the four T1 mapping sequences were first compared in phantom experiments. In vivo analysis was performed in seven healthy subjects (mean age ± standard deviation, 38 years ± 19; four men, three women) who were imaged twice on two separate days. In vivo reproducibility of native T1 mapping and extracellular volume (ECV) were measured. Differences between the sequences were assessed by using Kruskal-Wallis and Wilcoxon rank sum tests (phantom data) and mixed-effect models (in vivo data). Results T1 mapping accuracy in phantoms was lower with ShMOLLI (62 msec) and MOLLI (44 msec) than with SASHA (13 msec; P < .05) and SAPPHIRE (12 msec; P < .05). MOLLI had similar precision to ShMOLLI (4.0 msec vs 5.6 msec; P = .07) but higher precision than SAPPHIRE (6.8 msec; P = .002) and SASHA (8.7 msec; P < .001). All sequences had similar reproducibility in phantoms (P = .1). The four sequences had similar in vivo reproducibility for native T1 mapping (∼25–50 msec; P > .05) and ECV quantification (∼0.01–0.02; P > .05). Conclusion SASHA and SAPPHIRE yield higher accuracy, lower precision, and similar reproducibility compared with MOLLI and ShMOLLI for T1 measurement. Different sequences yield different ECV values; however, all sequences have similar reproducibility for ECV quantification. © RSNA, 2014 Online supplemental material is available for this article. PMID:24702727

  3. Accuracy and reproducibility of cholesterol assay in the western Cape.

    PubMed

    Berger, G M; Christopher, K; Juritz, J M; Liesegang, F

    1988-11-19

    The accuracy and precision of cholesterol assay in the western Cape region is reported. The survey was carried out over 15 weeks utilising three human EDTA plasma pools with normal, borderline high and high cholesterol levels respectively. All 11 laboratories in the region providing a service to academic, provincial or military hospitals or to the private medical sector were included in the study. Ten of the 11 laboratories utilised automated enzymatic methods of cholesterol assay whereas 1 used a manual procedure based on the Liebermann-Burchard reaction. Methods were standardised by means of a variety of commercial calibrator material in all except 1 laboratory which used reference sera from the Centers for Disease Control, Atlanta. The performance of the 4 best laboratories met the standard of precision recommended for cholesterol assay, viz. total coefficient of variation of less than or equal to 2.5%. However, only 2 of the 11 laboratories achieved the optimum objective of an overall bias of less than 2.0% together with precision of less than or equal to 2.5%. Rational use of cholesterol assay for diagnosis and management will therefore require standardisation of cholesterol assay on a common reference material and greater attention to analytical factors influencing the reproducibility of results. Intrinsic biological variation also contributes uncertainty to the interpretation of a single value. Thus important clinical decisions must be based on two or more assays carried out using appropriate methodology.
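The study's acceptance criteria (total coefficient of variation ≤ 2.5%, overall bias < 2.0%) amount to a short computation; a minimal sketch, where the replicate values and assigned pool concentration are hypothetical and chosen only to illustrate the arithmetic:

```python
import statistics

def evaluate_assay(measured, reference_value):
    """Summarize an assay's precision (CV%) and accuracy (bias%)
    from replicate measurements of a reference pool."""
    mean = statistics.mean(measured)
    cv_pct = 100 * statistics.stdev(measured) / mean
    bias_pct = 100 * (mean - reference_value) / reference_value
    return cv_pct, bias_pct

# Hypothetical replicate cholesterol results (mmol/L) on a pool
# with an assigned value of 5.20 mmol/L.
cv, bias = evaluate_assay([5.31, 5.25, 5.28, 5.35, 5.22], 5.20)
print(f"CV = {cv:.2f}%, bias = {bias:.2f}%")
# The study's standard: CV <= 2.5% together with |bias| < 2.0%.
print("meets criteria:", cv <= 2.5 and abs(bias) < 2.0)
```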

  4. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

Based on the spatial relation between a primary and secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material, and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF, and sheet metal. The results show that in most situations the best performance (accuracy and precision) is obtained with the probing method; only at the lowest angles of incidence did either the ellipse or lead-in method perform better. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value for the precision by means of a confidence interval of the specific measurement.
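The ellipse method referred to above is commonly described as recovering the angle of incidence from the axes of the elliptical defect via sin(θ) = width / length. A minimal sketch under that assumed relation, with the defect dimensions invented for illustration:

```python
import math

def ellipse_method_angle(width_mm, length_mm):
    """Estimate the angle of incidence (degrees, measured from the
    target surface) from the minor and major axes of an elliptical
    bullet defect, assuming sin(theta) = width / length."""
    return math.degrees(math.asin(width_mm / length_mm))

# A defect 9 mm wide and 18 mm long gives sin(theta) = 0.5,
# i.e. an incidence angle of about 30 degrees from the surface.
print(f"{ellipse_method_angle(9.0, 18.0):.1f} degrees")
```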

  5. Precision cosmology, Accuracy cosmology and Statistical cosmology

    NASA Astrophysics Data System (ADS)

    Verde, Licia

    2014-05-01

The avalanche of data over the past 10-20 years has propelled cosmology into the "precision era". The next challenge cosmology has to meet is to enter the era of accuracy. Because of the intrinsic nature of studying the Cosmos and the sheer amount of data available now and coming soon, the only way to meet this challenge is by developing suitable and specific statistical techniques. The road from precision Cosmology to accurate Cosmology goes through statistical Cosmology. I will outline some open challenges and discuss some specific examples.

  6. Ultra-wideband ranging precision and accuracy

    NASA Astrophysics Data System (ADS)

    MacGougan, Glenn; O'Keefe, Kyle; Klukas, Richard

    2009-09-01

    This paper provides an overview of ultra-wideband (UWB) in the context of ranging applications and assesses the precision and accuracy of UWB ranging from both a theoretical perspective and a practical perspective using real data. The paper begins with a brief history of UWB technology and the most current definition of what constitutes an UWB signal. The potential precision of UWB ranging is assessed using Cramer-Rao lower bound analysis. UWB ranging methods are described and potential error sources are discussed. Two types of commercially available UWB ranging radios are introduced which are used in testing. Actual ranging accuracy is assessed from line-of-sight testing under benign signal conditions by comparison to high-accuracy electronic distance measurements and to ranges derived from GPS real-time kinematic positioning. Range measurements obtained in outdoor testing with line-of-sight obstructions and strong reflection sources are compared to ranges derived from classically surveyed positions. The paper concludes with a discussion of the potential applications for UWB ranging.
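The Cramer-Rao analysis mentioned above has a standard closed form for time-of-arrival estimation: var(τ̂) ≥ 1 / (8π² · SNR · β²), where β is the RMS signal bandwidth and SNR is linear. A small sketch of the resulting bound on ranging precision; the textbook formula is assumed, and the bandwidth and SNR figures are illustrative, not taken from the paper:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def crlb_range_std(snr_db, rms_bandwidth_hz):
    """Cramer-Rao lower bound on the standard deviation of a
    time-of-arrival range estimate, using the textbook relation
        var(tau) >= 1 / (8 * pi^2 * SNR * beta^2),
    converted to range via sigma_range = c * sigma_tau."""
    snr = 10 ** (snr_db / 10)
    sigma_tau = 1.0 / (2 * math.pi * rms_bandwidth_hz * math.sqrt(2 * snr))
    return C * sigma_tau

# A 500 MHz RMS-bandwidth UWB pulse at 20 dB SNR bounds the
# achievable ranging precision at the sub-centimetre level.
print(f"{crlb_range_std(20, 500e6) * 1000:.2f} mm")
```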

  7. Characterizing geometric accuracy and precision in image guided gated radiotherapy

    NASA Astrophysics Data System (ADS)

    Tenn, Stephen Edward

    Gated radiotherapy combined with intensity modulated or three-dimensional conformal radiotherapy for tumors in the thorax and abdomen can deliver dose distributions which conform closely to tumor shapes allowing increased tumor dose while sparing healthy tissues. These conformal fields require more accurate and precise placement than traditional fields or tumors may receive suboptimal dose thereby reducing tumor control probability. Image guidance based on four-dimensional computed tomography (4DCT) provides a means to improve accuracy and precision in radiotherapy. The ability of 4DCT to accurately reproduce patient geometry and the ability of image guided gating equipment to position tumors and place fields around them must be characterized in order to determine treatment parameters such as tumor margins. Fiducial based methods of characterizing accuracy and precision of equipment for 4DCT planning and image guided gated radiotherapy (IGGRT) are presented with results for specific equipment. Fiducial markers of known geometric orientation are used to characterize 4DCT image reconstruction accuracy. Accuracy is determined under different acquisition protocols, reconstruction phases, and phantom trajectories. Targeting accuracy of fiducial based image guided gating is assessed by measuring in-phantom field positions for different motions, gating levels and target rotations. Synchronization parameters for gating equipment are also determined. Finally, end-to-end testing is performed to assess overall accuracy and precision of the equipment under controlled conditions. 4DCT limits fiducial geometric distance errors to 2 mm for repeatable target trajectories and to 5 mm for a pseudo-random trajectory. Largest offsets were in the longitudinal direction. If correctly calibrated and synchronized, the IGGRT system tested here can target reproducibly moving tumors with accuracy better than 1.2 mm. Gating level can affect accuracy if target motion is asymmetric about the

  8. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J. Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients

  9. Color accuracy and reproducibility in whole slide imaging scanners

    PubMed Central

    Shrestha, Prarthana; Hulsken, Bas

    2014-01-01

    Abstract We propose a workflow for color reproduction in whole slide imaging (WSI) scanners, such that the colors in the scanned images match to the actual slide color and the inter-scanner variation is minimum. We describe a new method of preparation and verification of the color phantom slide, consisting of a standard IT8-target transmissive film, which is used in color calibrating and profiling the WSI scanner. We explore several International Color Consortium (ICC) compliant techniques in color calibration/profiling and rendering intents for translating the scanner specific colors to the standard display (sRGB) color space. Based on the quality of the color reproduction in histopathology slides, we propose the matrix-based calibration/profiling and absolute colorimetric rendering approach. The main advantage of the proposed workflow is that it is compliant to the ICC standard, applicable to color management systems in different platforms, and involves no external color measurement devices. We quantify color difference using the CIE-DeltaE2000 metric, where DeltaE values below 1 are considered imperceptible. Our evaluation on 14 phantom slides, manufactured according to the proposed method, shows an average inter-slide color difference below 1 DeltaE. The proposed workflow is implemented and evaluated in 35 WSI scanners developed at Philips, called the Ultra Fast Scanners (UFS). The color accuracy, measured as DeltaE between the scanner reproduced colors and the reference colorimetric values of the phantom patches, is improved on average to 3.5 DeltaE in calibrated scanners from 10 DeltaE in uncalibrated scanners. The average inter-scanner color difference is found to be 1.2 DeltaE. The improvement in color performance upon using the proposed method is apparent with the visual color quality of the tissue scans. PMID:26158041
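The paper quantifies color error with CIE-DeltaE2000, whose full formula is lengthy. As an illustration of the underlying idea only, the sketch below computes the much simpler CIE76 difference (Euclidean distance in CIELAB space), which shares the property that values near 1 sit at the threshold of perceptibility; the patch coordinates are invented:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two
    (L*, a*, b*) coordinates. (The paper uses the more elaborate
    CIEDE2000 formula; CIE76 is shown only to illustrate the idea
    of a perceptual color distance.)"""
    return math.dist(lab1, lab2)

reference = (52.0, 41.5, 28.0)   # hypothetical phantom patch (L*, a*, b*)
scanned   = (52.6, 42.1, 27.5)   # hypothetical scanner-reproduced color
print(f"DeltaE(76) = {delta_e_76(reference, scanned):.2f}")
```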

  10. [History, accuracy and precision of SMBG devices].

    PubMed

    Dufaitre-Patouraux, L; Vague, P; Lassmann-Vague, V

    2003-04-01

Self-monitoring of blood glucose began only about fifty years ago. Until then, metabolic control was evaluated by means of qualitative urinary glucose tests of often poor reliability. Reagent strips were the first semi-quantitative tests for monitoring blood glucose, and in the late 1970s meters were launched on the market. Initially intended for medical staff, these devices became easier to handle and increasingly suitable for patients, and they are now an essential tool for self-monitoring of blood glucose. Advances in technology first enabled photometric measurement and, more recently, electrochemical measurement. In the 1990s, improvements were made mainly in the miniaturisation of meters, the reduction of reaction and reading times, and the simplification of blood sampling and application of capillary blood. Although accuracy and precision were central concerns from the beginning of self-monitoring of blood glucose, recommendations from diabetology societies only appeared in the late 1980s. The French drug agency, AFSSAPS, now requires that meters be evaluated before any launch on the market. According to recent publications, very few meters meet the reliability criteria established by diabetology societies in the late 1990s. Finally, because devices in hospitals may be handled by numerous people, the use of meters as a possible source of nosocomial infection has recently been questioned and is the subject of very strict guidelines published by AFSSAPS.

  11. Establishing precision and accuracy in PDV results

    SciTech Connect

    Briggs, Matthew E.; Howard, Marylesa; Diaz, Abel

    2016-04-19

We need to know uncertainties and systematic errors because we create and compare against archival weapons data, we constrain the models, and we provide scientific results. Good estimates of precision from the data record are available and should be incorporated into existing results; reanalysis of valuable data is suggested. Estimates of systematic errors are largely absent. The original work by Jensen et al. using gun shots for window corrections, and the integrated velocity comparison with X-rays by Schultz, are two examples where any systematic errors appear to be at the <1% level.

  12. Examination of the Position Accuracy of Implant Abutments Reproduced by Intra-Oral Optical Impression

    PubMed Central

    Odaira, Chikayuki; Kobayashi, Takuya; Kondo, Hisatomo

    2016-01-01

An impression technique called optical impression, using an intraoral scanner, has attracted attention in digital dentistry. This study aimed to evaluate the accuracy of the optical impression by comparing a virtual model reproduced by an intraoral scanner to a working cast made by the conventional silicone impression technique. Two implants were placed on a master model. Working casts made of plaster were fabricated from the master model by silicone impression. The distance between the ball abutments and the angulation between the healing abutments of 5 mm and 7 mm height on the master model were measured using a Computer Numerical Control Coordinate Measuring Machine (CNCCMM) as control. Working casts were then measured using the CNCCMM, and virtual models, via stereolithography data of the master model, were measured with three-dimensional analysis software. The distance between ball abutments of the master model was 9634.9 ± 1.2 μm. The mean trueness values of the Lava COS and the working casts were 64.5 μm and 22.5 μm greater than the control, respectively. The mean precision values of the Lava COS and the working casts were 15.6 μm and 13.5 μm, respectively. In the case of a 5-mm-height healing abutment, the mean angulation error of the Lava COS was greater than that of the working cast, resulting in significant differences in trueness and precision. However, in the case of a 7-mm-height abutment, the mean angulation errors of the Lava COS and the working cast were not significantly different in trueness or precision. Therefore, distance errors of the optical impression were slightly greater than those of the conventional impression. Moreover, the trueness and precision of angulation error could be improved in the optical impression by using longer healing abutments. In the near future, the development of information technology could enable improvement in the accuracy of the optical impression with intraoral scanners. PMID:27706225
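The abstract's use of "trueness" (deviation of the mean from a reference value) and "precision" (spread of the replicates) follows the ISO 5725 convention. A minimal sketch of both statistics, with invented replicate distances against the reference span reported above:

```python
import statistics

def trueness_and_precision(measurements_um, reference_um):
    """Trueness: absolute deviation of the mean from the reference
    value. Precision: standard deviation of the replicates.
    (ISO 5725-style usage, as echoed in the abstract.)"""
    mean = statistics.mean(measurements_um)
    return abs(mean - reference_um), statistics.stdev(measurements_um)

# Hypothetical inter-abutment distances (micrometres) from five
# repeated digital impressions of a 9634.9 um reference span.
scans = [9700.1, 9688.4, 9712.3, 9695.0, 9706.2]
t, p = trueness_and_precision(scans, 9634.9)
print(f"trueness = {t:.1f} um, precision = {p:.1f} um")
```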

  13. Precision and Accuracy of Topography Measurements on Europa

    NASA Astrophysics Data System (ADS)

    Greenberg, R.; Hurford, T. A.; Foley, M. A.; Varland, K.

    2007-03-01

    Reports of the death of the melt-through model for chaotic terrain on Europa have been greatly exaggerated, to paraphrase Mark Twain. They are based on topographic maps of insufficient quantitative accuracy and precision.

  14. A study of laseruler accuracy and precision (1986-1987)

    SciTech Connect

    Ramachandran, R.S.; Armstrong, K.P.

    1989-06-22

A study was conducted to investigate Laseruler accuracy and precision. Tests were performed on 0.050 in., 0.100 in., and 0.120 in. gauge block standards. Results showed an accuracy of 3.7 µin. for the 0.120 in. standard, with higher accuracies for the two thinner blocks. The Laseruler precision was 4.83 µin. for the 0.120 in. standard, 3.83 µin. for the 0.100 in. standard, and 4.2 µin. for the 0.050 in. standard.

  15. On precision and accuracy (bias) statements for measurement procedures

    SciTech Connect

    Bruckner, L.A.; Hume, M.W.; Delvin, W.L.

    1988-01-01

Measurement procedures are often required to contain precision and accuracy (bias) statements. This paper contains a glossary that explains various terms that often appear in these statements, as well as an example illustrating such statements for a specific set of data. Precision and bias statements are shown to vary according to the conditions under which the data were collected. This paper emphasizes that the error model (an algebraic expression that describes how the various sources of variation affect the measurement) is an important consideration in the formation of precision and bias statements.
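The error-model idea can be made concrete with a small simulation: each result is the true value plus a method bias, a between-laboratory effect, and a within-laboratory random error, and the two variance components drive reproducibility and repeatability respectively. All numbers below are illustrative, not from the paper:

```python
import random
import statistics

random.seed(7)

# Sketch of an error model: x = mu + bias + lab_effect + epsilon,
# where lab_effect varies between laboratories (reproducibility)
# and epsilon varies within a laboratory (repeatability).
MU, BIAS = 100.0, 1.5        # true value and method bias
LAB_SD, WITHIN_SD = 2.0, 0.8 # between-lab and within-lab SDs

def simulate_lab(n):
    """Draw n results from one laboratory sharing a lab effect."""
    lab_effect = random.gauss(0, LAB_SD)
    return [MU + BIAS + lab_effect + random.gauss(0, WITHIN_SD)
            for _ in range(n)]

labs = [simulate_lab(10) for _ in range(8)]
within = statistics.mean(statistics.stdev(lab) for lab in labs)
between = statistics.stdev(statistics.mean(lab) for lab in labs)
print(f"repeatability SD ~ {within:.2f}, between-lab SD ~ {between:.2f}")
```

Precision statements quoted under "repeatability conditions" reflect only the within-lab term, while "reproducibility conditions" fold in the between-lab term as well, which is why the statements vary with how the data were collected.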

  16. Accuracy and precision of temporal artery thermometers in febrile patients.

    PubMed

    Wolfson, Margaret; Granstrom, Patsy; Pomarico, Bernie; Reimanis, Cathryn

    2013-01-01

    The noninvasive temporal artery thermometer offers a way to measure temperature when oral assessment is contraindicated, uncomfortable, or difficult to obtain. In this study, the accuracy and precision of the temporal artery thermometer exceeded levels recommended by experts for use in acute care clinical practice.

  17. Accuracy, precision, and lower detection limits (a deficit reduction approach)

    SciTech Connect

    Bishop, C.T.

    1993-10-12

The evaluation of the accuracy, precision and lower detection limits of the determination of trace radionuclides in environmental samples can become quite sophisticated and time consuming. This in turn could add significant cost to the analyses being performed. In the present method, a "deficit reduction approach" has been taken to keep costs low, but at the same time provide defensible data. In order to measure the accuracy of a particular method, reference samples are measured over the time period that the actual samples are being analyzed. Using a Lotus spreadsheet, data are compiled and an average accuracy is computed. If pairs of reference samples are analyzed, then precision can also be evaluated from the duplicate data sets. The standard deviation can be calculated if the reference concentrations of the duplicates are all in the same general range. Laboratory blanks are used to estimate the lower detection limits. The lower detection limit is calculated as 4.65 times the standard deviation of a set of blank determinations made over a given period of time. A Lotus spreadsheet is again used to compile data, and LDLs over different periods of time can be compared.
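The abstract's two spreadsheet computations are easy to reproduce: average accuracy from reference-sample recoveries, and the lower detection limit as 4.65 times the standard deviation of repeated blanks. The data below are hypothetical:

```python
import statistics

def lower_detection_limit(blank_results):
    """LDL as defined in the abstract: 4.65 times the standard
    deviation of repeated blank determinations."""
    return 4.65 * statistics.stdev(blank_results)

def average_accuracy(measured, certified):
    """Average recovery of reference-sample results, expressed as
    a fraction of the certified concentration."""
    return statistics.mean(m / certified for m in measured)

# Hypothetical blank and reference-sample activities (arbitrary units).
blanks = [0.10, 0.14, 0.08, 0.12, 0.11, 0.09]
print(f"LDL = {lower_detection_limit(blanks):.3f}")
print(f"accuracy = {average_accuracy([4.8, 5.1, 4.9, 5.2], 5.0):.2%}")
```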

  18. Assessing the accuracy and reproducibility of modality independent elastography in a murine model of breast cancer

    PubMed Central

    Weis, Jared A.; Flint, Katelyn M.; Sanchez, Violeta; Yankeelov, Thomas E.; Miga, Michael I.

    2015-01-01

    Abstract. Cancer progression has been linked to mechanics. Therefore, there has been recent interest in developing noninvasive imaging tools for cancer assessment that are sensitive to changes in tissue mechanical properties. We have developed one such method, modality independent elastography (MIE), that estimates the relative elastic properties of tissue by fitting anatomical image volumes acquired before and after the application of compression to biomechanical models. The aim of this study was to assess the accuracy and reproducibility of the method using phantoms and a murine breast cancer model. Magnetic resonance imaging data were acquired, and the MIE method was used to estimate relative volumetric stiffness. Accuracy was assessed using phantom data by comparing to gold-standard mechanical testing of elasticity ratios. Validation error was <12%. Reproducibility analysis was performed on animal data, and within-subject coefficients of variation ranged from 2 to 13% at the bulk level and 32% at the voxel level. To our knowledge, this is the first study to assess the reproducibility of an elasticity imaging metric in a preclinical cancer model. Our results suggest that the MIE method can reproducibly generate accurate estimates of the relative mechanical stiffness and provide guidance on the degree of change needed in order to declare biological changes rather than experimental error in future therapeutic studies. PMID:26158120

  19. A newly developed peripheral anterior chamber depth analysis system: principle, accuracy, and reproducibility

    PubMed Central

    Kashiwagi, K; Kashiwagi, F; Toda, Y; Osada, K; Tsumura, T; Tsukahara, S

    2004-01-01

Aim: To develop a new, non-contact system for measuring anterior chamber depth (ACD) quantitatively, and to investigate its accuracy as well as interobserver and intraobserver reproducibility. Methods: The system scanned the ACD from the optical axis to the limbus in approximately 0.5 seconds and took 21 consecutive slit lamp images at 0.4 mm intervals. An installed computer program automatically evaluated the ACD, central corneal thickness (CT), and corneal radius of curvature (CRC) instantly. A dummy eye was used for investigating measurement accuracy. The effects of CT and CRC on the measurement results were examined using a computer simulation model to minimise measurement errors. Three examiners measured the ACD in 10 normal eyes, and interobserver and intraobserver reproducibility was analysed. Results: The ACD values measured by this system were very similar to theoretical values. An increase in CRC or a decrease in CT decreased the ACD, and vice versa. Data calibration using the evaluated CT and CRC successfully reduced measurement errors. Intraobserver and interobserver variations were small. Their coefficient of variation values were 7.4% (SD 2.3%) and 6.7% (0.7%), and these values tended to increase with distance from the optical axis. Conclusion: The current system can measure ACD with high accuracy as well as high intraobserver and interobserver reproducibility. It has potential use in measuring ACD quantitatively and in screening subjects with narrow angles. PMID:15258020

  20. The Plus or Minus Game - Teaching Estimation, Precision, and Accuracy

    NASA Astrophysics Data System (ADS)

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-03-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in TPT (Larry Weinstein's "Fermi Questions.") For several years the authors (a college physics professor, a retired algebra teacher, and a fifth-grade teacher) have been playing a game, primarily at home to challenge each other for fun, but also in the classroom as an educational tool. We call the game "The Plus or Minus Game." The game combines estimation with the principle of precision and uncertainty in a competitive and fun way.

  1. Calibration, linearity, precision, and accuracy of a PIXE system

    NASA Astrophysics Data System (ADS)

    Richter, F.-W.; Wätjen, U.

    1984-04-01

An accuracy and precision of better than 10% each can be achieved with PIXE analysis, with both thin and thick samples. Measures we took to obtain these values for routine analyses in the Marburg PIXE system are discussed. The advantages of an experimental calibration procedure, using thin evaporated standard foils, over the "absolute" method of employing X-ray production cross sections are outlined. The importance of X-ray line intensity ratios, even of weak transitions, for the accurate analysis of interfering elements of low mass content is demonstrated for the Se Kα-Pb Lη line overlap. Matrix effects including secondary excitation can be corrected for very well without degrading accuracy under certain conditions.

  2. Fluorescence Axial Localization with Nanometer Accuracy and Precision

    SciTech Connect

    Li, Hui; Yen, Chi-Fu; Sivasankar, Sanjeevi

    2012-06-15

    We describe a new technique, standing wave axial nanometry (SWAN), to image the axial location of a single nanoscale fluorescent object with sub-nanometer accuracy and 3.7 nm precision. A standing wave, generated by positioning an atomic force microscope tip over a focused laser beam, is used to excite fluorescence; axial position is determined from the phase of the emission intensity. We use SWAN to measure the orientation of single DNA molecules of different lengths, grafted on surfaces with different functionalities.

  3. Reproducibility and accuracy of optic nerve sheath diameter assessment using ultrasound compared to magnetic resonance imaging

    PubMed Central

    2013-01-01

    Background Quantification of the optic nerve sheath diameter (ONSD) by transbulbar sonography is a promising non-invasive technique for the detection of altered intracranial pressure. In order to establish this method as follow-up tool in diseases with intracranial hyper- or hypotension scan-rescan reproducibility and accuracy need to be systematically investigated. Methods The right ONSD of 15 healthy volunteers (mean age 24.5 ± 0.8 years) were measured by both transbulbar sonography (9 – 3 MHz) and 3 Tesla MRI (half-Fourier acquisition single-shot turbo spin-echo sequences, HASTE) 3 and 5 mm behind papilla. All volunteers underwent repeated ultrasound and MRI examinations in order to assess scan-rescan reproducibility and accuracy. Moreover, inter- and intra-observer variabilities were calculated for both techniques. Results Scan-rescan reproducibility was robust for ONSD quantification by sonography and MRI at both depths (r > 0.75, p ≤ 0.001, mean differences < 2%). Comparing ultrasound- and MRI-derived ONSD values, we found acceptable agreement between both methods for measurements at a depth of 3 mm (r = 0.72, p = 0.002, mean difference < 5%). Further analyses revealed good inter- and intra-observer reliability for sonographic measurements 3 mm behind the papilla and for MRI at 3 and 5 mm (r > 0.82, p < 0.001, mean differences < 5%). Conclusions Sonographic ONSD quantification 3 mm behind the papilla can be performed with good reproducibility, measurement accuracy and observer agreement. Thus, our findings emphasize the feasibility of this technique as a non-invasive bedside tool for longitudinal ONSD measurements. PMID:24289136
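The reproducibility statistics reported above (correlation r between scan and rescan, and mean percentage difference) can be sketched with a short computation; the ONSD values below are invented for illustration:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between paired scan/rescan values."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def mean_difference_pct(scan, rescan):
    """Mean absolute scan-rescan difference, as a percentage of
    the pairwise mean (Bland-Altman-style summary)."""
    return 100 * statistics.mean(
        abs(a - b) / ((a + b) / 2) for a, b in zip(scan, rescan))

# Hypothetical ONSD values (mm) from two scanning sessions.
scan1 = [5.1, 4.8, 5.4, 5.0, 4.6, 5.2]
scan2 = [5.0, 4.9, 5.3, 5.1, 4.6, 5.3]
print(f"r = {pearson_r(scan1, scan2):.2f}, "
      f"mean difference = {mean_difference_pct(scan1, scan2):.1f}%")
```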

  4. Precise interferometric length and phase-change measurement of gauge blocks based on reproducible wringing.

    PubMed

    Titov, A; Malinovsky, I; Belaïdi, H; França, R S; Massone, C A

    2000-02-01

    A modern fringe-pattern-analyzing interferometer with a resolution of 1 × 10⁻⁹, and with systematic uncertainties owing to optic effects of less than 1 nm, was used to test a new method of interferometric length measurement based on a combination of the reproducible-wringing and slave-block techniques. Measurements without excessive wringing-film error are demonstrated for blocks with nominal lengths of 2–6 mm and with high surface flatness. The uncertainty achieved for these blocks is less than 1 nm. Deformations of steel gauge blocks and reference platens, caused by wringing forces, are investigated, and the necessary conditions for reproducible wringing are outlined. A subnanometer uncertainty level in phase-change-correction measurements has been achieved for gauge blocks as long as 100 mm. Limitations on the accuracy of the standard method of interferometric length measurement and shortcomings of the present definition of the length of the material artifact are emphasized.

  5. On the accuracy and reproducibility of fiber optic (FO) and infrared (IR) temperature measurements of solid materials in microwave applications

    NASA Astrophysics Data System (ADS)

    Durka, Tomasz; Stefanidis, Georgios D.; Van Gerven, Tom; Stankiewicz, Andrzej

    2010-04-01

    The accuracy and reproducibility of temperature measurements in solid materials under microwave heating are investigated in this work using two of the most widely used temperature measurement techniques, namely fiber optic (FO) probes and infrared (IR) sensors. Two solid materials with a wide range of applications in heterogeneous catalysis and different microwave absorbing capabilities are examined: CeO2-ZrO2 and Al2O3 particles. We investigate a number of effects ranging from purely technical issues, such as the use of a glass probe guide, through process operation parameters, such as the kind and the volume of the heated sample, to measurement-related issues, such as the exact location of the probe in the sample. In this frame, the FO and IR methods are benchmarked. It was found that when using bare FO probes, not only is their lifetime reduced but the reproducibility of the results is also compromised. Using a glass probe guide greatly assists in precise location of the probe in the sample, resulting in more reproducible temperature measurements. The FO reproducibility, though, decreases with increasing temperature. In addition, contrary to conventional heating, the sample temperature decreases with decreasing sample mass (and volume) at constant irradiation power level, confirming the volumetric nature of microwave heating. Furthermore, a strongly non-uniform temperature field develops in the reactor despite the use of a monomode cavity and small sample amounts. These temperature variations, which depend on sample volume and probe position, can only be detected by FO. In contrast, IR, which actually measures temperature at the exterior of the reactor wall, remains nearly insensitive to them and consistently underestimates the real temperature in the reactor. The modeler and the experimentalist should be circumspect in accepting the IR output as a representative reactor temperature.

  6. A comprehensive investigation of the accuracy and reproducibility of a multitarget single isocenter VMAT radiosurgery technique

    PubMed Central

    Thomas, Andrew; Niebanck, Michael; Juang, Titania; Wang, Zhiheng; Oldham, Mark

    2013-01-01

    treatment plan, demonstrating high accuracy and reproducibility of both the treatment machine and the IGRT procedure. The complexity of the treatment (multiple arcs) and dosimetry (multiple strong gradients) pose a substantial challenge for comprehensive verification. 3D dosimetry can be uniquely effective in this scenario. PMID:24320511

  7. A comprehensive investigation of the accuracy and reproducibility of a multitarget single isocenter VMAT radiosurgery technique

    SciTech Connect

    Thomas, Andrew; Niebanck, Michael; Juang, Titania; Wang, Zhiheng; Oldham, Mark

    2013-12-15

    matched the treatment plan, demonstrating high accuracy and reproducibility of both the treatment machine and the IGRT procedure. The complexity of the treatment (multiple arcs) and dosimetry (multiple strong gradients) pose a substantial challenge for comprehensive verification. 3D dosimetry can be uniquely effective in this scenario.

  8. Root ZX Electronic Foramen Locator: An Ex Vivo Study of Its Three Models' Precision and Reproducibility

    PubMed Central

    Reinaldo, Rafael Santos; Frota, Luciana Maria Arcanjo; do Vale, Mônica Sampaio

    2017-01-01

    Although Root ZX is considered the gold-standard electronic foramen locator (EFL), two variations of this device have been launched, albeit without different operating mechanisms. This investigation aims to evaluate the precision of the Root ZX (RZX), Root ZX II (RII), and Root ZX Mini (RM) EFLs. After access cavity preparation, 32 mandibular single-rooted human premolars had their real length measured with the aid of a #15 K-type manual file under magnification (25x). Electronic measurements were performed by the devices in alternating order until the apical foramen was reached (0.0). Each measurement was performed with the file adjusted to the real length of the tooth and verified with a digital caliper. The accuracy of the EFLs was 68.8% (RZX), 65.8% (RII), and 68.8% (RM), considering ±0.5 mm as the margin of tolerance. The mean errors of the devices were 0.37 ± 0.25 mm (RZX), 0.41 ± 0.34 mm (RII), and 0.32 ± 0.28 mm (RM). ANOVA and the Tukey test were applied to analyze the obtained data, which showed no statistically significant differences among the locators (P > .05). It can be concluded that the three tested devices provided precise measurements of the real length of the canal, with no performance differences among them. PMID:28367215

  9. Root ZX Electronic Foramen Locator: An Ex Vivo Study of Its Three Models' Precision and Reproducibility.

    PubMed

    Aguiar, Bernardo Almeida; Reinaldo, Rafael Santos; Frota, Luciana Maria Arcanjo; do Vale, Mônica Sampaio; de Vasconcelos, Bruno Carvalho

    2017-01-01

    Although Root ZX is considered the gold-standard electronic foramen locator (EFL), two variations of this device have been launched, albeit without different operating mechanisms. This investigation aims to evaluate the precision of the Root ZX (RZX), Root ZX II (RII), and Root ZX Mini (RM) EFLs. After access cavity preparation, 32 mandibular single-rooted human premolars had their real length measured with the aid of a #15 K-type manual file under magnification (25x). Electronic measurements were performed by the devices in alternating order until the apical foramen was reached (0.0). Each measurement was performed with the file adjusted to the real length of the tooth and verified with a digital caliper. The accuracy of the EFLs was 68.8% (RZX), 65.8% (RII), and 68.8% (RM), considering ±0.5 mm as the margin of tolerance. The mean errors of the devices were 0.37 ± 0.25 mm (RZX), 0.41 ± 0.34 mm (RII), and 0.32 ± 0.28 mm (RM). ANOVA and the Tukey test were applied to analyze the obtained data, which showed no statistically significant differences among the locators (P > .05). It can be concluded that the three tested devices provided precise measurements of the real length of the canal, with no performance differences among them.
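
    The tolerance-based accuracy and mean-error statistics used in EFL studies of this kind are straightforward to compute. A minimal sketch, with invented canal lengths and device readings (not the study's data):

```python
import numpy as np

def efl_accuracy(electronic, real, tol=0.5):
    """Percentage of readings within ±tol mm of the real length,
    plus mean and sample SD of the absolute error."""
    err = np.abs(np.asarray(electronic, float) - np.asarray(real, float))
    within = np.mean(err <= tol) * 100
    return within, err.mean(), err.std(ddof=1)

# Hypothetical canal lengths (mm): real vs. device readings
real   = [21.0, 19.5, 22.0, 20.5, 18.0]
device = [21.3, 19.4, 22.8, 20.5, 18.2]
pct_within, mean_err, sd_err = efl_accuracy(device, real)
```

    Reporting both the tolerance-based percentage and the mean ± SD error, as the abstract does, separates clinical acceptability from raw measurement error.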

  10. Improved DORIS accuracy for precise orbit determination and geodesy

    NASA Technical Reports Server (NTRS)

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

    In 2001 and 2002, 3 more DORIS satellites were launched. Since then, all DORIS results have improved significantly. For precise orbit determination, 20 cm accuracy is now available in real time with DIODE, and 1.5 to 2 cm in post-processing. For geodesy, 1 cm precision can now be achieved regularly every week, making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  11. Training to Improve Precision and Accuracy in the Measurement of Fiber Morphology

    PubMed Central

    Jeon, Jun; Wade, Mary Beth; Luong, Derek; Palmer, Xavier-Lewis; Bharti, Kapil; Simon, Carl G.

    2016-01-01

    An estimated $7.1 billion a year is lost to irreproducibility in pre-clinical data arising from errors in data analysis and reporting. Therefore, developing tools to improve measurement comparability is paramount. Recently, an open source tool, DiameterJ, has been deployed for the automated analysis of scanning electron micrographs of fibrous scaffolds designed for tissue engineering applications. DiameterJ performs hundreds to thousands of scaffold fiber diameter measurements from a single micrograph within a few seconds, along with a variety of other scaffold morphological features, which enables a more rigorous and thorough assessment of scaffold properties. Herein, an online, publicly available training module is introduced for educating DiameterJ users on how to effectively analyze scanning electron micrographs of fibers and the large volume of data that a DiameterJ analysis yields. The end goal of this training was to improve user data analysis and reporting to enhance reproducibility of analysis of nanofiber scaffolds. User performance was assessed before and after training to evaluate the effectiveness of the training modules. Users were asked to use DiameterJ to analyze reference micrographs of fibers that had known diameters. The results showed that training improved the accuracy and precision of measurements of fiber diameter in scanning electron micrographs. Training also improved the precision of measurements of pore area, porosity, intersection density, and characteristic fiber length between fiber intersections. These results demonstrate that the DiameterJ training module improves precision and accuracy in fiber morphology measurements, which will lead to enhanced data comparability. PMID:27907145

  12. Accuracy, reproducibility, and interpretation of fatty acid methyl ester profiles of model bacterial communities

    USGS Publications Warehouse

    Kidd, Haack S.; Garchow, H.; Odelson, D.A.; Forney, L.J.; Klug, M.J.

    1994-01-01

    We determined the accuracy and reproducibility of whole-community fatty acid methyl ester (FAME) analysis with two model bacterial communities differing in composition by using the Microbial ID, Inc. (MIDI), system. The biomass, taxonomic structure, and expected MIDI-FAME profiles under a variety of environmental conditions were known for these model communities a priori. Not all members of each community could be detected in the composite profile because of lack of fatty acid 'signatures' in some isolates or because of variations (approximately fivefold) in fatty acid yield across taxa. MIDI-FAME profiles of replicate subsamples of a given community were similar in terms of fatty acid yield per unit of community dry weight and relative proportions of specific fatty acids. Principal-components analysis (PCA) of MIDI-FAME profiles resulted in a clear separation of the two different communities and a clustering of replicates of each community from two separate experiments on the first PCA axis. The first PCA axis accounted for 57.1% of the variance in the data and was correlated with fatty acids that varied significantly between communities and reflected the underlying community taxonomic structure. On the basis of our data, community fatty acid profiles can be used to assess the relative similarities and differences of microbial communities that differ in taxonomic composition. However, detailed interpretation of community fatty acid profiles in terms of biomass or community taxonomic composition must be viewed with caution until our knowledge of the quantitative and qualitative distribution of fatty acids over a wide variety of taxa and the effects of growth conditions on fatty acid profiles is more extensive.
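
    The principal-components step can be illustrated with a small sketch. The fatty-acid proportions below are hypothetical profiles for two artificial communities, not the study's data; PCA is performed here via SVD of the mean-centered matrix rather than a statistics package:

```python
import numpy as np

# Hypothetical relative fatty-acid proportions (rows = replicate profiles,
# columns = fatty acids) for two model communities, A and B
profiles = np.array([
    [0.40, 0.30, 0.20, 0.10],   # community A, replicate 1
    [0.42, 0.28, 0.21, 0.09],   # community A, replicate 2
    [0.10, 0.20, 0.30, 0.40],   # community B, replicate 1
    [0.12, 0.18, 0.31, 0.39],   # community B, replicate 2
])

# PCA by SVD of the mean-centered data; scores on PC1 separate the communities
centered = profiles - profiles.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
pc1_scores = centered @ Vt[0]
explained = S[0]**2 / np.sum(S**2)   # fraction of variance on PC1
```

    Replicates of the same community land close together on PC1 while the two communities separate, which is the pattern the abstract reports for the real MIDI-FAME profiles.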

  13. Mineral element analyses of switchgrass biomass: comparison of the accuracy and precision of laboratories

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Mineral concentration of plant biomass can affect its use in thermal conversion to energy. The objective of this study was to compare the precision and accuracy of university and private laboratories that conduct mineral analyses of plant biomass on a fee basis. Accuracy and precision of the laborat...

  14. S-193 scatterometer backscattering cross section precision/accuracy for Skylab 2 and 3 missions

    NASA Technical Reports Server (NTRS)

    Krishen, K.; Pounds, D. J.

    1975-01-01

    Procedures for measuring the precision and accuracy with which the S-193 scatterometer measured the background cross section of ground scenes are described. Homogeneous ground sites were selected, and data from Skylab missions were analyzed. The precision was expressed as the standard deviation of the scatterometer-acquired backscattering cross section. In special cases, inference of the precision of measurement was made by considering the total range from the maximum to minimum of the backscatter measurements within a data segment, rather than the standard deviation. For Skylab 2 and 3 missions a precision better than 1.5 dB is indicated. This procedure indicates an accuracy of better than 3 dB for the Skylab 2 and 3 missions. The estimates of precision and accuracy given in this report are for backscattering cross sections from -28 to 18 dB. Outside this range the precision and accuracy decrease significantly.

  15. Accuracy and Precision of GPS Carrier-Phase Clock Estimates

    DTIC Science & Technology

    2001-01-01

    "Geodesy using the Global Positioning System: The effects of signal scattering on estimates of site positions," Journal of Geophysical Research...maia.usno.navy.mil Abstract: The accuracy of GPS-based clock estimates is determined by the pseudorange data. For 24-hour arcs of global data sampled...ps) for 1-day integrations. Assuming such positioning results can be realized also as equivalent light-travel times, the potential of GPS carrier

  16. Spectropolarimetry with PEPSI at the LBT: accuracy vs. precision in magnetic field measurements

    NASA Astrophysics Data System (ADS)

    Ilyin, Ilya; Strassmeier, Klaus G.; Woche, Manfred; Hofmann, Axel

    2009-04-01

    We present the design of the new PEPSI spectropolarimeter to be installed at the Large Binocular Telescope (LBT) in Arizona to measure the full set of Stokes parameters in spectral lines, and outline its precision and accuracy-limiting factors.

  17. Precision and Accuracy in Measurements: A Tale of Four Graduated Cylinders.

    ERIC Educational Resources Information Center

    Treptow, Richard S.

    1998-01-01

    Expands upon the concepts of precision and accuracy at a level suitable for general chemistry. Serves as a bridge to the more extensive treatments in analytical chemistry textbooks and the advanced literature on error analysis. Contains 22 references. (DDR)

  18. Analysis of factors affecting the accuracy, reproducibility, and interpretation of microbial community carbon source utilization patterns

    USGS Publications Warehouse

    Haack, S.K.; Garchow, H.; Klug, M.J.; Forney, L.J.

    1995-01-01

    We determined factors that affect responses of bacterial isolates and model bacterial communities to the 95 carbon substrates in Biolog microtiter plates. For isolates and communities of three to six bacterial strains, substrate oxidation rates were typically nonlinear and were delayed by dilution of the inoculum. When inoculum density was controlled, patterns of positive and negative responses exhibited by microbial communities to each of the carbon sources were reproducible. Rates and extents of substrate oxidation by the communities were also reproducible but were not simply the sum of those exhibited by community members when tested separately. Replicates of the same model community clustered when analyzed by principal-components analysis (PCA), and model communities with different compositions were clearly separated on the first PCA axis, which accounted for >60% of the dataset variation. PCA discrimination among different model communities depended on the extent to which specific substrates were oxidized. However, the substrates interpreted by PCA to be most significant in distinguishing the communities changed with reading time, reflecting the nonlinearity of substrate oxidation rates. Although whole-community substrate utilization profiles were reproducible signatures for a given community, the extent of oxidation of specific substrates and the numbers or activities of microorganisms using those substrates in a given community were not correlated. Replicate soil samples varied significantly in the rate and extent of oxidation of seven tested substrates, suggesting microscale heterogeneity in composition of the soil microbial community.

  19. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    PubMed Central

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-01-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505

  20. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    NASA Astrophysics Data System (ADS)

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-05-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis.

  1. Bloch-Siegert B1-Mapping Improves Accuracy and Precision of Longitudinal Relaxation Measurements in the Breast at 3 T.

    PubMed

    Whisenant, Jennifer G; Dortch, Richard D; Grissom, William; Kang, Hakmook; Arlinghaus, Lori R; Yankeelov, Thomas E

    2016-12-01

    Variable flip angle (VFA) sequences are a popular method of calculating T1 values, which are required in a quantitative analysis of dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI). B1 inhomogeneities are substantial in the breast at 3 T, and these errors negatively impact the accuracy of the VFA approach, thus leading to large errors in the DCE-MRI parameters that could limit clinical adoption of the technique. This study evaluated the ability of Bloch-Siegert B1 mapping to improve the accuracy and precision of VFA-derived T1 measurements in the breast. Test-retest MRI sessions were performed on 16 women with no history of breast disease. T1 was calculated using the VFA sequence, and B1 field variations were measured using the Bloch-Siegert methodology. As a gold standard, inversion recovery (IR) measurements of T1 were performed. Fibroglandular tissue and adipose tissue from each breast were segmented using the IR images, and the mean T1 was calculated for each tissue. Accuracy was evaluated by percent error (%err). Reproducibility was assessed via the 95% confidence interval (CI) of the mean difference and repeatability coefficient (r). After B1 correction, %err significantly (P < .001) decreased from 17% to 8.6%, and the 95% CI and r decreased from ±94 to ±38 milliseconds and from 276 to 111 milliseconds, respectively. Similar accuracy and reproducibility results were observed in the adipose tissue of the right breast and in both tissues of the left breast. Our data show that Bloch-Siegert B1 mapping improves accuracy and precision of VFA-derived T1 measurements in the breast.
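
    The VFA linearization and its sensitivity to B1 error can be sketched numerically. This is the standard DESPOT1-style fit for spoiled gradient-echo (SPGR) data, not the authors' exact pipeline; the T1, TR, flip angles, and B1 overshoot below are illustrative:

```python
import numpy as np

def vfa_t1(signals, nominal_flips_deg, tr_ms, b1=1.0):
    """Estimate T1 (ms) from SPGR signals at several flip angles.

    Uses the standard linearization S/sin(a) = E1 * S/tan(a) + M0*(1 - E1),
    where a = b1 * nominal flip angle and E1 = exp(-TR/T1). b1 is the
    relative transmit field (1.0 = nominal)."""
    a = b1 * np.deg2rad(np.asarray(nominal_flips_deg, float))
    s = np.asarray(signals, float)
    slope, _ = np.polyfit(s / np.tan(a), s / np.sin(a), 1)  # slope = E1
    return -tr_ms / np.log(slope)

# Simulate SPGR signals for T1 = 1400 ms, TR = 8 ms, with a 20% B1 overshoot
tr, t1_true, b1_true = 8.0, 1400.0, 1.2
flips = np.array([2.0, 10.0, 20.0])
a_true = b1_true * np.deg2rad(flips)
e1 = np.exp(-tr / t1_true)
sig = np.sin(a_true) * (1 - e1) / (1 - e1 * np.cos(a_true))

t1_uncorrected = vfa_t1(sig, flips, tr)          # biased by the B1 error
t1_corrected = vfa_t1(sig, flips, tr, b1=1.2)    # recovers ~1400 ms
```

    Feeding the measured B1 scale (here assumed known, as a Bloch-Siegert map would provide) into the fit removes the large T1 bias that the uncorrected fit shows, which is the effect the study quantifies in vivo.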

  2. Factors influencing accuracy and reproducibility of body resistance measurements by foot-to-foot impedancemeters.

    PubMed

    Bousbiat, Sana; Jaffrin, Michel; Assadi, Imen

    2015-01-01

    The electronics of a BodySignal V2 (Tefal, France) foot-to-foot impedancemeter (FFI) was modified to display the foot-to-foot resistance instead of body fat. This device was connected to electrodes of different sizes mounted on a podoscope permitting photographs of the subjects' foot soles and electrodes in order to calculate the contact area between feet and electrodes. The foot-to-foot resistance was found to decrease when the contact area of the feet with the current and voltage electrodes increased. It was also sensitive to foot displacement: a backward move of 5 cm increased the mean resistance by 37 Ω. The resistance reproducibility was tested by asking the subjects to repeat the measurement 10 times by stepping on and off the podoscope. The mean SD of these tests was 0.88% of mean resistance, but it fell to 0.47% when foot position was guided and to 0.29% with transverse voltage electrodes. For good reproducibility, it is important that the voltage electrodes be small and that the scale design facilitates a correct position of the heels on these electrodes.

  3. Precision and accuracy of in vivo bone mineral measurement in rats using dual-energy X-ray absorptiometry.

    PubMed

    Rozenberg, S; Vandromme, J; Neve, J; Aguilera, A; Muregancuro, A; Peretz, A; Kinthaert, J; Ham, H

    1995-01-01

    The aim of this study was to evaluate the precision and accuracy of dual-energy X-ray absorptiometry (DXA) for measuring bone mineral content at different sites of the skeleton in rats. In vitro the reproducibility error was very small (< 1%), but in vivo the intra-observer variability ranged from 0.9% to 6.0%. Several factors have been shown to affect in vivo reproducibility: the reproducibility was better when the results were expressed as bone mineral density (BMD) rather than bone mineral content (BMC), intra-observer variability was better than the inter-observer variability, and a higher error was observed for the tibia compared with that for vertebrae and femur. The accuracy of measurement at the femur and tibia was assessed by comparing the values with ash weight and with biochemically determined calcium content. The correlation coefficients (R) between the in vitro BMC and the dry weight or the calcium content were higher than 0.99 for both the femur and the tibia. SEE ranged between 0.0 g (ash weight) and 2.0 mg (Ca content). Using in vitro BMC, ash weight could be estimated with an accuracy error close to 0 and calcium content with an error ranging between 0.82% and 6.80%. The R values obtained between the in vivo and in vitro BMC were 0.98 and 0.97 respectively for femur and tibia, with SEE of 0.04 and 0.02 g respectively. In conclusion, the in vivo precision of the technique was found to be too low. To be of practical use it is important in the design of experimentation to try to reduce the measurement error.(ABSTRACT TRUNCATED AT 250 WORDS)

  4. Failure of the Woods-Saxon nuclear potential to simultaneously reproduce precise fusion and elastic scattering measurements

    SciTech Connect

    Mukherjee, A.; Hinde, D. J.; Dasgupta, M.; Newton, J. O.; Butt, R. D.; Hagino, K.

    2007-04-15

    A precise fusion excitation function has been measured for the {sup 12}C+{sup 208}Pb reaction at energies around the barrier, allowing the fusion barrier distribution to be extracted. The fusion cross sections at high energies differ significantly from existing fusion data. Coupled reaction channels calculations have been carried out with the code FRESCO. A bare potential previously claimed to uniquely describe a wide range of {sup 12}C+{sup 208}Pb near-barrier reaction channels failed to reproduce the new fusion data. The nuclear potential diffuseness of 0.95 fm which fits the fusion excitation function over a broad energy range fails to reproduce the elastic scattering. A diffuseness of 0.55 fm reproduces the fusion barrier distribution and elastic scattering data, but significantly overpredicts the fusion cross sections at high energies. This may be due to physical processes not included in the calculations. To constrain calculations, it is desirable to have precisely measured fusion cross sections, especially at energies around the barrier.

  5. Statistical methods for conducting agreement (comparison of clinical tests) and precision (repeatability or reproducibility) studies in optometry and ophthalmology.

    PubMed

    McAlinden, Colm; Khadka, Jyoti; Pesudovs, Konrad

    2011-07-01

    The ever-expanding choice of ocular metrology and imaging equipment has driven research into the validity of their measurements. Consequently, studies of the agreement between two instruments or clinical tests have proliferated in the ophthalmic literature. It is important that researchers apply the appropriate statistical tests in agreement studies. Correlation coefficients are hazardous and should be avoided. The 'limits of agreement' method originally proposed by Altman and Bland in 1983 is the statistical procedure of choice. Its step-by-step use and practical considerations in relation to optometry and ophthalmology are detailed in addition to sample size considerations and statistical approaches to precision (repeatability or reproducibility) estimates.
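
    The limits-of-agreement procedure the abstract recommends reduces to a mean difference (bias) and bias ± 1.96 SD of the differences. A minimal sketch; the paired tonometry readings below are hypothetical, not from any study:

```python
import numpy as np

def limits_of_agreement(method_a, method_b):
    """Bland-Altman analysis: bias and 95% limits of agreement."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)            # sample SD of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical intraocular pressure readings (mmHg) from two instruments
goldmann = [14.0, 16.5, 12.0, 18.0, 15.0, 13.5]
rebound  = [14.5, 16.0, 12.5, 18.5, 15.5, 14.5]
bias, lower, upper = limits_of_agreement(goldmann, rebound)
```

    Unlike a correlation coefficient, which only measures association, the interval [lower, upper] states how far two instruments' readings may plausibly disagree for an individual, which is the clinically relevant question.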

  6. [Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].

    PubMed

    Krimmel, M; Kluba, S; Dietz, K; Reinert, S

    2005-03-01

    The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations.

  7. A Comparison of the Astrometric Precision and Accuracy of Double Star Observations with Two Telescopes

    NASA Astrophysics Data System (ADS)

    Alvarez, Pablo; Fishbein, Amos E.; Hyland, Michael W.; Kight, Cheyne L.; Lopez, Hairold; Navarro, Tanya; Rosas, Carlos A.; Schachter, Aubrey E.; Summers, Molly A.; Weise, Eric D.; Hoffman, Megan A.; Mires, Robert C.; Johnson, Jolyon M.; Genet, Russell M.; White, Robin

    2009-01-01

    Using a manual Meade 6" Newtonian telescope and a computerized Meade 10" Schmidt-Cassegrain telescope, students from Arroyo Grande High School measured the well-known separation and position angle of the bright visual double star Albireo. The precision and accuracy of the observations from the two telescopes were compared to each other and to published values of Albireo taken as the standard. It was hypothesized that the larger, computerized telescope would be both more precise and more accurate.

  8. High Interlaboratory Reproducibility and Accuracy of Next-Generation-Sequencing-Based Bacterial Genotyping in a Ring Trial.

    PubMed

    Mellmann, Alexander; Andersen, Paal Skytt; Bletz, Stefan; Friedrich, Alexander W; Kohl, Thomas A; Lilje, Berit; Niemann, Stefan; Prior, Karola; Rossen, John W; Harmsen, Dag

    2017-03-01

    Today, next-generation whole-genome sequencing (WGS) is increasingly used to determine the genetic relationships of bacteria on a nearly whole-genome level for infection control purposes and molecular surveillance. Here, we conducted a multicenter ring trial comprising five laboratories to determine the reproducibility and accuracy of WGS-based typing. The participating laboratories sequenced 20 blind-coded Staphylococcus aureus DNA samples using 250-bp paired-end chemistry for library preparation in a single sequencing run on an Illumina MiSeq sequencer. The run acceptance criteria were sequencing outputs >5.6 Gb and Q30 read quality scores of >75%. Subsequently, spa typing, multilocus sequence typing (MLST), ribosomal MLST, and core genome MLST (cgMLST) were performed by the participants. Moreover, discrepancies in cgMLST target sequences in comparisons with the included and also published sequence of the quality control strain ATCC 25923 were resolved using Sanger sequencing. All five laboratories fulfilled the run acceptance criteria in a single sequencing run without any repetition. Of the 400 total possible typing results, 394 of the reported spa types, sequence types (STs), ribosomal STs (rSTs), and cgMLST cluster types were correct and identical among all laboratories; only six typing results were missing. An analysis of cgMLST allelic profiles corroborated this high reproducibility; only 3 of 183,927 (0.0016%) cgMLST allele calls were wrong. Sanger sequencing confirmed all 12 discrepancies of the ring trial results in comparison with the published sequence of ATCC 25923. In summary, this ring trial demonstrated the high reproducibility and accuracy of current next-generation sequencing-based bacterial typing for molecular surveillance when done with nearly completely locked-down methods.

  9. High Interlaboratory Reproducibility and Accuracy of Next-Generation-Sequencing-Based Bacterial Genotyping in a Ring Trial

    PubMed Central

    Andersen, Paal Skytt; Bletz, Stefan; Friedrich, Alexander W.; Kohl, Thomas A.; Lilje, Berit; Niemann, Stefan; Prior, Karola; Rossen, John W.; Harmsen, Dag

    2017-01-01

    ABSTRACT Today, next-generation whole-genome sequencing (WGS) is increasingly used to determine the genetic relationships of bacteria on a nearly whole-genome level for infection control purposes and molecular surveillance. Here, we conducted a multicenter ring trial comprising five laboratories to determine the reproducibility and accuracy of WGS-based typing. The participating laboratories sequenced 20 blind-coded Staphylococcus aureus DNA samples using 250-bp paired-end chemistry for library preparation in a single sequencing run on an Illumina MiSeq sequencer. The run acceptance criteria were sequencing outputs >5.6 Gb and Q30 read quality scores of >75%. Subsequently, spa typing, multilocus sequence typing (MLST), ribosomal MLST, and core genome MLST (cgMLST) were performed by the participants. Moreover, discrepancies in cgMLST target sequences in comparisons with the included and also published sequence of the quality control strain ATCC 25923 were resolved using Sanger sequencing. All five laboratories fulfilled the run acceptance criteria in a single sequencing run without any repetition. Of the 400 total possible typing results, 394 of the reported spa types, sequence types (STs), ribosomal STs (rSTs), and cgMLST cluster types were correct and identical among all laboratories; only six typing results were missing. An analysis of cgMLST allelic profiles corroborated this high reproducibility; only 3 of 183,927 (0.0016%) cgMLST allele calls were wrong. Sanger sequencing confirmed all 12 discrepancies of the ring trial results in comparison with the published sequence of ATCC 25923. In summary, this ring trial demonstrated the high reproducibility and accuracy of current next-generation sequencing-based bacterial typing for molecular surveillance when done with nearly completely locked-down methods. PMID:28053217

  10. "High-precision, reconstructed 3D model" of skull scanned by conebeam CT: Reproducibility verified using CAD/CAM data.

    PubMed

    Katsumura, Seiko; Sato, Keita; Ikawa, Tomoko; Yamamura, Keiko; Ando, Eriko; Shigeta, Yuko; Ogawa, Takumi

    2016-01-01

    Computed tomography (CT) scanning has recently been introduced into forensic medicine and dentistry. However, the presence of metal restorations in the dentition can adversely affect the quality of three-dimensional reconstruction from CT scans. In this study, we aimed to evaluate the reproducibility of a "high-precision, reconstructed 3D model" obtained from a conebeam CT scan of dentition, a method that might be particularly helpful in forensic medicine. We took conebeam CT and helical CT images of three dry skulls marked with 47 measuring points; reconstructed three-dimensional images; and measured the distances between the points in the 3D images with a computer-aided design/computer-aided manufacturing (CAD/CAM) marker. We found that in comparison with the helical CT, conebeam CT is capable of reproducing measurements closer to those obtained from the actual samples. In conclusion, our study indicated that the image-reproduction from a conebeam CT scan was more accurate than that from a helical CT scan. Furthermore, the "high-precision reconstructed 3D model" facilitates reliable visualization of full-sized oral and maxillofacial regions in both helical and conebeam CT scans.

  11. Sex differences in accuracy and precision when judging time to arrival: data from two Internet studies.

    PubMed

    Sanders, Geoff; Sinclair, Kamila

    2011-12-01

    We report two Internet studies that investigated sex differences in the accuracy and precision of judging time to arrival. We used accuracy to mean the ability to match the actual time to arrival and precision to mean the consistency with which each participant made their judgments. Our task was presented as a computer game in which a toy UFO moved obliquely towards the participant through a virtual three-dimensional space en route to a docking station. The UFO disappeared before docking, and participants pressed their space bar at the precise moment they thought the UFO would have docked. Study 1 showed it was possible to conduct quantitative studies of spatiotemporal judgments in virtual reality via the Internet and confirmed reports that men are more accurate because women underestimate, but found no difference in precision measured as intra-participant variation. Study 2 repeated Study 1 with five additional presentations of one condition to provide a better measure of precision. Again, men were more accurate than women, but there were no sex differences in precision. However, within the coincidence-anticipation timing (CAT) literature, of those studies that report sex differences, a majority found that males are both more accurate and more precise than females. Noting that many CAT studies report no sex differences, we discuss appropriate interpretations of such null findings. While acknowledging that CAT performance may be influenced by experience, we suggest that the sex difference may have originated among our ancestors with the evolutionary selection of men for hunting and women for gathering.

  12. Accuracy and Precision of Partial-Volume Correction in Oncological PET/CT Studies.

    PubMed

    Cysouw, Matthijs C F; Kramer, Gerbrand Maria; Hoekstra, Otto S; Frings, Virginie; de Langen, Adrianus Johannes; Smit, Egbert F; van den Eertwegh, Alfons J M; Oprea-Lager, Daniela E; Boellaard, Ronald

    2016-10-01

    Accurate quantification of tracer uptake in small tumors using PET is hampered by the partial-volume effect as well as by the method of volume-of-interest (VOI) delineation. This study aimed to investigate the effect of partial-volume correction (PVC) combined with several VOI methods on the accuracy and precision of quantitative PET.

  13. Improving the accuracy and precision of cognitive testing in mild dementia.

    PubMed

    Wouters, Hans; Appels, Bregje; van der Flier, Wiesje M; van Campen, Jos; Klein, Martin; Zwinderman, Aeilko H; Schmand, Ben; van Gool, Willem A; Scheltens, Philip; Lindeboom, Robert

    2012-03-01

    The CAMCOG, ADAS-cog, and MMSE, designed to grade global cognitive ability in dementia, have inadequate precision and accuracy in distinguishing mild dementia from normal ageing. Adding neuropsychological tests to their scales might improve precision and accuracy in mild dementia. We, therefore, pooled neuropsychological test-batteries from two memory clinics (ns = 135 and 186) with CAMCOG data from a population study and 2 memory clinics (n = 829) and ADAS-cog data from 3 randomized controlled trials (n = 713) to estimate a common dimension of global cognitive ability using Rasch analysis. Item difficulties and individuals' global cognitive ability levels were estimated. Difficulties of 57 items (of 64) could be validly estimated. Neuropsychological tests were more difficult than the CAMCOG, ADAS-cog, and MMSE items. Most neuropsychological tests had difficulties in the ability range of normal ageing to mild dementia. Higher than average ability levels were more precisely measured when neuropsychological tests were added to the MMSE than when these were measured with the MMSE alone. Diagnostic accuracy in mild dementia was consistently better after adding neuropsychological tests to the MMSE. We conclude that extending dementia-specific instruments with neuropsychological tests improves measurement precision and accuracy of cognitive impairment in mild dementia.

  14. The Plus or Minus Game--Teaching Estimation, Precision, and Accuracy

    ERIC Educational Resources Information Center

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-01-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in "TPT" (Larry Weinstein's "Fermi…

  15. 40 CFR 80.584 - What are the precision and accuracy criteria for approval of test methods for determining the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 16 2011-07-01 2011-07-01 false What are the precision and accuracy....584 What are the precision and accuracy criteria for approval of test methods for determining the sulfur content of motor vehicle diesel fuel, NRLM diesel fuel, and ECA marine fuel? (a) Precision....

  16. 40 CFR 80.584 - What are the precision and accuracy criteria for approval of test methods for determining the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What are the precision and accuracy....584 What are the precision and accuracy criteria for approval of test methods for determining the sulfur content of motor vehicle diesel fuel, NRLM diesel fuel, and ECA marine fuel? (a) Precision....

  17. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    NASA Astrophysics Data System (ADS)

    Ballesteros-Zebadúa, P.; Lárrga-Gutierrez, J. M.; García-Garduño, O. A.; Juárez, J.; Prieto, I.; Moreno-Jiménez, S.; Celis, M. A.

    2008-08-01

    Mechanical precision measurements are fundamental procedures for the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures also serve as quality assurance routines that allow verification of the equipment's geometrical accuracy and precision. In this work, mechanical tests were performed for gantry and table rotation, obtaining mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, isocenter accuracy was shown to be smaller than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with a mask. All uncertainties were below tolerances. The largest contribution to mechanical variation comes from table rotation, so it is important to correct these variations using a localization frame with printed overlays. Knowledge of the mechanical precision allows the statistical errors to be accounted for in the treatment planning volume margins.

  18. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    SciTech Connect

    Ballesteros-Zebadua, P.; Larrga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Juarez, J.; Prieto, I.; Moreno-Jimenez, S.; Celis, M. A.

    2008-08-11

    Mechanical precision measurements are fundamental procedures for the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures also serve as quality assurance routines that allow verification of the equipment's geometrical accuracy and precision. In this work, mechanical tests were performed for gantry and table rotation, obtaining mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, isocenter accuracy was shown to be smaller than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with a mask. All uncertainties were below tolerances. The largest contribution to mechanical variation comes from table rotation, so it is important to correct these variations using a localization frame with printed overlays. Knowledge of the mechanical precision allows the statistical errors to be accounted for in the treatment planning volume margins.

  19. Evaluation of the Accuracy and Precision of a Next Generation Computer-Assisted Surgical System

    PubMed Central

    Dai, Yifei; Liebelt, Ralph A.; Gao, Bo; Gulbransen, Scott W.; Silver, Xeve S.

    2015-01-01

    Background Computer-assisted orthopaedic surgery (CAOS) improves accuracy and reduces outliers in total knee arthroplasty (TKA). However, during the evaluation of CAOS systems, the error generated by the guidance system (hardware and software) has been generally overlooked. Limited information is available on the accuracy and precision of specific CAOS systems with regard to intraoperative final resection measurements. The purpose of this study was to assess the accuracy and precision of a next generation CAOS system and investigate the impact of extra-articular deformity on the system-level errors generated during intraoperative resection measurement. Methods TKA surgeries were performed on twenty-eight artificial knee inserts with various types of extra-articular deformity (12 neutral, 12 varus, and 4 valgus). Surgical resection parameters (resection depths and alignment angles) were compared between postoperative three-dimensional (3D) scan-based measurements and intraoperative CAOS measurements. Using the 3D scan-based measurements as control, the accuracy (mean error) and precision (associated standard deviation) of the CAOS system were assessed. The impact of extra-articular deformity on the CAOS system measurement errors was also investigated. Results The pooled mean unsigned errors generated by the CAOS system were equal or less than 0.61 mm and 0.64° for resection depths and alignment angles, respectively. No clinically meaningful biases were found in the measurements of resection depths (< 0.5 mm) and alignment angles (< 0.5°). Extra-articular deformity did not show significant effect on the measurement errors generated by the CAOS system investigated. Conclusions This study presented a set of methodology and workflow to assess the system-level accuracy and precision of CAOS systems. The data demonstrated that the CAOS system investigated can offer accurate and precise intraoperative measurements of TKA resection parameters, regardless of the presence
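The accuracy/precision decomposition used in studies like this one (mean signed error as bias, standard deviation of the errors as precision) can be sketched in a few lines. This is an illustrative sketch only: the function name and the sample values are invented, not the study's data.

```python
import statistics

def accuracy_precision(reference, measured):
    """Accuracy as mean signed error (bias), precision as the SD of the errors.

    `reference` holds the control measurements (e.g. 3D-scan based),
    `measured` holds the system's readings of the same quantities.
    """
    errors = [m - r for m, r in zip(measured, reference)]
    mean_error = statistics.mean(errors)   # systematic bias (accuracy)
    sd_error = statistics.stdev(errors)    # random spread (precision)
    return mean_error, sd_error

# Hypothetical resection-depth readings in mm; values are illustrative only.
ref = [9.0, 9.5, 10.0, 10.5, 11.0]
cas = [9.2, 9.4, 10.3, 10.4, 11.2]
bias, spread = accuracy_precision(ref, cas)
```

A bias near zero with a small spread corresponds to the "accurate and precise" outcome the abstract reports.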

  20. Maximizing the quantitative accuracy and reproducibility of Förster resonance energy transfer measurement for screening by high throughput widefield microscopy

    PubMed Central

    Schaufele, Fred

    2013-01-01

    Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biologic sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High content screening depends also upon maximizing the numbers of cells imaged, which is best achieved by low magnification high throughput microscopy. But, low magnification introduces flat-field correction issues that degrade the accuracy of background correction to cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a high source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal. PMID:23927839

  1. Evaluation of precision and accuracy assessment of different 3-D surface imaging systems for biomedical purposes.

    PubMed

    Eder, Maximilian; Brockmann, Gernot; Zimmermann, Alexander; Papadopoulos, Moschos A; Schwenzer-Zimmerer, Katja; Zeilhofer, Hans Florian; Sader, Robert; Papadopulos, Nikolaos A; Kovacs, Laszlo

    2013-04-01

    Three-dimensional (3-D) surface imaging has gained clinical acceptance, especially in the field of cranio-maxillo-facial and plastic, reconstructive, and aesthetic surgery. Six scanners based on different scanning principles (Minolta Vivid 910®, Polhemus FastSCAN™, GFM PRIMOS®, GFM TopoCAM®, Steinbichler Comet® Vario Zoom 250, 3dMD DSP 400®) were used to measure five sheep skulls of different sizes. In three areas with varying anatomical complexity (area 1 = high; 2 = moderate; 3 = low), 56 distances between 20 landmarks were defined on each skull. Manual measurement (MM), coordinate machine measurements (CMM), and computer tomography (CT) measurements were used to define a reference method for further precision and accuracy evaluation of the different 3-D scanning systems. MM showed high correlation to CMM and CT measurements (both r = 0.987; p < 0.001) and served as the reference method. TopoCAM®, Comet®, and Vivid 910® showed the highest measurement precision over all areas of complexity; Vivid 910®, the Comet®, and the DSP 400® demonstrated the highest accuracy over all areas, with Vivid 910® being most accurate in areas 1 and 3, and the DSP 400® most accurate in area 2. In accordance with the measured distance length, most 3-D devices present higher measurement precision and accuracy for large distances and lower degrees of precision and accuracy for short distances. In general, higher degrees of complexity are associated with lower 3-D assessment accuracy, suggesting that for optimal results, different types of scanners should be applied to specific clinical applications and medical problems according to their special construction designs and characteristics.

  2. A Comparative Study of Precise Point Positioning (PPP) Accuracy Using Online Services

    NASA Astrophysics Data System (ADS)

    Malinowski, Marcin; Kwiecień, Janusz

    2016-12-01

    Precise Point Positioning (PPP) is a technique used to determine the position of a receiver antenna without communication with a reference station. It can be an alternative to differential measurements, which require maintaining a connection with a single RTK station or a regional network of reference stations (RTN); this situation is especially common in areas with poorly developed ground-station infrastructure. Most research on the PPP technique to date has concerned the processing of entire-day observation sessions. This paper, by contrast, presents a comparative analysis of the accuracy of absolute position determination from observations lasting between 1 and 7 hours, using four permanent services that perform PPP calculations: Automatic Precise Positioning Service (APPS), Canadian Spatial Reference System Precise Point Positioning (CSRS-PPP), GNSS Analysis and Positioning Software (GAPS), and magicPPP - Precise Point Positioning Solution (magicGNSS). The results show that measurements of at least two hours yield an absolute position with an accuracy of 2-4 cm. The effect of simultaneously positioning the three points of a test network on the horizontal distances and relative height differences between the measured triangle vertices was also evaluated, using distances and relative height differences measured with a Leica TDRA6000 laser station as references. The analyses show that measurement sessions of at least two hours can determine horizontal distances or height differences with an accuracy of 1-2 cm. Rapid products employed in PPP calculations achieved coordinate accuracies close to those obtained with Final products.

  3. Precision and Accuracy in the Determination of Sulfur Oxides, Fluoride, and Spherical Aluminosilicate Fly Ash Particles in Project MOHAVE.

    PubMed

    Eatough, Norman L; Eatough, Michele; Joseph, Jyothi M; Caka, Fern M; Lewis, Laura; Eatough, Delbert J

    1997-04-01

    The precision and accuracy of the determination of particulate sulfate and fluoride, and gas phase SO2 and HF are estimated from the results obtained from collocated replicate samples and from collocated comparison samples for high- and low-volume filter pack and annular diffusion denuder samplers. The results of replicate analysis of collocated samples and replicate analyses of a given sample for the determination of spherical aluminosilicate fly ash particles have also been compared. Each of these species is being used in the chemical mass balance source apportionment of sulfur oxides in the Grand Canyon region as part of Project MOHAVE, and the precision and accuracy analyses given in this paper provide input to that analysis. The precision of the various measurements reported here is ±1.8 nmol/m(3) and ±2.5 nmol/m(3) for the determination of SO2 and sulfate, respectively, with an annular denuder. The precision is ±0.5 nmol/m(3) and ±2.0 nmol/m(3) for the determination of the same species with a high-volume or low-volume filter pack. The precision for the determination of the sum of HF(g) and fine particulate fluoride is ±0.3 nmol/m(3). The precision for the determination of aluminosilicate fly ash particles is ±100 particles/m(3). At high concentrations of the various species, reproducibility of the various measurements is ±10% to ±14% of the measured concentration. The concentrations of sulfate determined using filter pack samplers are frequently higher than those determined using diffusion denuder sampling systems. The magnitude of the difference (e.g., 2-10 nmol sulfate/m(3)) is small, but important relative to the precision of the data and the concentrations of particulate sulfate present (typically 5-20 nmol sulfate/m(3)). The concentrations of SO2(g) determined using a high-volume cascade impactor filter pack sampler are correspondingly lower than those obtained with diffusion denuder samplers. The concentrations of SOx (SO2(g) plus particulate
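Precision estimates from collocated duplicate samples, as used above, are commonly computed with the root-mean-square-difference estimator s = sqrt(Σdᵢ²/2n) over n duplicate pairs. A minimal sketch, assuming that standard estimator; the paper's exact formula is not stated, and the sample values below are invented:

```python
import math

def collocated_precision(pairs):
    """One-sigma precision from n collocated duplicate pairs.

    Each pair (a, b) is the same air sampled by two collocated
    samplers; s = sqrt( sum((a-b)^2) / (2*n) ).
    """
    n = len(pairs)
    sum_sq = sum((a - b) ** 2 for a, b in pairs)
    return math.sqrt(sum_sq / (2 * n))

# Hypothetical collocated SO2 readings in nmol/m^3 (illustrative only).
pairs = [(12.1, 11.8), (8.4, 8.9), (15.0, 14.6), (6.2, 6.3)]
s = collocated_precision(pairs)
```

The factor of 2 in the denominator reflects that each difference carries the random error of both samplers.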

  4. A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures

    NASA Technical Reports Server (NTRS)

    Moore, Ashley

    2005-01-01

    The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target using camera images. Videogrammetry is based on the same principle, except a series of timed images are analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using Photomodeler software. The accuracy of the Photomodeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software that simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect the system accuracy to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.

  5. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. 
If adopted, this new
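The core idea of scale-dependent precision can be illustrated with a toy round-trip through IEEE-754 formats (this is an illustration of reduced-precision storage in general, not the authors' Lorenz '96 model; the sample value is arbitrary). Python's `struct` module supports half ('e') and single ('f') precision:

```python
import struct

def to_half(x: float) -> float:
    """Round-trip a float through IEEE-754 half precision (16 bits)."""
    return struct.unpack('e', struct.pack('e', x))[0]

def to_single(x: float) -> float:
    """Round-trip a float through IEEE-754 single precision (32 bits)."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = 0.123456789               # a "small-scale" variable value
err_half = abs(to_half(x) - x)      # ~3 decimal digits retained
err_single = abs(to_single(x) - x)  # ~7 decimal digits retained
# The rounding error grows as bits are removed; the argument in the
# abstract is that small-scale variables are known less precisely
# anyway, so the extra bits of double precision buy little accuracy.
```

Storing only the small-scale tier at 16 bits frees memory and bandwidth that can instead be spent on higher resolution.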

  6. Measuring changes in Plasmodium falciparum transmission: Precision, accuracy and costs of metrics

    PubMed Central

    Tusting, Lucy S.; Bousema, Teun; Smith, David L.; Drakeley, Chris

    2016-01-01

    As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review eleven metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs, and presenting an overall critique. We also review the non-linear scaling relationships between five metrics of malaria transmission; the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our review highlights that while the entomological inoculation rate is widely considered the gold standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy and more standardised methods for its collection are required. In areas of low transmission, parasite rate, sero-conversion rates and molecular metrics including MOI and mFOI may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection. PMID:24480314

  7. Evaluation of precision and accuracy of selenium measurements in biological materials using neutron activation analysis

    SciTech Connect

    Greenberg, R.R.

    1988-01-01

    In recent years, the accurate determination of selenium in biological materials has become increasingly important in view of the essential nature of this element for human nutrition and its possible role as a protective agent against cancer. Unfortunately, the accurate determination of selenium in biological materials is often difficult for most analytical techniques for a variety of reasons, including interferences, complicated selenium chemistry due to the presence of this element in multiple oxidation states and in a variety of different organic species, stability and resistance to destruction of some of these organo-selenium species during acid dissolution, volatility of some selenium compounds, and potential for contamination. Neutron activation analysis (NAA) can be one of the best analytical techniques for selenium determinations in biological materials for a number of reasons. Currently, precision at the 1% level (1s) and overall accuracy at the 1 to 2% level (95% confidence interval) can be attained at the U.S. National Bureau of Standards (NBS) for selenium determinations in biological materials when counting statistics are not limiting (using the (75)Se isotope). An example of this level of precision and accuracy is summarized. Achieving this level of accuracy, however, requires strict attention to all sources of systematic error. Precise and accurate results can also be obtained after radiochemical separations.

  8. Large format focal plane array integration with precision alignment, metrology and accuracy capabilities

    NASA Astrophysics Data System (ADS)

    Neumann, Jay; Parlato, Russell; Tracy, Gregory; Randolph, Max

    2015-09-01

    Focal plane alignment for large format arrays and faster optical systems requires enhanced precision methodology and stability over temperature. The increase in focal plane array size continues to drive alignment capability. Depending on the optical system, focal plane flatness of less than 25μm (.001") is required over transition temperatures from ambient to cooled operating temperatures. The focal plane flatness requirement must also be maintained in airborne or launch vibration environments. This paper addresses the challenge of integrating the detector into the focal plane module and housing assemblies, the methodology to reduce error terms during integration, and the evaluation of thermal effects. The driving factors influencing alignment accuracy include datum transfers, material effects over temperature, alignment stability over test, adjustment precision, and traceability to NIST standards. The FPA module design and alignment methodology reduce error terms by minimizing measurement transfers to the housing. Selecting materials with matched coefficients of thermal expansion minimizes both the physical shift over temperature and the stress induced into the detector. When required, co-registration of focal planes and filters can achieve submicron relative positioning by applying precision equipment, interferometry, and piezoelectric positioning stages. All measurements and characterizations maintain traceability to NIST standards. The metrology characterizes the accuracy, repeatability, and precision of the measurements.

  9. Accuracy, precision, usability, and cost of portable silver test methods for ceramic filter factories.

    PubMed

    Meade, Rhiana D; Murray, Anna L; Mittelman, Anjuliee M; Rayner, Justine; Lantagne, Daniele S

    2017-02-01

    Locally manufactured ceramic water filters are one effective household drinking water treatment technology. During manufacturing, silver nanoparticles or silver nitrate are applied to prevent microbiological growth within the filter and increase bacterial removal efficacy. Currently, there is no recommendation for manufacturers to test silver concentrations of application solutions or filtered water. We identified six commercially available silver test strips, kits, and meters, and evaluated them by: (1) measuring in quintuplicate six samples from 100 to 1,000 mg/L (application range) and six samples from 0.0 to 1.0 mg/L (effluent range) of silver nanoparticles and silver nitrate to determine accuracy and precision; (2) conducting volunteer testing to assess ease-of-use; and (3) comparing costs. We found no method accurately detected silver nanoparticles, and accuracy ranged from 4 to 91% measurement error for silver nitrate samples. Most methods were precise, but only one method could test both application and effluent concentration ranges of silver nitrate. Volunteers considered test strip methods easiest. The cost for 100 tests ranged from 36 to 1,600 USD. We found no currently available method accurately and precisely measured both silver types at reasonable cost and ease-of-use, thus these methods are not recommended to manufacturers. We recommend development of field-appropriate methods that accurately and precisely measure silver nanoparticle and silver nitrate concentrations.

  10. Theoretical study of precision and accuracy of strain analysis by nano-beam electron diffraction.

    PubMed

    Mahr, Christoph; Müller-Caspary, Knut; Grieb, Tim; Schowalter, Marco; Mehrtens, Thorsten; Krause, Florian F; Zillmann, Dennis; Rosenauer, Andreas

    2015-11-01

    Measurement of lattice strain is important to characterize semiconductor nanostructures. As strain has large influence on the electronic band structure, methods for the measurement of strain with high precision, accuracy and spatial resolution in a large field of view are mandatory. In this paper we present a theoretical study of precision and accuracy of measurement of strain by convergent nano-beam electron diffraction. It is found that the accuracy of the evaluation suffers from halos in the diffraction pattern caused by a variation of strain within the area covered by the focussed electron beam. This effect, which is expected to be strong at sharp interfaces between materials with different lattice plane distances, will be discussed for convergent-beam electron diffraction patterns using a conventional probe and for patterns formed by a precessing electron beam. Furthermore, we discuss approaches to optimize the accuracy of strain measured at interfaces. The study is based on the evaluation of diffraction patterns simulated for different realistic structures that have been investigated experimentally in former publications. These simulations account for thermal diffuse scattering using the frozen-lattice approach and the modulation-transfer function of the image-recording system. The influence of Poisson noise is also investigated.

  11. Accelerator mass spectrometry best practices for accuracy and precision in bioanalytical (14)C measurements.

    PubMed

    Vogel, John S; Giacomo, Jason A; Schulze-König, Tim; Keck, Bradly D; Lohstroh, Peter; Dueker, Stephen

    2010-03-01

    Accelerator mass spectrometers have an energy acceleration and charge exchange between mass definition stages to destroy molecular isobars and allow single ion counting of long-lived isotopes such as (14)C (t½=5730 years). 'Low' voltage accelerations to 200 kV allow laboratory-sized accelerator mass spectrometer instruments for bioanalytical quantitation of (14)C to 2-3% precision and accuracy in isolated biochemical fractions. After demonstrating this accuracy and precision for our new accelerator mass spectrometer, we discuss the critical aspects of maintaining quantitative accuracy from the defined biological fraction to the accelerator mass spectrometry quantitation. These aspects include sufficient sample mass for routine rapid sample preparation, isotope dilution to assure this mass, isolation of the carbon from other sample combustion gases and use of high-efficiency biochemical separations. This review seeks to address a bioanalytical audience, who should know that high accuracy data of physiochemical processes within living human subjects are available, as long as a (14)C quantitation can be made indicative of the physiochemistry of interest.

  12. Navigated non-image-based registration of the position of the pelvis during THR. An accuracy and reproducibility study.

    PubMed

    Jenny, Jean-Yves; Boeri, Cyril; Ciobanu, Eugen

    2008-05-01

    The precise recording of the position of the pelvis is a prerequisite for total hip replacement (THR). The anterior pelvic plane is an accepted reference for determining the 3D pelvic orientation. We hypothesized that cutaneous palpation of this plane was accurate and reproducible. Ten consecutive navigated implantations of THR prostheses were studied. In each case, four palpations of both anterior iliac spines and the pubic symphysis were performed with a navigated stylus. The first palpation was made on actual bone contours through a skin puncture and was considered as the reference. The other three palpations were made over the intact skin as a normal intra-operative procedure. There was no significant difference between the pelvic orientations measured by the three cutaneous palpations, or between the orientations measured by transcutaneous palpation and the mean result with cutaneous palpation. Cutaneous palpation can be considered as a reliable technique for the definition of pelvic orientation during THR with the non-image-based system employed.

  13. A Bloch-McConnell simulator with pharmacokinetic modeling to explore accuracy and reproducibility in the measurement of hyperpolarized pyruvate

    NASA Astrophysics Data System (ADS)

    Walker, Christopher M.; Bankson, James A.

    2015-03-01

    Magnetic resonance imaging (MRI) of hyperpolarized (HP) agents has the potential to probe in-vivo metabolism with sensitivity and specificity that was not previously possible. Biological conversion of HP agents specifically for cancer has been shown to correlate to presence of disease, stage and response to therapy. For such metabolic biomarkers derived from MRI of hyperpolarized agents to be clinically impactful, they need to be validated and well characterized. However, imaging of HP substrates is distinct from conventional MRI, due to the non-renewable nature of transient HP magnetization. Moreover, due to current practical limitations in generation and evolution of hyperpolarized agents, it is not feasible to fully experimentally characterize measurement and processing strategies. In this work we use a custom Bloch-McConnell simulator with pharmacokinetic modeling to characterize the performance of specific magnetic resonance spectroscopy sequences over a range of biological conditions. We performed numerical simulations to evaluate the effect of sequence parameters over a range of chemical conversion rates. Each simulation was analyzed repeatedly with the addition of noise in order to determine the accuracy and reproducibility of measurements. Results indicate that under both closed and perfused conditions, acquisition parameters can affect measurements in a tissue dependent manner, suggesting that great care needs to be taken when designing studies involving hyperpolarized agents. More modeling studies will be needed to determine what effect sequence parameters have on more advanced acquisitions and processing methods.
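
    The kind of simulation described can be miniaturized to a two-site exchange model sampled by repeated excitations. The sketch below uses assumed rate, relaxation, and sequence values, not the authors' Bloch-McConnell simulator:

```python
# Toy model of hyperpolarized pyruvate -> lactate: unidirectional conversion at
# rate kpl plus T1 decay, sampled by n excitations of flip angle alpha every TR.
# All parameter values are illustrative assumptions.
import math

def lac_pyr_ratio(kpl=0.05, T1=43.0, flip_deg=20.0, TR=2.0, n_excitations=30):
    """Summed lactate/pyruvate signal ratio over the acquisition."""
    Mp, Ml = 1.0, 0.0                      # longitudinal magnetization
    Sp = Sl = 0.0                          # accumulated signal
    cos_a = math.cos(math.radians(flip_deg))
    sin_a = math.sin(math.radians(flip_deg))
    dt = 0.01
    for _ in range(n_excitations):
        Sp += Mp * sin_a                   # each pulse samples sin(alpha) of Mz
        Sl += Ml * sin_a
        Mp *= cos_a                        # and leaves cos(alpha) of Mz behind
        Ml *= cos_a
        for _ in range(int(round(TR / dt))):   # Euler steps between pulses
            dMp = (-Mp / T1 - kpl * Mp) * dt
            dMl = (-Ml / T1 + kpl * Mp) * dt
            Mp, Ml = Mp + dMp, Ml + dMl
    return Sl / Sp

# The apparent metabolic ratio shifts with flip angle and TR even at fixed kpl,
# which is the acquisition-parameter sensitivity the paper quantifies.
ratio = lac_pyr_ratio()
```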

  14. Enhancement of accuracy and reproducibility of parametric modeling for estimating abnormal intra-QRS potentials in signal-averaged electrocardiograms.

    PubMed

    Lin, Chun-Cheng

    2008-09-01

    This work analyzes and attempts to enhance the accuracy and reproducibility of parametric modeling in the discrete cosine transform (DCT) domain for the estimation of abnormal intra-QRS potentials (AIQP) in signal-averaged electrocardiograms. One hundred sets of white noise with a flat frequency response were introduced to simulate the unpredictable, broadband AIQP when quantitatively analyzing estimation error. Further, a high-frequency AIQP parameter was defined to minimize estimation error caused by the overlap between normal QRS and AIQP in low-frequency DCT coefficients. Seventy-two patients from Taiwan were recruited for the study, comprising 30 patients with ventricular tachycardia (VT) and 42 without VT. Analytical results showed that VT patients had a significant decrease in the estimated AIQP. The global diagnostic performance (area under the receiver operating characteristic curve) of AIQP rose from 73.0% to 84.2% in lead Y, and from 58.3% to 79.1% in lead Z, when the high-frequency range fell from 100% to 80%. The combination of AIQP and ventricular late potentials further enhanced performance to 92.9% (specificity=90.5%, sensitivity=90%). Therefore, the significantly reduced AIQP in VT patients, possibly also including dominant unpredictable potentials within the normal QRS complex, may be new promising evidence of ventricular arrhythmias.

  15. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point counts) is an increasingly popular method of estimating population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than those derived from sample scenarios with many sample units of small area. It is important to consider accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; in practice, however, consideration of accuracy and precision of abundance estimates is often an afterthought that occurs during the data analysis process.
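
    The few-large versus many-small trade-off can be replayed in miniature. The sketch below uses naive density expansion rather than the N-mixture models of the paper, with perfect detection and uniformly placed organisms, so it only illustrates the mechanics of scoring accuracy (bias) and precision (spread):

```python
# Simulate point-count sampling of a 100 x 100 arena and extrapolate abundance
# from plot counts. Plots may overlap and detection is perfect -- deliberate
# simplifications relative to the paper's N-mixture framework.
import random
from statistics import mean, stdev

def estimate_abundance(true_n, n_plots, plot_side, arena=100.0, rng=random):
    pts = [(rng.uniform(0, arena), rng.uniform(0, arena)) for _ in range(true_n)]
    count = 0
    for _ in range(n_plots):
        x0 = rng.uniform(0, arena - plot_side)
        y0 = rng.uniform(0, arena - plot_side)
        count += sum(1 for x, y in pts
                     if x0 <= x < x0 + plot_side and y0 <= y < y0 + plot_side)
    return count * arena ** 2 / (n_plots * plot_side ** 2)   # density expansion

rng = random.Random(42)
many_small = [estimate_abundance(1000, 25, 4.0, rng=rng) for _ in range(200)]
few_large = [estimate_abundance(1000, 4, 10.0, rng=rng) for _ in range(200)]
bias = mean(many_small) - 1000       # accuracy: systematic distance from truth
spread = stdev(many_small)           # precision: replicate-to-replicate scatter
```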

  16. Multi-centre evaluation of accuracy and reproducibility of planar and SPECT image quantification: An IAEA phantom study.

    PubMed

    Zimmerman, Brian E; Grošev, Darko; Buvat, Irène; Coca Pérez, Marco A; Frey, Eric C; Green, Alan; Krisanachinda, Anchali; Lassmann, Michael; Ljungberg, Michael; Pozzo, Lorena; Quadir, Kamila Afroj; Terán Gretter, Mariella A; Van Staden, Johann; Poli, Gian Luca

    2016-04-19

    Accurate quantitation of activity provides the basis for internal dosimetry of targeted radionuclide therapies. This study investigated quantitative imaging capabilities at sites with a variety of experience and equipment and assessed levels of errors in activity quantitation in Single-Photon Emission Computed Tomography (SPECT) and planar imaging. Participants from 9 countries took part in a comparison in which planar, SPECT and SPECT with X-ray computed tomography (SPECT-CT) imaging were used to quantify activities of four epoxy-filled cylinders containing (133)Ba, which was chosen as a surrogate for (131)I. The sources, with nominal volumes of 2, 4, 6 and 23 mL, were calibrated for (133)Ba activity by the National Institute of Standards and Technology, but the activity was initially unknown to the participants. Imaging was performed in a cylindrical phantom filled with water. Two trials were carried out in which the participants first estimated the activities using their local standard protocols, and then repeated the measurements using a standardized acquisition and analysis protocol. Finally, processing of the imaging data from the second trial was repeated by a single centre using a fixed protocol. In the first trial, the activities were underestimated by about 15% with planar imaging. SPECT with Chang's first order attenuation correction (Chang-AC) and SPECT-CT overestimated the activity by about 10%. The second trial showed moderate improvements in accuracy and variability. Planar imaging was subject to methodological errors, e.g., in the use of a transmission scan for attenuation correction. The use of Chang-AC was subject to variability from the definition of phantom contours. The project demonstrated the need for training and standardized protocols to achieve good levels of quantitative accuracy and precision in a multicentre setting. Absolute quantification of simple objects with no background was possible with the strictest protocol to about 6% with

  17. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, emergent behavior manifested by complex, adapting, and nonlinear organizations responsible for monitoring the success of ecorestoration projects tend to unconsciously minimize disorder, QA/QC being an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although the concepts for assessing precision and accuracy of ecorestoration field data are conceptually the same as laboratory data, the manner in which these data quality attributes are assessed is different. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  18. Mapping stream habitats with a global positioning system: Accuracy, precision, and comparison with traditional methods

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.

    2006-01-01

    We tested the precision and accuracy of the Trimble GeoXT™ global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media

  19. Accuracy and Reproducibility in Quantification of Plasma Protein Concentrations by Mass Spectrometry without the Use of Isotopic Standards

    PubMed Central

    Kramer, Gertjan; Woolerton, Yvonne; van Straalen, Jan P.; Vissers, Johannes P. C.; Dekker, Nick; Langridge, James I.; Beynon, Robert J.; Speijer, Dave; Sturk, Auguste; Aerts, Johannes M. F. G.

    2015-01-01

    Background Quantitative proteomic analysis with mass spectrometry holds great promise for simultaneously quantifying proteins in various biosamples, such as human plasma. Thus far, studies addressing the reproducible measurement of endogenous protein concentrations in human plasma have focussed on targeted analyses employing isotopically labelled standards. Non-targeted proteomics, on the other hand, has been less employed to this end, even though it has been instrumental in discovery proteomics, generating large datasets in multiple fields of research. Results Using a non-targeted mass spectrometric assay (LCMSE), we quantified abundant plasma proteins (43 mg/mL to 40 μg/mL range) in human blood plasma specimens from 30 healthy volunteers and one blood serum sample (ProteomeXchange: PXD000347). Quantitative results were obtained by label-free mass spectrometry using a single internal standard to estimate protein concentrations. This approach resulted in quantitative results for 59 proteins (cut off ≥11 samples quantified) of which 41 proteins were quantified in all 31 samples and 23 of these with an inter-assay variability of ≤ 20%. Results for 7 apolipoproteins were compared with those obtained using isotope-labelled standards, while 12 proteins were compared to routine immunoassays. Comparison of quantitative data obtained by LCMSE and immunoassays showed good to excellent correlations in relative protein abundance (r = 0.72–0.96) and comparable median concentrations for 8 out of 12 proteins tested. Plasma concentrations of 56 proteins determined by LCMSE were of similar accuracy as those reported by targeted studies and 7 apolipoproteins quantified by isotope-labelled standards, when compared to reference concentrations from literature. Conclusions This study shows that LCMSE offers good quantification of relative abundance as well as reasonable estimations of concentrations of abundant plasma proteins. PMID:26474480

  20. Radiographic total disc replacement angle measurement accuracy using the Oxford Cobbometer: precision and bias

    PubMed Central

    Stafylas, Kosmas; McManus, John; Schizas, Constantin

    2008-01-01

    Total disc replacement (TDR) clinical success has been reported to be related to the residual motion of the operated level. Thus, accurate measurement of TDR range of motion (ROM) is of utmost importance. One commonly used tool in measuring ROM is the Oxford Cobbometer. Little is known however on its accuracy (precision and bias) in measuring TDR angles. The aim of this study was to assess the ability of the Cobbometer to accurately measure radiographic TDR angles. An anatomically accurate synthetic L4–L5 motion segment was instrumented with a CHARITE artificial disc. The TDR angle and anatomical position between L4 and L5 was fixed to prohibit motion while the motion segment was radiographically imaged in various degrees of rotation and elevation, representing a sample of possible patient placement positions. An experienced observer made ten readings of the TDR angle using the Cobbometer at each different position. The Cobbometer readings were analyzed to determine measurement accuracy at each position. Furthermore, analysis of variance was used to study rotation and elevation of the motion segment as treatment factors. Cobbometer TDR angle measurements were most accurate (highest precision and lowest bias) at the centered position (95.5%), which placed the TDR directly inline with the x-ray beam source without any rotation. In contrast, the lowest accuracy (75.2%) was observed in the most rotated and off-centered view. A difference as high as 4° between readings at any individual position, and as high as 6° between all the positions was observed. Furthermore, the Cobbometer was unable to detect the expected trend in TDR angle projection with changing position. Although the Cobbometer has been reported to be reliable in different clinical applications, it lacks the needed accuracy to measure TDR angles and ROM. More accurate ROM measurement methods need to be developed to help surgeons and researchers assess radiological success of TDRs. PMID:18496719

  1. Integrated multi-ISE arrays with improved sensitivity, accuracy and precision

    PubMed Central

    Wang, Chunling; Yuan, Hongyan; Duan, Zhijuan; Xiao, Dan

    2017-01-01

    Increasing use of ion-selective electrodes (ISEs) in the biological and environmental fields has generated demand for high-sensitivity ISEs. However, improving the sensitivities of ISEs remains a challenge because of the limit of the Nernstian slope (59.2/n mV). Here, we present a universal ion detection method using an electronic integrated multi-electrode system (EIMES) that bypasses the Nernstian slope limit of 59.2/n mV, thereby enabling substantial enhancement of the sensitivity of ISEs. The results reveal that the response slope is greatly increased from 57.2 to 1711.3 mV, 57.3 to 564.7 mV and 57.7 to 576.2 mV by electronic integrated 30 Cl− electrodes, 10 F− electrodes and 10 glass pH electrodes, respectively. Thus, a tiny change in the ion concentration can be monitored, and correspondingly, the accuracy and precision are substantially improved. The EIMES is suited for all types of potentiometric sensors and may pave the way for monitoring of various ions with high accuracy and precision because of its high sensitivity. PMID:28303939
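
    The Nernstian ceiling and the series-summing idea behind the EIMES reduce to simple arithmetic. A sketch of the ideal-electrode case (the paper's measured slopes differ slightly from the ideal 59.2 mV):

```python
# Ideal Nernstian slope (RT ln10 / nF, ~59.2 mV per decade at 25 C) and the
# k-fold slope multiplication obtained by summing k identical ISEs in series.
import math

R, T, F = 8.314, 298.15, 96485.0    # gas constant, 25 C in kelvin, Faraday

def nernst_slope_mV(n):
    """Slope in mV per decade of ion activity for charge number n."""
    return 1000 * R * T * math.log(10) / (n * F)

def summed_slope_mV(n, k):
    """k electrodes whose EMFs are summed respond k times more steeply."""
    return k * nernst_slope_mV(n)

single = nernst_slope_mV(1)         # ~59.2 mV/decade: the classic ceiling
stacked30 = summed_slope_mV(1, 30)  # ~1775 mV/decade for 30 summed electrodes
```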

  2. Accuracy and Precision in Measurements of Biomass Oxidative Ratio and Carbon Oxidation State

    NASA Astrophysics Data System (ADS)

    Gallagher, M. E.; Masiello, C. A.; Randerson, J. T.; Chadwick, O. A.; Robertson, G. P.

    2007-12-01

    Ecosystem oxidative ratio (OR) is a critical parameter in the apportionment of anthropogenic CO2 between the terrestrial biosphere and ocean carbon reservoirs. OR is the ratio of O2 to CO2 in gas exchange fluxes between the terrestrial biosphere and atmosphere. Ecosystem OR is linearly related to biomass carbon oxidation state (Cox), a fundamental property of the earth system describing the bonding environment of carbon in molecules. Cox can range from -4 to +4 (CH4 to CO2). Variations in both Cox and OR are driven by photosynthesis, respiration, and decomposition. We are developing several techniques to accurately measure variations in ecosystem Cox and OR; these include elemental analysis, bomb calorimetry, and 13C nuclear magnetic resonance spectroscopy. A previous study, comparing the accuracy and precision of elemental analysis versus bomb calorimetry for pure chemicals, showed that elemental analysis-based measurements are more accurate, while calorimetry-based measurements yield more precise data. However, the limited biochemical range of natural samples makes it possible that calorimetry may ultimately prove most accurate, as well as most cost-effective. Here we examine more closely the accuracy of Cox and OR values generated by calorimetry on a large set of natural biomass samples collected from the Kellogg Biological Station-Long Term Ecological Research (KBS-LTER) site in Michigan.
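
    Both quantities in the abstract have simple elemental-analysis formulas. The sketch below uses the standard molar definition of Cox and, for the OR link, the nitrogen-free linear approximation OR = 1 - Cox/4 (an assumed simplification; the full relation also handles nitrogen):

```python
# Carbon oxidation state (Cox) from molar C, H, O, N composition, and the
# linear Cox -> OR relation for nitrogen-free organic matter (assumed here).
def cox(C, H, O, N=0.0):
    """Cox = (2O - H + 3N) / C, moles of each element per mole of compound."""
    return (2 * O - H + 3 * N) / C

def oxidative_ratio(cox_value):
    """OR: mol O2 consumed per mol CO2 released on full oxidation (N-free)."""
    return 1 - cox_value / 4

glucose_cox = cox(6, 12, 6)                 # 0.0: carbohydrate carbon
methane_cox = cox(1, 4, 0)                  # -4.0: fully reduced carbon
glucose_or = oxidative_ratio(glucose_cox)   # 1.0: one O2 per CO2
methane_or = oxidative_ratio(methane_cox)   # 2.0: CH4 + 2 O2 -> CO2 + 2 H2O
```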

  3. Integrated multi-ISE arrays with improved sensitivity, accuracy and precision

    NASA Astrophysics Data System (ADS)

    Wang, Chunling; Yuan, Hongyan; Duan, Zhijuan; Xiao, Dan

    2017-03-01

    Increasing use of ion-selective electrodes (ISEs) in the biological and environmental fields has generated demand for high-sensitivity ISEs. However, improving the sensitivities of ISEs remains a challenge because of the limit of the Nernstian slope (59.2/n mV). Here, we present a universal ion detection method using an electronic integrated multi-electrode system (EIMES) that bypasses the Nernstian slope limit of 59.2/n mV, thereby enabling substantial enhancement of the sensitivity of ISEs. The results reveal that the response slope is greatly increased from 57.2 to 1711.3 mV, 57.3 to 564.7 mV and 57.7 to 576.2 mV by electronic integrated 30 Cl‑ electrodes, 10 F‑ electrodes and 10 glass pH electrodes, respectively. Thus, a tiny change in the ion concentration can be monitored, and correspondingly, the accuracy and precision are substantially improved. The EIMES is suited for all types of potentiometric sensors and may pave the way for monitoring of various ions with high accuracy and precision because of its high sensitivity.

  4. Integrated multi-ISE arrays with improved sensitivity, accuracy and precision.

    PubMed

    Wang, Chunling; Yuan, Hongyan; Duan, Zhijuan; Xiao, Dan

    2017-03-17

    Increasing use of ion-selective electrodes (ISEs) in the biological and environmental fields has generated demand for high-sensitivity ISEs. However, improving the sensitivities of ISEs remains a challenge because of the limit of the Nernstian slope (59.2/n mV). Here, we present a universal ion detection method using an electronic integrated multi-electrode system (EIMES) that bypasses the Nernstian slope limit of 59.2/n mV, thereby enabling substantial enhancement of the sensitivity of ISEs. The results reveal that the response slope is greatly increased from 57.2 to 1711.3 mV, 57.3 to 564.7 mV and 57.7 to 576.2 mV by electronic integrated 30 Cl(-) electrodes, 10 F(-) electrodes and 10 glass pH electrodes, respectively. Thus, a tiny change in the ion concentration can be monitored, and correspondingly, the accuracy and precision are substantially improved. The EIMES is suited for all types of potentiometric sensors and may pave the way for monitoring of various ions with high accuracy and precision because of its high sensitivity.

  5. To address accuracy and precision using methods from analytical chemistry and computational physics.

    PubMed

    Kozmutza, Cornelia; Picó, Yolanda

    2009-04-01

    In this work, pesticides were determined by liquid chromatography-mass spectrometry (LC-MS). In the present study, the occurrence of imidacloprid in 343 samples of oranges, tangerines, date plum, and watermelons from the Valencian Community (Spain) has been investigated. Nine additional pesticides were chosen as they have been recommended for orchard treatment together with imidacloprid. The Mulliken population analysis has been applied to present the charge distribution in imidacloprid. Partitioned energy terms and the virial ratios have been calculated for certain molecules entering into interaction. A new technique based on the comparison of the decomposed total energy terms at various configurations is demonstrated in this work. The interaction ability could be established correctly in the studied case. An attempt is also made in this work to address accuracy and precision. These quantities are well known in experimental measurements. If a precise theoretical description is achieved for the contributing monomers and also for the interacting complex structure, some properties of the latter system can be predicted to quite good accuracy. Based on simple hypothetical considerations, we estimate the impact of applying computations on reducing the amount of analytical work.

  6. Automated tracking of colloidal clusters with sub-pixel accuracy and precision

    NASA Astrophysics Data System (ADS)

    van der Wel, Casper; Kraft, Daniela J.

    2017-02-01

    Quantitative tracking of features from video images is a basic technique employed in many areas of science. Here, we present a method for the tracking of features that partially overlap, in order to be able to track so-called colloidal molecules. Our approach implements two improvements into existing particle tracking algorithms. Firstly, we use the history of previously identified feature locations to successfully find their positions in consecutive frames. Secondly, we present a framework for non-linear least-squares fitting to summed radial model functions and analyze the accuracy (bias) and precision (random error) of the method on artificial data. We find that our tracking algorithm correctly identifies overlapping features with an accuracy below 0.2% of the feature radius and a precision of 0.1 to 0.01 pixels for a typical image of a colloidal cluster. Finally, we use our method to extract the three-dimensional diffusion tensor from the Brownian motion of colloidal dimers.
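
    The bias/precision analysis on artificial data can be reproduced in miniature for two overlapping features in one dimension; the grid-search fit below stands in for the paper's non-linear least-squares machinery:

```python
# Locate two overlapping 1-D Gaussian features by least-squares fitting of a
# summed radial model, then score accuracy (bias) and precision (scatter) over
# noisy realizations. Grid search replaces the paper's least-squares solver.
import math
import random

SIGMA = 2.0

def model(x, c1, c2):
    return (math.exp(-(x - c1) ** 2 / (2 * SIGMA ** 2)) +
            math.exp(-(x - c2) ** 2 / (2 * SIGMA ** 2)))

def sse(xs, ys, c1, c2):
    return sum((y - model(x, c1, c2)) ** 2 for x, y in zip(xs, ys))

def fit(xs, ys):
    """Coarse joint grid over both centres, then a 10x finer local refinement."""
    best = min(((sse(xs, ys, a / 5.0, b / 5.0), a / 5.0, b / 5.0)
                for a in range(50, 71) for b in range(65, 86)),
               key=lambda t: t[0])
    _, c1, c2 = best
    fine = min(((sse(xs, ys, c1 + i / 50.0, c2 + j / 50.0),
                 c1 + i / 50.0, c2 + j / 50.0)
                for i in range(-10, 11) for j in range(-10, 11)),
               key=lambda t: t[0])
    return fine[1], fine[2]

rng = random.Random(1)
xs = [i * 0.5 for i in range(60)]
true_c1, true_c2 = 12.0, 15.0        # 1.5 sigma apart: strongly overlapping
fits = [fit(xs, [model(x, true_c1, true_c2) + rng.gauss(0, 0.01) for x in xs])
        for _ in range(5)]
bias_c1 = sum(c1 for c1, _ in fits) / len(fits) - true_c1      # accuracy
spread_c1 = max(c1 for c1, _ in fits) - min(c1 for c1, _ in fits)  # precision
```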

  7. Estimates of laboratory accuracy and precision on Hanford waste tank samples

    SciTech Connect

    Dodd, D.A.

    1995-02-02

    A review was performed on three sets of analyses generated by Battelle, Pacific Northwest Laboratories and three sets generated by the Westinghouse Hanford Company 222-S Analytical Laboratory. Laboratory accuracy and precision were estimated by analyte and are reported in tables. The source set used to generate this estimate is of limited size but does include the physical forms, liquid and solid, which are representative of samples from tanks to be characterized. This estimate was published as an aid to programs developing data quality objectives in which specified limits are established. Data resulting from routine analyses of waste matrices can be expected to be bounded by the precision and accuracy estimates of the tables. These tables do not preclude or discourage direct negotiations between program and laboratory personnel while establishing bounding conditions. Programmatic requirements different from those listed may be reliably met on specific measurements and matrices. It should be recognized, however, that these estimates are specific to waste tank matrices and may not be indicative of performance on samples from other sources.

  8. Freehand liver volumetry by using an electromagnetic pen tablet: accuracy, precision, and rapidity.

    PubMed

    Perandini, Simone; Faccioli, Niccolò; Inama, Marco; Pozzi Mucelli, Roberto

    2011-04-01

    The purpose of this study is to assess the accuracy, precision, and rapidity of liver volumes calculated by using a freehand electromagnetic pen tablet contour-tracing method as compared with the volumes calculated by using the standard optical mouse contour-tracing method. The imaging data used as input for accuracy and precision testing were computed by software developed in our institution. This computer software can generate models of solid organs and allows both standard mouse-based and electromagnetic pen-driven segmentation (number of data sets, n = 70). The images used as input for rapidity testing were partly computed by modeling software (n = 70) and partly selected from contrast-enhanced computed tomography (CT) examinations (n = 12). Mean volumes and time required to perform the segmentation, along with standard deviation and range values with both techniques, were calculated. Student's t test was used to assess significance regarding mean volumes and time calculated by using both segmentation techniques on phantom and CT data sets. The P value was also calculated. The mean volume difference was significantly lower with the use of the freehand electromagnetic pen as compared with the optical mouse (0.2% vs. 1.8%; P < .001). The mean segmentation time per patient was significantly shorter with the use of the freehand electromagnetic pen contour-tracing method (354.5 vs. 499.1 s on phantoms; 457.4 vs. 610.0 s on CT images; P < .001). The freehand electromagnetic pen-based volumetric technique represents a technologic advancement over manual mouse-based contour-tracing because of its superior statistical accuracy and the considerably shorter time required. Further studies focused on intra- and interobserver variability of the technique need to be performed before its introduction in clinical application.
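
    The traced volumes themselves reduce to a shoelace-area sum per slice; a sketch on a synthetic cylinder rather than the study's phantoms:

```python
# Slice-based volumetry: area of each traced contour (shoelace formula) times
# slice thickness, summed over slices. The circular contour is synthetic.
import math

def polygon_area(points):
    """Shoelace formula for a closed contour given as (x, y) vertices."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1] -
            points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0

def volume_ml(contours_mm, slice_thickness_mm):
    """Contour areas in mm^2 times thickness in mm, converted to mL."""
    return sum(polygon_area(c) for c in contours_mm) * slice_thickness_mm / 1000.0

# A 40 mm-radius circle traced with 360 vertices, stacked into 20 slices of 5 mm:
circle = [(40 * math.cos(math.radians(t)), 40 * math.sin(math.radians(t)))
          for t in range(360)]
volume = volume_ml([circle] * 20, slice_thickness_mm=5.0)  # close to the ideal
                                                           # cylinder, ~502.7 mL
```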

  9. Keystroke dynamics and timing: accuracy, precision and difference between hands in pianist's performance.

    PubMed

    Minetti, Alberto E; Ardigò, Luca P; McKee, Tom

    2007-01-01

A commercially available acoustic grand piano, originally equipped with keystroke speed sensors, is proposed as a standard instrument to quantitatively assess the technical side of a pianist's performance, after the mechanical characteristics of the keyboard have been measured. We found a positional dependence of the relationship between the applied force and the resulting downstroke speed (i.e. treble keys descend fastest) due to the different hammer and hammer-shaft mass to be accelerated. When this effect was removed by custom software, the ability of 14 pianists was analysed in terms of variability in stroke intervals and keystroke speeds. C-major scales played by separate hands at different imposed tempos and at 5 subjectively chosen graded force levels were analysed to gain insight into the achieved neuromuscular control. Accuracy and precision of time intervals and descent velocity of keystrokes were obtained by processing the generated MIDI files. The results quantitatively show: the difference between hands, the trade-off between force range and tempo and between time interval precision and tempo, the lower precision of descent speed associated with 'soft' playing, etc. These results reflect well-established physiological and motor control characteristics of our movement system. Apart from revealing fundamental aspects of pianism, the proposed method could be used as a standard tool for ergonomic (e.g. the mechanical work and power of playing), didactic and rehabilitation monitoring of pianists.

  10. Improvement in precision, accuracy, and efficiency in standardizing the characterization of granular materials

    SciTech Connect

    Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.

    2013-01-01

Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented for the purpose of providing a revised methodology having the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. Better, faster, cheaper characterization of multiphase systems results. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart Classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. Accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed higher than present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future design work.

  11. A Preanalytic Validation Study of Automated Bone Scan Index: Effect on Accuracy and Reproducibility Due to the Procedural Variabilities in Bone Scan Image Acquisition.

    PubMed

    Anand, Aseem; Morris, Michael J; Kaboteh, Reza; Reza, Mariana; Trägårdh, Elin; Matsunaga, Naofumi; Edenbrandt, Lars; Bjartell, Anders; Larson, Steven M; Minarik, David

    2016-12-01

The effect of procedural variability in image acquisition on the quantitative assessment of bone scans is unknown. Here, we have developed and performed preanalytic studies to assess the impact of variability in scanning speed and in vendor-specific γ-cameras on the reproducibility and accuracy of the automated bone scan index (BSI).

  12. Slight pressure imbalances can affect accuracy and precision of dual inlet-based clumped isotope analysis.

    PubMed

    Fiebig, Jens; Hofmann, Sven; Löffler, Niklas; Lüdecke, Tina; Methner, Katharina; Wacker, Ulrike

    2016-01-01

It is well known that a subtle nonlinearity can occur during clumped isotope analysis of CO2 that, if left unaddressed, limits accuracy. The nonlinearity is induced by a negative background on the m/z 47 Faraday cup, whose magnitude is correlated with the intensity of the m/z 44 ion beam. The origin of the negative background remains unclear, but it is possibly due to secondary electrons. Usually, CO2 gases of distinct bulk isotopic compositions are equilibrated at 1000 °C and measured along with the samples in order to correct for this effect. Alternatively, measured m/z 47 beam intensities can be corrected for the contribution of secondary electrons after monitoring how the negative background on m/z 47 evolves with the intensity of the m/z 44 ion beam. The latter correction procedure seems to work well if the m/z 44 cup exhibits a wider slit width than the m/z 47 cup. Here we show that the negative m/z 47 background affects the precision of dual inlet-based clumped isotope measurements of CO2 unless raw m/z 47 intensities are directly corrected for the contribution of secondary electrons. Moreover, inaccurate results can be obtained even if the heated gas approach is used to correct for the observed nonlinearity. The impact of the negative background on accuracy and precision arises from small imbalances in m/z 44 ion beam intensities between reference and sample CO2 measurements. It becomes more significant as the relative contribution of secondary electrons to the m/z 47 signal grows and as the flux rate of CO2 into the ion source is raised. These problems can be overcome by correcting the measured m/z 47 ion beam intensities of sample and reference gas for the contributions deriving from secondary electrons, after scaling these contributions to the intensities of the corresponding m/z 49 ion beams. Accuracy and precision of this correction are demonstrated by clumped isotope analysis of three internal carbonate standards.

  13. Precision and accuracy testing of FMCW ladar-based length metrology.

    PubMed

    Mateo, Ana Baselga; Barber, Zeb W

    2015-07-01

The calibration and traceability of high-resolution frequency modulated continuous wave (FMCW) ladar sources is a requirement for their use in length and volume metrology. We report the calibration of FMCW ladar length measurement systems by use of spectroscopy of the molecular frequency references HCN (C-band) and CO (L-band) to calibrate the chirp rate of the FMCW sources. Propagating the stated uncertainties from the molecular calibrations provided by NIST, together with the measurement errors, gives an estimated uncertainty of a few ppm for the FMCW system. As a test of this calibration, a displacement measurement interferometer with a laser wavelength close to that of our FMCW system was built to compare the relative precision and accuracy. The comparisons performed show <10 ppm agreement, which is within the combined estimated uncertainties of the FMCW system and interferometer.
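The "combined estimated uncertainties" used to judge agreement between two instruments are conventionally formed in quadrature. A minimal sketch, with purely illustrative ppm values rather than the paper's actual uncertainty budget:

```python
import math

# Hypothetical ppm-level standard uncertainties; illustrative values only.
u_fmcw = 4.0  # ppm: FMCW ladar system (chirp-rate calibration + measurement)
u_ifo = 3.0   # ppm: displacement measurement interferometer

# Combined standard uncertainty in quadrature (uncorrelated error sources).
u_comb = math.hypot(u_fmcw, u_ifo)

observed_diff = 8.0  # ppm: hypothetical disagreement between the two systems
agrees = observed_diff <= 2.0 * u_comb  # within expanded (k = 2) uncertainty
print(f"combined u = {u_comb:.1f} ppm; systems agree: {agrees}")
```

Quadrature addition is appropriate only when the two error sources are independent; correlated contributions would need the full covariance treatment.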

  14. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    SciTech Connect

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In addition, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.

  15. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

The characterization of ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24-h data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the receiver hardware delay bias, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are larger, while the STDs (standard deviations) are better than 0.11 m. When satellite differencing is used, the hardware delay bias can be canceled, and the interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the cm level.
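The RMS/STD contrast described in this record (RMS inflated by a constant hardware-delay bias that between-satellite differencing removes) can be illustrated on synthetic data; all numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical ionosphere-delay errors (metres): white noise plus a constant
# receiver hardware-delay bias.
noise = rng.normal(0.0, 0.05, size=2000)
bias = 0.20
errors = noise + bias

rms = np.sqrt(np.mean(errors ** 2))  # sensitive to the systematic bias
std = errors.std()                   # scatter about the mean only

# Differencing cancels a bias common to both signals, pulling RMS down to STD.
differenced = errors - errors.mean()
rms_diff = np.sqrt(np.mean(differenced ** 2))
print(f"RMS = {rms:.3f} m, STD = {std:.3f} m, after differencing = {rms_diff:.3f} m")
```

This is why a small STD can coexist with a large RMS: the two statistics answer different questions (precision versus total error including bias).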

  16. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    DOE PAGES

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; ...

    2015-09-01

Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In addition, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.

  17. Accuracy and precision of estimating age of gray wolves by tooth wear

    USGS Publications Warehouse

    Gipson, P.S.; Ballard, W.B.; Nowak, R.M.; Mech, L.D.

    2000-01-01

We evaluated the accuracy and precision of tooth wear for aging gray wolves (Canis lupus) from Alaska, Minnesota, and Ontario based on 47 known-age or known-minimum-age skulls. Estimates of age using tooth wear and a commercial cementum annuli-aging service were useful for wolves up to 14 years old. The precision of estimates from cementum annuli was greater than that of estimates from tooth wear, but tooth wear estimates are more applicable in the field. We tended to overestimate age by 1-2 years and occasionally by 3 or 4 years. The commercial service aged young wolves with cementum annuli to within ±1 year of actual age, but underestimated ages of wolves ≥9 years old by 1-3 years. No differences were detected in tooth wear patterns for wild wolves from Alaska, Minnesota, and Ontario, nor between captive and wild wolves. Tooth wear was not appropriate for aging wolves with an underbite that prevented normal wear or with severely broken and missing teeth.

  18. A benchmark test of accuracy and precision in estimating dynamical systems characteristics from a time series.

    PubMed

    Rispens, S M; Pijnappels, M; van Dieën, J H; van Schooten, K S; Beek, P J; Daffertshofer, A

    2014-01-22

Characteristics of dynamical systems are often estimated to describe physiological processes. For instance, Lyapunov exponents have been determined to assess the stability of the cardiovascular system, respiration, and, more recently, human gait and posture. However, the systematic evaluation of the accuracy and precision of these estimates is problematic because the proper values of the characteristics are typically unknown. We fill this void with a set of standardized time series with well-defined dynamical characteristics that serve as a benchmark. Estimates ought to match these characteristics, at least to good approximation. We outline a procedure to employ this generic benchmark test and illustrate its capacity by examining methods for estimating the maximum Lyapunov exponent. In particular, we discuss algorithms by Wolf and co-workers and by Rosenstein and co-workers and evaluate their performance as a function of signal length and signal-to-noise ratio. In all scenarios, the precision of Rosenstein's algorithm was found to be equal to or greater than that of Wolf's algorithm. The latter, however, appeared more accurate if reasonably large signal lengths are available and noise levels are sufficiently low. Due to its modularity, the presented benchmark test can be used to evaluate and tune any estimation method to perform optimally for arbitrary experimental data.
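The benchmarking idea in this record, checking an estimator against a series whose true characteristic is known, can be sketched with a minimal Rosenstein-style estimator and the logistic map at r = 4, whose largest Lyapunov exponent is analytically ln 2 per iteration. All parameter choices below (embedding, temporal separation, fit range) are illustrative, not those of the paper:

```python
import numpy as np

def rosenstein_lyap(x, m=1, tau=1, min_sep=10, k_max=6):
    """Largest Lyapunov exponent (per sample), Rosenstein-style: track the
    average log-divergence of initially close trajectory pairs."""
    n = len(x) - (m - 1) * tau
    # Delay embedding of the scalar series.
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])
    usable = n - k_max
    # Nearest neighbour of each point, excluding temporally close points.
    d2 = np.sum((emb[:usable, None, :] - emb[None, :usable, :]) ** 2, axis=2)
    idx = np.arange(usable)
    d2[np.abs(idx[:, None] - idx[None, :]) < min_sep] = np.inf
    nn = np.argmin(d2, axis=1)
    # Mean log divergence as a function of time offset k.
    mean_log_d = []
    for k in range(k_max + 1):
        d = np.linalg.norm(emb[idx + k] - emb[nn + k], axis=1)
        d = d[d > 0]
        mean_log_d.append(np.mean(np.log(d)))
    # Slope of the divergence curve estimates the largest Lyapunov exponent.
    slope, _ = np.polyfit(np.arange(k_max + 1), mean_log_d, 1)
    return slope

# Benchmark series with a known exponent: logistic map, lambda = ln 2 ≈ 0.693.
x = np.empty(2000)
x[0] = 0.2
for i in range(1, len(x)):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

print(f"estimated lambda = {rosenstein_lyap(x):.3f} (true ln 2 = {np.log(2):.3f})")
```

The estimate will not match ln 2 exactly; how close it lands, as a function of series length and added noise, is precisely the kind of accuracy/precision question the benchmark is designed to answer.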

  19. Increasing accuracy and precision of digital image correlation through pattern optimization

    NASA Astrophysics Data System (ADS)

    Bomarito, G. F.; Hochhalter, J. D.; Ruggles, T. J.; Cannon, A. H.

    2017-04-01

    The accuracy and precision of digital image correlation (DIC) is based on three primary components: image acquisition, image analysis, and the subject of the image. Focus on the third component, the image subject, has been relatively limited and primarily concerned with comparing pseudo-random surface patterns. In the current work, a strategy is proposed for the creation of optimal DIC patterns. In this strategy, a pattern quality metric is developed as a combination of quality metrics from the literature rather than optimization based on any single one of them. In this way, optimization produces a pattern which balances the benefits of multiple quality metrics. Specifically, sum of square of subset intensity gradients (SSSIG) was found to be the metric most strongly correlated to DIC accuracy and thus is the main component of the newly proposed pattern quality metric. A term related to the secondary auto-correlation peak height is also part of the proposed quality metric which effectively acts as a constraint upon SSSIG ensuring that a regular (e.g., checkerboard-type) pattern is not achieved. The combined pattern quality metric is used to generate a pattern that was on average 11.6% more accurate than a randomly generated pattern in a suite of numerical experiments. Furthermore, physical experiments were performed which confirm that there is indeed improvement of a similar magnitude in DIC measurements for the optimized pattern compared to a random pattern.
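The SSSIG metric named in this record is simply the sum of squared subset intensity gradients. A minimal sketch with hypothetical 21 x 21 subsets:

```python
import numpy as np

def sssig(subset):
    """Sum of squares of subset intensity gradients (SSSIG), a common
    DIC pattern-quality metric: higher values predict lower matching error."""
    gy, gx = np.gradient(subset.astype(float))
    return float(np.sum(gx ** 2 + gy ** 2))

rng = np.random.default_rng(2)
# Illustrative subsets: a random speckle pattern vs. a nearly uniform patch.
speckle = rng.integers(0, 256, size=(21, 21))
flat = np.full((21, 21), 128) + rng.integers(0, 4, size=(21, 21))

print(sssig(speckle) > sssig(flat))  # speckle carries far more gradient energy
```

The record's point is that SSSIG alone is not enough: a perfectly regular high-gradient pattern (e.g. a checkerboard) maximizes SSSIG but produces ambiguous correlation peaks, hence the added auto-correlation term in the combined quality metric.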

  20. Gaining Precision and Accuracy on Microprobe Trace Element Analysis with the Multipoint Background Method

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.; Williams, M. L.; Jercinovic, M. J.; Donovan, J. J.

    2014-12-01

Electron microprobe trace element analysis is a significant challenge, but can provide critical data when high spatial resolution is required. Due to the low peak intensity, the accuracy and precision of such analyses rely critically on background measurements, and on the accuracy of any pertinent peak interference corrections. A linear regression between two points selected at appropriate off-peak positions is the classical approach for background characterization in microprobe analysis. However, this approach disallows an accurate assessment of background curvature (usually exponential). Moreover, if present, background interferences can dramatically affect the results if underestimated or ignored. The acquisition of a quantitative WDS scan over the spectral region of interest is still a valuable option for determining the background intensity and curvature from a fitted regression of background portions of the scan, but this technique retains an element of subjectivity because the analyst has to select areas of the scan that appear to represent background. We present here a new method, "Multi-Point Background" (MPB), that allows acquiring up to 24 off-peak background measurements from wavelength positions around the peaks. This method aims to improve the accuracy, precision, and objectivity of trace element analysis. The overall efficiency is improved because no systematic WDS scan needs to be acquired in order to check for the presence of possible background interferences. Moreover, the method is less subjective because "true" backgrounds are selected by the statistical exclusion of erroneous background measurements, reducing the need for analyst intervention. This idea originated from efforts to refine EPMA monazite U-Th-Pb dating, where it was recognised that background errors (peak interference or background curvature) could result in errors of several tens of millions of years in the calculated age. Results obtained on a CAMECA SX-100 "UltraChron" using monazite
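The core MPB idea, fitting a curved background to many off-peak points while statistically excluding erroneous ones, can be sketched as an iteratively trimmed exponential fit. Everything below (model form, 2-sigma threshold, synthetic counts) is an illustrative assumption, not the authors' actual algorithm:

```python
import numpy as np

def fit_background(pos, counts, n_sigma=2.0, max_iter=5):
    """Fit an exponential continuum B(x) = a * exp(b * x) to multipoint
    off-peak measurements, excluding outliers (e.g. points sitting on an
    unrecognized interference) by iterative residual trimming."""
    keep = np.ones(len(pos), dtype=bool)
    for _ in range(max_iter):
        # Log-linear least squares on the currently kept points.
        b, log_a = np.polyfit(pos[keep], np.log(counts[keep]), 1)
        resid = np.log(counts) - (log_a + b * pos)
        new_keep = np.abs(resid - resid[keep].mean()) < n_sigma * resid[keep].std()
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return np.exp(log_a), b, keep

rng = np.random.default_rng(3)
pos = np.linspace(-0.05, 0.05, 12)          # hypothetical spectrometer offsets
true = 100.0 * np.exp(-8.0 * pos)           # curved (exponential) background
counts = true * rng.normal(1.0, 0.01, pos.size)
counts[3] *= 1.5                            # one point on an interference
a, b, keep = fit_background(pos, counts)
print(f"a = {a:.1f}, b = {b:.2f}, excluded points: {np.flatnonzero(~keep)}")
```

A two-point linear background would be silently biased by both the curvature and the contaminated point; with many points, the bad measurement is identifiable from its residual alone.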

  1. Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets

    NASA Astrophysics Data System (ADS)

    Gold, P. O.; Cowgill, E.; Kreylos, O.

    2009-12-01

Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaicked LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is critical for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments.
To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point

  2. Cumulative incidence of childhood autism: a total population study of better accuracy and precision.

    PubMed

    Honda, Hideo; Shimizu, Yasuo; Imai, Miho; Nitto, Yukari

    2005-01-01

    Most studies on the frequency of autism have had methodological problems. Most notable of these have been differences in diagnostic criteria between studies, degree of cases overlooked by the initial screening, and type of measurement. This study aimed to replicate the first report on childhood autism to address cumulative incidence as well as prevalence, as defined in the International Statistical Classification of Diseases and Related Health Problems, 10th revision (ICD-10) Diagnostic Criteria for Research. Here, the same methodological accuracy (exactness of a measurement to the true value) as the first study was used, but population size was four times larger to achieve greater precision (reduction of random error). A community-oriented system of early detection and early intervention for developmental disorders was established in the northern part of Yokohama, Japan. The city's routine health checkup for 18-month-old children served as the initial mass screening, and all facilities that provided child care services aimed to detect all cases of childhood autism and refer them to the Yokohama Rehabilitation Center. Cumulative incidence up to age 5 years was calculated for childhood autism among a birth cohort from four successive years (1988 to 1991). Cumulative incidence of childhood autism was 27.2 per 10000. Cumulative incidences by sex were 38.4 per 10000 in males, and 15.5 per 10000 in females. The male:female ratio was 2.5:1. The proportions of children with high-functioning autism who had Binet IQs of 70 and over and those with Binet IQs of 85 and over were 25.3% and 13.7% respectively. Data on cumulative incidence of childhood autism derived from this study are the first to be drawn from an accurate, as well as precise, screening methodology.
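Cumulative incidence as used in this record is a simple rate per 10,000 of the birth cohort. A worked sketch, in which the cohort counts are hypothetical and merely chosen to reproduce the reported 27.2 per 10,000:

```python
# Cumulative incidence per 10,000 = cases / cohort size * 10,000.
# The counts below are hypothetical, not the study's actual cohort.
births = 50_000                      # children in the four-year birth cohort
cases = 136                          # childhood autism diagnosed by age 5
incidence = cases / births * 10_000
male_female_ratio = 38.4 / 15.5      # from the reported sex-specific rates
print(f"{incidence:.1f} per 10,000; male:female = {male_female_ratio:.1f}:1")
```

Note that a cumulative incidence (new cases up to age 5 in a fixed birth cohort) is not the same quantity as a point prevalence, which is part of why the study distinguishes the two.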

  3. Intra- and inter-laboratory reproducibility and accuracy of the LuSens assay: A reporter gene-cell line to detect keratinocyte activation by skin sensitizers.

    PubMed

    Ramirez, Tzutzuy; Stein, Nadine; Aumann, Alexandra; Remus, Tina; Edwards, Amber; Norman, Kimberly G; Ryan, Cindy; Bader, Jackie E; Fehr, Markus; Burleson, Florence; Foertsch, Leslie; Wang, Xiaohong; Gerberick, Frank; Beilstein, Paul; Hoffmann, Sebastian; Mehling, Annette; van Ravenzwaay, Bennard; Landsiedel, Robert

    2016-04-01

    Several non-animal methods are now available to address the key events leading to skin sensitization as defined by the adverse outcome pathway. The KeratinoSens assay addresses the cellular event of keratinocyte activation and is a method accepted under OECD TG 442D. In this study, the results of an inter-laboratory evaluation of the "me-too" LuSens assay, a bioassay that uses a human keratinocyte cell line harboring a reporter gene construct composed of the rat antioxidant response element (ARE) of the NADPH:quinone oxidoreductase 1 gene and the luciferase gene, are described. Earlier in-house validation with 74 substances showed an accuracy of 82% in comparison to human data. When used in a battery of non-animal methods, even higher predictivity is achieved. To meet European validation criteria, a multicenter study was conducted in 5 laboratories. The study was divided into two phases, to assess 1) transferability of the method, and 2) reproducibility and accuracy. Phase I was performed by testing 8 non-coded test substances; the results showed a good transferability to naïve laboratories even without on-site training. Phase II was performed with 20 coded test substances (performance standards recommended by OECD, 2015). In this phase, the intra- and inter-laboratory reproducibility as well as accuracy of the method was evaluated. The data demonstrate a remarkable reproducibility of 100% and an accuracy of over 80% in identifying skin sensitizers, indicating a good concordance with in vivo data. These results demonstrate good transferability, reliability and accuracy of the method thereby achieving the standards necessary for use in a regulatory setting to detect skin sensitizers.

  4. The Accuracy and Precision of Flow Measurements Using Phase Contrast Techniques

    NASA Astrophysics Data System (ADS)

    Tang, Chao

Quantitative volume flow rate measurements using the magnetic resonance imaging technique are studied in this dissertation because volume flow rates are of special interest for the blood supply of the human body. The method of quantitative volume flow rate measurement is based on the phase contrast technique, which assumes a linear relationship between the phase and flow velocity of spins. By measuring the phase shift of nuclear spins and integrating velocity across the lumen of the vessel, we can determine the volume flow rate. The accuracy and precision of volume flow rate measurements obtained using the phase contrast technique are studied by computer simulations and experiments. The various factors studied include (1) the partial volume effect due to voxel dimensions and slice thickness relative to the vessel dimensions; (2) vessel angulation relative to the imaging plane; (3) intravoxel phase dispersion; and (4) flow velocity relative to the magnitude of the flow encoding gradient. The partial volume effect is demonstrated to be the major obstacle to obtaining accurate flow measurements for both laminar and plug flow. Laminar flow can be measured more accurately than plug flow under the same conditions. Both the experiment and simulation results for laminar flow show that, to obtain an accuracy of volume flow rate measurements to within 10%, at least 16 voxels are needed to cover the vessel lumen. The accuracy of flow measurements depends strongly on the relative intensity of the signal from stationary tissues. A correction method is proposed to compensate for the partial volume effect. The correction method is based on a small phase shift approximation. After the correction, the errors due to the partial volume effect are compensated, allowing more accurate results to be obtained. An automatic program based on the correction method was developed and implemented on a Sun workstation. The correction method is applied to the simulation and experiment results.

  5. Analysis of Current Position Determination Accuracy in Natural Resources Canada Precise Point Positioning Service

    NASA Astrophysics Data System (ADS)

Krzan, Grzegorz; Dawidowicz, Karol; Świątek, Krzysztof

    2013-09-01

Precise Point Positioning (PPP) is a technique used to determine a high-precision position with a single GNSS receiver. Unlike DGPS or RTK, satellite observations conducted by the PPP technique are not differenced; therefore, they require that parameter models be used in data processing, such as satellite clock and orbit corrections. Apart from explaining the theory of the PPP technique, this paper describes the available web-based online services used in the post-processing of observation results. As an experiment, the results obtained by post-processing satellite observations at three points with different environmental conditions, using the CSRS-PPP service, are presented. This study examines the effect of the duration of the measurement session on the results and compares the results obtained by processing observations made by the GPS system with combined observations from GPS and GLONASS. It also presents an analysis of the position determination accuracy using one and two measurement frequencies.

  6. Precision and accuracy of regional radioactivity quantitation using the maximum likelihood EM reconstruction algorithm

    SciTech Connect

    Carson, R.E.; Yan, Y.; Chodkowski, B.; Yap, T.K.; Daube-Witherspoon, M.E. )

    1994-09-01

The imaging characteristics of maximum likelihood (ML) reconstruction using the EM algorithm for emission tomography have been extensively evaluated. There has been less study of the precision and accuracy of ML estimates of regional radioactivity concentration. The authors developed a realistic brain slice simulation by segmenting a normal subject's MRI scan into gray matter, white matter, and CSF and produced PET sinogram data with a model that included detector resolution and efficiencies, attenuation, scatter, and randoms. Noisy realizations at different count levels were created, and ML and filtered backprojection (FBP) reconstructions were performed. The bias and variability of ROI values were determined. In addition, the effects of ML pixel size, image smoothing and region size reduction were assessed. ML estimates at 1,000 iterations (0.6 sec per iteration on a parallel computer) for 1-cm² gray matter ROIs showed negative biases of 6% ± 2%, which can be reduced to 0% ± 3% by removing the outer 1-mm rim of each ROI. FBP applied to the full-size ROIs had 15% ± 4% negative bias with 50% less noise than ML. Shrinking the FBP regions provided partial bias compensation with noise increases to levels similar to ML. Smoothing of ML images produced biases comparable to FBP with slightly less noise. Because of its heavy computational requirements, the ML algorithm will be most useful for applications in which achieving minimum bias is important.
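The ML-EM reconstruction evaluated in this record uses the multiplicative update x <- x * A^T(y / Ax) / A^T 1, which keeps the image nonnegative at every iteration. A toy sketch on a random 8 x 4 system matrix, which is not a model of any real scanner geometry:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.uniform(0.1, 1.0, size=(8, 4))    # detection probabilities (8 bins, 4 pixels)
x_true = np.array([5.0, 1.0, 3.0, 2.0])   # "radioactivity" per pixel
y = A @ x_true                             # noise-free projection data

x = np.ones(4)                             # positive initial image
sensitivity = A.T @ np.ones(len(y))        # sensitivity image A^T 1
for _ in range(2000):
    x *= (A.T @ (y / (A @ x))) / sensitivity  # multiplicative EM update

print(np.round(x, 3))  # converges toward x_true; stays nonnegative by design
```

With noisy data the same iteration progressively fits the noise, which is why the record's iteration count, pixel size, and smoothing choices all matter to the bias/variance trade-off.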

  7. Reproducibility and accuracy of body composition assessments in mice by dual energy x-ray absorptiometry and time domain nuclear magnetic resonance.

    PubMed

    Halldorsdottir, Solveig; Carmody, Jill; Boozer, Carol N; Leduc, Charles A; Leibel, Rudolph L

    2009-01-01

OBJECTIVE: To assess the accuracy and reproducibility of dual-energy x-ray absorptiometry (DXA; PIXImus(™)) and time domain nuclear magnetic resonance (TD-NMR; Bruker Optics) for the measurement of body composition of lean and obese mice. SUBJECTS AND MEASUREMENTS: Thirty lean and obese mice (body weight range 19-67 g) were studied. Coefficients of variation for repeated (x 4) DXA and NMR scans of mice were calculated to assess reproducibility. Accuracy was assessed by comparing DXA and NMR results of ten mice to chemical carcass analyses. Accuracy of the respective techniques was also assessed by comparing DXA and NMR results obtained with ground meat samples to chemical analyses. Repeated scans of 10-25 gram samples were performed to test the sensitivity of the DXA and NMR methods to variation in sample mass. RESULTS: In mice, DXA and NMR reproducibility measures were similar for fat tissue mass (FTM) (DXA coefficient of variation [CV]=2.3%; NMR CV=2.8%) (P=0.47), while reproducibility of lean tissue mass (LTM) estimates was better for DXA (1.0%) than NMR (2.2%).

Regarding accuracy, in mice, DXA overestimated (vs chemical composition) LTM (+1.7 ± 1.3 g [SD], ~8%, P <0.001) as well as FTM (+2.0 ± 1.2 g, ~46%, P <0.001). NMR estimates of LTM and FTM were virtually identical to chemical composition analysis (LTM: -0.05 ± 0.5 g, ~0.2%, P =0.79; FTM: +0.02 ± 0.7 g, ~15%, P =0.93). DXA- and NMR-determined LTM and FTM measurements were highly correlated with the corresponding chemical analyses (r(2)=0.92 and r(2)=0.99 for DXA LTM and FTM, respectively; r(2)=0.99 and r(2)=0.99 for NMR LTM and FTM, respectively). Sample mass did not affect accuracy in assessing chemical composition of small ground meat samples by either DXA or NMR. CONCLUSION: DXA and NMR provide comparable levels of reproducibility in measurements of body composition in lean and obese mice. While DXA and NMR measures are highly correlated with chemical analysis measures, DXA consistently

  8. Reproducibility and accuracy of body composition assessments in mice by dual energy x-ray absorptiometry and time domain nuclear magnetic resonance

    PubMed Central

    Halldorsdottir, Solveig; Carmody, Jill; Boozer, Carol N.; Leduc, Charles A.; Leibel, Rudolph L.

    2011-01-01

Objective To assess the accuracy and reproducibility of dual-energy x-ray absorptiometry (DXA; PIXImus™) and time domain nuclear magnetic resonance (TD-NMR; Bruker Optics) for the measurement of body composition of lean and obese mice. Subjects and measurements Thirty lean and obese mice (body weight range 19–67 g) were studied. Coefficients of variation for repeated (x 4) DXA and NMR scans of mice were calculated to assess reproducibility. Accuracy was assessed by comparing DXA and NMR results of ten mice to chemical carcass analyses. Accuracy of the respective techniques was also assessed by comparing DXA and NMR results obtained with ground meat samples to chemical analyses. Repeated scans of 10–25 gram samples were performed to test the sensitivity of the DXA and NMR methods to variation in sample mass. Results In mice, DXA and NMR reproducibility measures were similar for fat tissue mass (FTM) (DXA coefficient of variation [CV]=2.3%; NMR CV=2.8%) (P=0.47), while reproducibility of lean tissue mass (LTM) estimates was better for DXA (1.0%) than NMR (2.2%).

    accuracy, in mice, DXA overestimated (vs chemical composition) LTM (+1.7 ± 1.3 g [SD], ~ 8%, P <0.001) as well as FTM (+2.0 ± 1.2 g, ~ 46%, P <0.001). NMR estimates of LTM and FTM were virtually identical to chemical composition analysis (LTM: −0.05 ± 0.5 g, ~0.2%, P =0.79) (FTM: +0.02 ± 0.7 g, ~15%, P =0.93). DXA- and NMR-determined LTM and FTM measurements were highly correlated with the corresponding chemical analyses (r2=0.92 and r2=0.99 for DXA LTM and FTM, respectively; r2=0.99 and r2=0.99 for NMR LTM and FTM, respectively). Sample mass did not affect accuracy in assessing chemical composition of small ground meat samples by either DXA or NMR. Conclusion DXA and NMR provide comparable levels of reproducibility in measurements of body composition in lean and obese mice. While DXA and NMR measures are highly correlated with chemical analysis measures, DXA consistently overestimates LTM
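    The reproducibility metric used above, the coefficient of variation of repeated scans, can be sketched in a few lines of Python. The scan values below are hypothetical, not taken from the study:

```python
import statistics

def coefficient_of_variation(values):
    """Percent CV of repeated scans: sample SD divided by the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical repeated (x4) fat tissue mass scans of one mouse, in grams
dxa_ftm = [6.1, 6.3, 6.2, 6.0]
cv = coefficient_of_variation(dxa_ftm)  # ≈ 2.1%
```

    A lower CV across repeated scans of the same animal corresponds to the better reproducibility reported for DXA LTM estimates.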

  9. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French space agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of the surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and it still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation and mission requirement of 13 cm radial orbit accuracy to an expected accuracy of about 1.5 cm with today's latest orbits will be discussed. The latest orbits, with 1.5 cm RMS radial accuracy, represent a significant improvement over the 2.0 cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  10. Accuracy and precision of the three-dimensional assessment of the facial surface using a 3-D laser scanner.

    PubMed

    Kovacs, L; Zimmermann, A; Brockmann, G; Baurecht, H; Schwenzer-Zimmerer, K; Papadopulos, N A; Papadopoulos, M A; Sader, R; Biemer, E; Zeilhofer, H F

    2006-06-01

    Three-dimensional (3-D) recording of the surface of the human body or anatomical areas has gained importance in many medical specialties. Thus, it is important to determine scanner precision and accuracy in defined medical applications and to establish standards for the recording procedure. Here we evaluated the precision and accuracy of 3-D assessment of the facial area with the Minolta Vivid 910 3D Laser Scanner. We also investigated the influence of factors related to the recording procedure and the processing of scanner data on final results. These factors include lighting, alignment of scanner and object, the examiner, and the software used to convert measurements into virtual images. To assess scanner accuracy, we compared scanner data to those obtained by manual measurements on a dummy. Less than 7% of all results with the scanner method were outside a range of error of 2 mm when compared to corresponding reference measurements. Accuracy, thus, proved to be good enough to satisfy requirements for numerous clinical applications. Moreover, the experiments completed with the dummy yielded valuable information for optimizing recording parameters for best results. Thus, under defined conditions, the precision and accuracy of surface models of the human face recorded with the Minolta Vivid 910 3D Scanner can presumably also be enhanced. Future studies will involve verification of our findings using test persons. The current findings indicate that the Minolta Vivid 910 3D Scanner might be used with benefit in medicine when recording the 3-D surface structures of the face.

  11. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of the oral scanner and subtractive RP technology were acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823
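    As a sketch of the Bland-Altman style agreement analysis described above, the following Python computes the bias (mean of paired differences) and 95% limits of agreement for two methods. The plaster/PUT measurement values are hypothetical:

```python
import statistics

def bland_altman(a, b):
    """Bias (mean of paired differences) and 95% limits of agreement
    between two measurement methods."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired arch measurements (mm): plaster model vs PUT model
bias, lower, upper = bland_altman([30.1, 42.3, 25.7, 38.9],
                                  [30.0, 42.1, 25.5, 38.6])  # bias ≈ 0.2 mm
```

    "Good agreement" in a Bland-Altman plot means the paired differences scatter tightly around the bias, within limits that are narrow relative to clinical tolerance.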

  12. Using statistics and software to maximize precision and accuracy in U-Pb geochronological measurements

    NASA Astrophysics Data System (ADS)

    McLean, N.; Bowring, J. F.; Bowring, S. A.

    2009-12-01

    Uncertainty in U-Pb geochronology results from a wide variety of factors, including isotope ratio determinations, common Pb corrections, initial daughter product disequilibria, instrumental mass fractionation, isotopic tracer calibration, and U decay constants and isotopic composition. The relative contribution of each depends on the proportion of radiogenic to common Pb, the measurement technique, and the quality of systematic error determinations. Random and systematic uncertainty contributions may be propagated into individual analyses or for an entire population, and must be propagated correctly to accurately interpret data. Tripoli and U-Pb_Redux comprise a new data reduction and error propagation software package that combines robust cycle measurement statistics with rigorous multivariate data analysis and presents the results graphically and interactively. Maximizing the precision and accuracy of a measurement begins with correct appraisal and codification of the systematic and random errors for each analysis. For instance, a large dataset of total procedural Pb blank analyses defines a multivariate normal distribution, describing the mean of and variation in isotopic composition (IC) that must be subtracted from each analysis. Uncertainty in the size and IC of each Pb blank is related to the (random) uncertainty in ratio measurements and the (systematic) uncertainty involved in tracer subtraction. Other sample and measurement parameters can be quantified in the same way, represented as statistical distributions that describe their uncertainty or variation, and are input into U-Pb_Redux as such before the raw sample isotope ratios are measured. During sample measurement, U-Pb_Redux and Tripoli can relay cycle data in real time, calculating a date and uncertainty for each new cycle or block. The results are presented in U-Pb_Redux as an interactive user interface with multiple visualization tools. One- and two-dimensional plots of each calculated date and
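    The separation of random and systematic uncertainty sources described above can be illustrated with a deliberately simplified blank correction in which independent error sources combine in quadrature. This is a minimal sketch with hypothetical masses and uncertainties; U-Pb_Redux's actual treatment is fully multivariate and far more elaborate:

```python
import math

def subtract_blank(measured, blank, sd_measured, sd_blank):
    """Blank-corrected quantity, with the (random) measurement and
    (systematic) blank uncertainties combined in quadrature; valid
    when the two error sources are independent."""
    corrected = measured - blank
    sd = math.sqrt(sd_measured**2 + sd_blank**2)
    return corrected, sd

# Hypothetical numbers: 10.0 ± 0.3 pg total Pb, 0.8 ± 0.2 pg procedural blank
value, sd = subtract_blank(10.0, 0.8, 0.3, 0.2)  # 9.2 pg, SD ≈ 0.36 pg
```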

  13. Accuracy and reproducibility of virtual cutting guides and 3D-navigation for osteotomies of the mandible and maxilla

    PubMed Central

    Bernstein, Jonathan M.; Daly, Michael J.; Chan, Harley; Qiu, Jimmy; Goldstein, David; Muhanna, Nidal; de Almeida, John R.; Irish, Jonathan C.

    2017-01-01

    Background We set out to determine the accuracy of 3D-navigated mandibular and maxillary osteotomies with the ultimate aim to integrate virtual cutting guides and 3D-navigation into ablative and reconstructive head and neck surgery. Methods Four surgeons (two attending, two clinical fellows) completed 224 unnavigated and 224 3D-navigated osteotomies on anatomical models according to preoperative 3D plans. The osteotomized bones were scanned and analyzed. Results Median distance from the virtual plan was 2.1 mm unnavigated (IQR 2.6 mm, ≥3 mm in 33%) and 1.2 mm 3D-navigated (IQR 1.1 mm, ≥3 mm in 6%) (P<0.0001); median pitch was 4.5° unnavigated (IQR 7.1°) and 3.5° 3D-navigated (IQR 4.0°) (P<0.0001); median roll was 7.4° unnavigated (IQR 8.5°) and 2.6° 3D-navigated (IQR 3.8°) (P<0.0001). Conclusion 3D-rendering enables osteotomy navigation. 3 mm is an appropriate planning distance. The next steps are translating virtual cutting guides to free bone flap reconstruction and clinical use. PMID:28249001

  14. Compensation of Environment and Motion Error for Accuracy Improvement of Ultra-Precision Lathe

    NASA Astrophysics Data System (ADS)

    Kwac, Lee-Ku; Kim, Jae-Yeol; Kim, Hong-Gun

    Technological manipulation of a piezo-electric actuator can compensate for machining errors during processing, leading to an overall enhancement of precision. This is a convenient way to advance precision without deep expertise in ultra-precision machining technology. Two lines of research were conducted to develop the UPCU, enhance the precision of the existing lathe, and compensate for environmental errors, as described below. The first was designed to measure deviations in a variety of areas and correct them in real time, building a compensation system around an optical fiber laser encoder more effective than the encoder resolution currently used in the existing lathe. The deviations corrected in real time were the surrounding air temperature, the thermal deviations of the machining materials, the thermal deviations in the spindles, and the overall thermal deviation arising from the machine structure. The second was to develop the UPCU and improve the machining precision through ultra-precision positioning and real-time operative error compensation. The ultimate goal was to improve the machining precision of the existing lathe by completing these two research tasks.

  15. Accuracy and precision of total mixed rations fed on commercial dairy farms.

    PubMed

    Sova, A D; LeBlanc, S J; McBride, B W; DeVries, T J

    2014-01-01

    Despite the significant time and effort spent formulating total mixed rations (TMR), it is evident that the ration delivered by the producer and that consumed by the cow may not accurately reflect that originally formulated. The objectives of this study were to (1) determine how TMR fed agrees with or differs from TMR formulation (accuracy), (2) determine daily variability in physical and chemical characteristics of TMR delivered (precision), and (3) investigate the relationship between daily variability in ration characteristics and group-average measures of productivity [dry matter intake (DMI), milk yield, milk components, efficiency, and feed sorting] on commercial dairy farms. Twenty-two commercial freestall herds were visited for 7 consecutive days in both summer and winter months. Fresh and refusal feed samples were collected daily to assess particle size distribution, dry matter, and chemical composition. Milk test data, including yield, fat, and protein, were collected from a coinciding Dairy Herd Improvement test. Multivariable mixed-effect regression models were used to analyze associations between productivity measures and daily ration variability, measured as coefficient of variation (CV) over 7 d. The average TMR [crude protein = 16.5%, net energy for lactation (NEL) = 1.7 Mcal/kg, nonfiber carbohydrates = 41.3%, total digestible nutrients = 73.3%, neutral detergent fiber = 31.3%, acid detergent fiber = 20.5%, Ca = 0.92%, P = 0.42%, Mg = 0.35%, K = 1.45%, Na = 0.41%] delivered exceeded TMR formulation for NEL (+0.05 Mcal/kg), nonfiber carbohydrates (+1.2%), acid detergent fiber (+0.7%), Ca (+0.08%), P (+0.02%), Mg (+0.02%), and K (+0.04%) and underfed crude protein (-0.4%), neutral detergent fiber (-0.6%), and Na (-0.1%). Dietary measures with high day-to-day CV were average feed refusal rate (CV = 74%), percent long particles (CV = 16%), percent medium particles (CV = 7.7%), percent short particles (CV = 6.1%), percent fine particles (CV = 13%), Ca (CV = 7

  16. Long-term accuracy and precision of PIXE and PIGE measurements for thin and thick sample analyses

    NASA Astrophysics Data System (ADS)

    Cohen, David D.; Siegele, Rainer; Orlic, Ivo; Stelcer, Ed

    2002-04-01

    This paper describes PIXE/PIGE measurements on thin Micromatter Standard (±5%) foils run over a period of 10 years. The selected foils were typically 50 μg/cm² thick and covered the commonly used PIXE X-ray energy range of 1.4-20 keV and the light elements F and Na for PIGE studies. For the thousands of thick obsidian and pottery samples analysed over a 6-year period, the Ohio Red Clay standard has been used for both PIXE and PIGE calibration of a range of elements from Li to Rb. For PIXE, the long-term accuracy could be as low as ±1.6% for major elements, with precision ranging from ±5% to ±10% depending on the elemental concentration. For PIGE, accuracies were around ±5%, with precision ranging from ±5% in thick samples to ±15% in thin samples or for low-yield γ-ray production.

  17. Using magnetic susceptibility to facilitate more rapid, reproducible and precise delineation of hydric soils in the midwestern USA

    USGS Publications Warehouse

    Grimley, D.A.; Arruda, N.K.; Bramstedt, M.W.

    2004-01-01

    Standard field indicators, currently used for hydric soil delineations [USDA-NRCS, 1998. Field indicators of hydric soils in the United States, Version 4.0. In: G.W. Hurt et al. (Ed.), United States Department of Agriculture-NRCS, Fort Worth, TX], are useful, but in some cases, they can be subjective, difficult to recognize, or time consuming to assess. Magnetic susceptibility (MS) measurements, acquired rapidly in the field with a portable meter, have great potential to help soil scientists delineate and map areas of hydric soils more precisely and objectively. At five sites in Illinois (from 5 to 15 ha in area) with contrasting soil types and glacial histories, the MS values of surface soils were measured along transects, and afterwards mapped and contoured. The MS values were found to be consistently higher in well-drained soils and lower in hydric soils, reflecting anaerobic deterioration of both detrital magnetite and soil-formed ferrimagnetics. At each site, volumetric MS values were statistically compared to field indicators to determine a critical MS value for hydric soil delineation. Such critical values range between 22×10⁻⁵ and 33×10⁻⁵ SI in silty loessal or alluvial soils in Illinois, but are as high as 61×10⁻⁵ SI at a site with fine sandy soil. A higher magnetite content and slower dissolution rate in sandy soils may explain the difference. Among sites with silty parent material, the lowest critical value (22×10⁻⁵ SI) occurs in soil with low pH (4.5-5.5) since acidic conditions are less favorable to ferrimagnetic mineral neoformation and enhance magnetite dissolution. Because of their sensitivity to parent material properties and soil pH, critical MS values must be determined on a site-specific basis. The MS of studied soil samples (0-5 cm depth) is mainly controlled by neoformed ultrafine ferrimagnetics and detrital magnetite concentrations, with a minor contribution from anthropogenic fly ash. Neoformed ferrimagnetics are present in all samples

  18. Quantifying Vegetation Change in Semiarid Environments: Precision and Accuracy of Spectral Mixture Analysis and the Normalized Difference Vegetation Index

    NASA Technical Reports Server (NTRS)

    Elmore, Andrew J.; Mustard, John F.; Manning, Sara J.; Elome, Andrew J.

    2000-01-01

    Because in situ techniques for determining vegetation abundance in semiarid regions are labor intensive, they usually are not feasible for regional analyses. Remotely sensed data provide the large spatial scale necessary, but their precision and accuracy in determining vegetation abundance and its change through time have not been quantitatively determined. In this paper, the precision and accuracy of two techniques, Spectral Mixture Analysis (SMA) and the Normalized Difference Vegetation Index (NDVI), applied to Landsat TM data are assessed quantitatively using high-precision in situ data. In Owens Valley, California, we have 6 years of continuous field data (1991-1996) for 33 sites acquired concurrently with six cloudless Landsat TM images. The multitemporal remotely sensed data were coregistered to within 1 pixel, radiometrically intercalibrated using temporally invariant surface features, and geolocated to within 30 m. These procedures facilitated the accurate location of field-monitoring sites within the remotely sensed data. Formal uncertainties in the registration, radiometric alignment, and modeling were determined. Results show that SMA absolute percent live cover (%LC) estimates are accurate to within ±4.0% LC and estimates of change in live cover have a precision of ±3.8% LC. Furthermore, even when applied to areas of low vegetation cover, the SMA approach correctly determined the sense of change (i.e., positive or negative) in 87% of the samples. SMA results are superior to NDVI, which, although correlated with live cover, is not a quantitative measure and showed the correct sense of change in only 67% of the samples.
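    For reference, the NDVI compared against SMA above is a simple normalized band ratio; a minimal sketch, with hypothetical reflectance values:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and red
    reflectance (Landsat TM bands 4 and 3)."""
    return (nir - red) / (nir + red)

# Hypothetical reflectances for a partly vegetated pixel
value = ndvi(0.40, 0.10)  # ≈ 0.6; denser green cover pushes NDVI toward 1
```

    Because NDVI is a ratio rather than an areal fraction, it correlates with live cover but, as the study notes, is not itself a quantitative measure of it.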

  19. Optimizing the accuracy and precision of the single-pulse Laue technique for synchrotron photo-crystallography

    PubMed Central

    Kamiński, Radosław; Graber, Timothy; Benedict, Jason B.; Henning, Robert; Chen, Yu-Sheng; Scheins, Stephan; Messerschmidt, Marc; Coppens, Philip

    2010-01-01

    The accuracy that can be achieved in single-pulse pump-probe Laue experiments is discussed. It is shown that, with careful tuning of the experimental conditions, a reproducibility of 3–4% in the intensity ratios of equivalent intensities obtained in different measurements can be achieved. The single-pulse experiments maximize the time resolution that can be achieved and, unlike stroboscopic techniques in which the pump-probe cycle is rapidly repeated, minimize the temperature increase due to the laser exposure of the sample. PMID:20567080

  20. Interproton distance determinations by NOE--surprising accuracy and precision in a rigid organic molecule.

    PubMed

    Butts, Craig P; Jones, Catharine R; Towers, Emma C; Flynn, Jennifer L; Appleby, Lara; Barron, Nicholas J

    2011-01-07

    The accuracy inherent in the measurement of interproton distances in small molecules by nuclear Overhauser enhancement (NOE) and rotational Overhauser enhancement (ROE) methods is investigated with the rigid model compound strychnine. The results suggest that interproton distances can be established with a remarkable level of accuracy, within a few percent of their true values, using a straightforward data analysis method if experiments are conducted under conditions that support the initial rate approximation. Dealing with deviations from these conditions, and other practical issues regarding these measurements, is also discussed.

  1. Meta-analysis of time perception and temporal processing in schizophrenia: Differential effects on precision and accuracy.

    PubMed

    Thoenes, Sven; Oberfeld, Daniel

    2017-03-29

    Numerous studies have reported that time perception and temporal processing are impaired in schizophrenia. In a meta-analytical review, we differentiate between time perception (judgments of time intervals) and basic temporal processing (e.g., judgments of temporal order) as well as between effects on accuracy (deviation of estimates from the veridical value) and precision (variability of judgments). In a meta-regression approach, we also included the specific tasks and the different time interval ranges as covariates. We considered 68 publications of the past 65 years, and meta-analyzed data from 957 patients with schizophrenia and 1060 healthy control participants. Independent of tasks and interval durations, our results demonstrate that time perception and basic temporal processing are less precise (more variable) in patients (Hedges' g>1.00), whereas effects of schizophrenia on accuracy of time perception are rather small and task-dependent. Our review also shows that several aspects, e.g., potential influences of medication, have not yet been investigated in sufficient detail. In conclusion, the results are in accordance with theoretical assumptions and the notion of a more variable internal clock in patients with schizophrenia, but not with a strong effect of schizophrenia on clock speed. The impairment of temporal precision, however, may also be clock-unspecific as part of a general cognitive deficiency in schizophrenia.
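    The effect size reported above, Hedges' g, is the pooled-SD standardized mean difference with a small-sample bias correction J; a minimal sketch, with hypothetical group statistics:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: standardized mean difference (m1 - m2) / pooled SD,
    multiplied by the small-sample bias correction J = 1 - 3/(4*df - 1)."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    j = 1 - 3 / (4 * df - 1)
    return j * (m1 - m2) / s_pooled

# Hypothetical groups: patients vs controls, 20 each, one pooled SD apart
g = hedges_g(1.0, 1.0, 20, 0.0, 1.0, 20)  # ≈ 0.98
```

    A g above 1.00, as reported for precision here, means the group means differ by more than one pooled standard deviation, a large effect by conventional benchmarks.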

  2. A high-precision Jacob's staff with improved spatial accuracy and laser sighting capability

    NASA Astrophysics Data System (ADS)

    Patacci, Marco

    2016-04-01

    A new Jacob's staff design incorporating a 3D positioning stage and a laser sighting stage is described. The first combines a compass and a circular spirit level on a movable bracket and the second introduces a laser able to slide vertically and rotate on a plane parallel to bedding. The new design allows greater precision in stratigraphic thickness measurement while restricting the cost and maintaining speed of measurement to levels similar to those of a traditional Jacob's staff. Greater precision is achieved as a result of: a) improved 3D positioning of the rod through the use of the integrated compass and spirit level holder; b) more accurate sighting of geological surfaces by tracing with height adjustable rotatable laser; c) reduced error when shifting the trace of the log laterally (i.e. away from the dip direction) within the trace of the laser plane, and d) improved measurement of bedding dip and direction necessary to orientate the Jacob's staff, using the rotatable laser. The new laser holder design can also be used to verify parallelism of a geological surface with structural dip by creating a visual planar datum in the field and thus allowing determination of surfaces which cut the bedding at an angle (e.g., clinoforms, levees, erosion surfaces, amalgamation surfaces, etc.). Stratigraphic thickness measurements and estimates of measurement uncertainty are valuable to many applications of sedimentology and stratigraphy at different scales (e.g., bed statistics, reconstruction of palaeotopographies, depositional processes at bed scale, architectural element analysis), especially when a quantitative approach is applied to the analysis of the data; the ability to collect larger data sets with improved precision will increase the quality of such studies.

  3. Event Clustering: Accuracy and Precision of Multiple Event Locations with Sparse Networks

    NASA Astrophysics Data System (ADS)

    Baldwin, T. K.; Wallace, T. C.

    2002-12-01

    In the last 15 years passive PASSCAL experiments have been fielded on every continent. Most of these deployments were designed to record teleseismic or large local seismic events to infer crustal and mantle structure. However, the deployments inevitably record small, local seismicity. Unfortunately, the configuration of the experiments is not optimal for location (typically the stations are arranged in linear arrays), and the seismicity is recorded at a very limited number of stations. The standard location procedure (Geiger's method) is severely limited without a detailed crustal model. A number of methods have been developed to improve relative location precision, including Joint Hypocenter Determination (JHD) and Progressive Multiple Event Location (PMEL). In this study we investigate the performance of PMEL for a very sparse network where there appears to be strong event clustering. CHARGE is a passive deployment of broadband seismometers in Chile and Argentina, with a primary focus of investigating the changes in dip along the descending Nazca Plate. The CHARGE stations recorded a large number of small, local events in 2000-2002. For this study events were selected from the northern profile (approximately along 30°S) in Chile. The events look similar, and appear to be clustered southeast of the city of La Serena. We performed three sets of experiments to investigate precision: (1) iterative Master Event Corrections to measure the scale length of clusters, (2) PMEL locations, and (3) PMEL locations using a cross-correlation to determine accurate relative phase timing. The analysis shows that for the PMEL experiment clusters must occupy an area of 600 km² for the results to be consistent. We will present a method to estimate the precision errors based on bootstrapping. Charge Team: S. Beck, G. Zandt, M. Anderson, H. Folsom, R. Fromm, T. Shearer, L. Wagner, and P. Alvarado (all University of Arizona), J. Campos, E. Kausel, and J. Paredes (all University of

  4. [Studies on the accuracy and precision of total serum cholesterol in regional interlaboratory trials (author's transl)].

    PubMed

    Hohenwallner, W; Sommer, R; Wimmer, E

    1976-01-02

    The between-run precision of the Liebermann-Burchard reaction modified by Watson was, in our laboratory, 2-3%; the within-run coefficient of variation was 1-2%. The between-run precision of the enzymatic test was 3-4%; the within-run coefficient of variation was 3%. The regression analysis of 92 serum specimens from patients gave y = -17.31 + 1.04x, with a correlation coefficient of r = 0.996. Interlaboratory trials of serum cholesterol were studied in the normal and pathological range. Lyophilized samples of serum prepared commercially and from fresh specimens from patients were analysed by the method of Liebermann-Burchard as well as by the enzymatic procedure. Acceptable results estimated by Liebermann-Burchard were obtained in the different laboratories after using a common standard of cholesterol. The coefficient of variation of the enzymatic test in the interlaboratory trial was higher in comparison to the Liebermann-Burchard reaction. Methodological difficulties of the Liebermann-Burchard reaction are discussed and compared with the specific, enzymatic assay.

  5. ACCURACY AND PRECISION OF A METHOD TO STUDY KINEMATICS OF THE TEMPOROMANDIBULAR JOINT: COMBINATION OF MOTION DATA AND CT IMAGING

    PubMed Central

    Baltali, Evre; Zhao, Kristin D.; Koff, Matthew F.; Keller, Eugene E.; An, Kai-Nan

    2008-01-01

    The purpose of the study was to test the precision and accuracy of a method used to track selected landmarks during motion of the temporomandibular joint (TMJ). A precision phantom device was constructed and relative motions between two rigid bodies on the phantom device were measured using optoelectronic (OE) and electromagnetic (EM) motion tracking devices. The motion recordings were also combined with a 3D CT image for each type of motion tracking system (EM+CT and OE+CT) to mimic methods used in previous studies. In the OE and EM data collections, specific landmarks on the rigid bodies were determined using digitization. In the EM+CT and OE+CT data sets, the landmark locations were obtained from the CT images. 3D linear distances and 3D curvilinear path distances were calculated for the points. The accuracy and precision for all 4 methods were evaluated (EM, OE, EM+CT and OE+CT). In addition, results were compared with and without the CT imaging (EM vs. EM+CT, OE vs. OE+CT). All systems overestimated the actual 3D curvilinear path lengths. All systems also underestimated the actual rotation values. The accuracy of all methods was within 0.5 mm for 3D curvilinear path calculations, 0.05 mm for 3D linear distance calculations, and 0.2° for rotation calculations. In addition, Bland-Altman plots for each configuration of the systems suggest that measurements obtained from either system are repeatable and comparable. PMID:18617178
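    The 3D curvilinear path distance evaluated above is the sum of straight-line segment lengths along the tracked landmark trajectory; a minimal sketch, with hypothetical landmark coordinates:

```python
import math

def path_length(points):
    """3D curvilinear path distance: sum of straight-line segment lengths
    along an ordered sequence of landmark positions (Python 3.8+ math.dist)."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# Hypothetical condylar landmark positions (mm) at three motion frames
track = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
length = path_length(track)  # 2.0 mm
```

    Because each chord slightly over- or under-samples the true curve and every tracking sample carries noise, summed path lengths tend to overestimate the true trajectory, consistent with the overestimation the study reports.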

  6. Performance characterization of precision micro robot using a machine vision system over the Internet for guaranteed positioning accuracy

    NASA Astrophysics Data System (ADS)

    Kwon, Yongjin; Chiou, Richard; Rauniar, Shreepud; Sosa, Horacio

    2005-11-01

    There is a missing link between a virtual development environment (e.g., a CAD/CAM-driven offline robotic programming) and the production requirements of the actual robotic workcell. Simulated robot path planning and generation of pick-and-place coordinate points will not exactly coincide with the robot performance due to lack of consideration of variations in individual robot repeatability and thermal expansion of robot linkages. This is especially important when robots are controlled and programmed remotely (e.g., through the Internet or Ethernet), since remote users have no physical contact with the robotic systems. The current technology in Internet-based manufacturing, limited to a web camera for live image transfer, has posed a significant challenge for robot task performance. Consequently, the calibration and accuracy quantification of robots critical to precision assembly have to be performed on-site, and the verification of robot positioning accuracy cannot be ascertained remotely. In the worst case, the remote users have to assume the robot performance envelope provided by the manufacturers, which may cause a potentially serious hazard of system crashes and damage to the parts and robot arms. Currently, there is no reliable methodology for remotely calibrating the robot performance. The objective of this research is, therefore, to advance the current state of the art in Internet-based control and monitoring technology, with a specific aim at accuracy calibration of a micro precision robotic system through the development of a novel methodology utilizing Ethernet-based smart image sensors and other advanced precision sensory control networks.

  7. Impact of improved models for precise orbits of altimetry satellites on the orbit accuracy and regional mean sea level trends

    NASA Astrophysics Data System (ADS)

    Rudenko, Sergei; Esselborn, Saskia; Dettmering, Denise; Schöne, Tilo; Neumayer, Karl-Hans

    2015-04-01

    Precise orbits of altimetry satellites are a prerequisite for investigations of global and regional sea level changes. We show the significant progress achieved in recent decades in modeling and determination of the orbits of altimetry satellites. This progress was made possible by improved knowledge of the Earth's gravity field obtained using CHAMP (CHAllenging Mini-Satellite Payload), GRACE (Gravity Recovery and Climate Experiment) and GOCE (Gravity field and steady-state Ocean Circulation Explorer) data, improved realizations of the terrestrial and celestial reference frames and of the transformations between these frames, improved modeling of ocean and solid Earth tides, improved accuracy of observations, and other effects. New precise orbits of the altimetry satellites ERS-1 (1991-1996), TOPEX/Poseidon (1992-2005), ERS-2 (1995-2006), Envisat (2002-2012) and Jason-1 (2002-2012) have recently been derived, for the time intervals given, within the DFG UHR-GravDat project and the ESA Climate Change Initiative Sea Level project using satellite laser ranging (SLR), Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), Precise Range And Range-Rate Equipment (PRARE) and altimetry single-satellite crossover data (different observation types were used for different satellites). We show the current state of the orbit accuracy and the improvements obtained in recent years. In particular, we demonstrate the impact of recently developed time-variable Earth gravity field models, improved tropospheric refraction models for DORIS observations, the latest release (05) of the atmosphere-ocean dealiasing product (AOD1B) and some other models on the orbit accuracy of these altimetry satellites and on regional mean sea level trends computed using these new orbit solutions.

  8. Note: electronic circuit for two-way time transfer via a single coaxial cable with picosecond accuracy and precision.

    PubMed

    Prochazka, Ivan; Kodet, Jan; Panek, Petr

    2012-11-01

    We have designed, constructed, and tested the overall performance of an electronic circuit for two-way time transfer between two timing devices over modest distances, with sub-picosecond precision and a systematic error of a few picoseconds. The circuit design allows time tagging of pulses of interest in parallel with the comparison of the time scales of the two timing devices. Its key timing parameters are: a temperature coefficient of delay below 100 fs/K, a timing stability (time deviation) better than 8 fs for averaging times from minutes to hours, sub-picosecond time transfer precision, and time transfer accuracy of a few picoseconds.
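    As a hedged illustration of the principle behind the circuit above (not the authors' implementation), the classic two-way time transfer computation cancels the cable delay, which is symmetric because both directions use the same coaxial cable. The function names and the example timestamps below are hypothetical:

    ```python
    def two_way_offset(t1, t2, t3, t4):
        """Clock offset of station B relative to A.

        t1: A sends a pulse (A's clock)    t2: B receives it (B's clock)
        t3: B sends a pulse (B's clock)    t4: A receives it (A's clock)
        Assumes identical propagation delay in both directions.
        """
        return ((t2 - t1) - (t4 - t3)) / 2.0

    def one_way_delay(t1, t2, t3, t4):
        """Cable delay under the same symmetry assumption."""
        return ((t2 - t1) + (t4 - t3)) / 2.0

    # Example: B runs 5 ns ahead of A, cable delay is 100 ns.
    # A sends at 0 (A clock), B receives at 105 (B clock);
    # B sends at 200 (B clock), A receives at 295 (A clock).
    offset = two_way_offset(0.0, 105.0, 200.0, 295.0)  # -> 5.0 ns
    delay = one_way_delay(0.0, 105.0, 200.0, 295.0)    # -> 100.0 ns
    ```

    Because the delay cancels in the offset estimate, the residual systematic error is set by asymmetries in the electronics rather than by the cable length, which is what the femtosecond-level stability figures above quantify.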

  9. A time projection chamber for high accuracy and precision fission cross-section measurements

    DOE PAGES

    Heffner, M.; Asner, D. M.; Baker, R. G.; ...

    2014-05-22

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC inducing fission in the target. The 4π acceptance for fission fragments and complete charged particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This study provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  10. A time projection chamber for high accuracy and precision fission cross-section measurements

    SciTech Connect

    Heffner, M.; Asner, D. M.; Baker, R. G.; Baker, J.; Barrett, S.; Brune, C.; Bundgaard, J.; Burgett, E.; Carter, D.; Cunningham, M.; Deaven, J.; Duke, D. L.; Greife, U.; Grimes, S.; Hager, U.; Hertel, N.; Hill, T.; Isenhower, D.; Jewell, K.; King, J.; Klay, J. L.; Kleinrath, V.; Kornilov, N.; Kudo, R.; Laptev, A. B.; Leonard, M.; Loveland, W.; Massey, T. N.; McGrath, C.; Meharchand, R.; Montoya, L.; Pickle, N.; Qu, H.; Riot, V.; Ruz, J.; Sangiorgio, S.; Seilhan, B.; Sharma, S.; Snyder, L.; Stave, S.; Tatishvili, G.; Thornton, R. T.; Tovesson, F.; Towell, D.; Towell, R. S.; Watson, S.; Wendt, B.; Wood, L.; Yao, L.

    2014-05-22

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC inducing fission in the target. The 4π acceptance for fission fragments and complete charged particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This study provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  11. A time projection chamber for high accuracy and precision fission cross-section measurements

    NASA Astrophysics Data System (ADS)

    Heffner, M.; Asner, D. M.; Baker, R. G.; Baker, J.; Barrett, S.; Brune, C.; Bundgaard, J.; Burgett, E.; Carter, D.; Cunningham, M.; Deaven, J.; Duke, D. L.; Greife, U.; Grimes, S.; Hager, U.; Hertel, N.; Hill, T.; Isenhower, D.; Jewell, K.; King, J.; Klay, J. L.; Kleinrath, V.; Kornilov, N.; Kudo, R.; Laptev, A. B.; Leonard, M.; Loveland, W.; Massey, T. N.; McGrath, C.; Meharchand, R.; Montoya, L.; Pickle, N.; Qu, H.; Riot, V.; Ruz, J.; Sangiorgio, S.; Seilhan, B.; Sharma, S.; Snyder, L.; Stave, S.; Tatishvili, G.; Thornton, R. T.; Tovesson, F.; Towell, D.; Towell, R. S.; Watson, S.; Wendt, B.; Wood, L.; Yao, L.

    2014-09-01

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC inducing fission in the target. The 4π acceptance for fission fragments and complete charged particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This paper provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  12. A Time Projection Chamber for High Accuracy and Precision Fission Cross-Section Measurements

    SciTech Connect

    T. Hill; K. Jewell; M. Heffner; D. Carter; M. Cunningham; V. Riot; J. Ruz; S. Sangiorgio; B. Seilhan; L. Snyder; D. M. Asner; S. Stave; G. Tatishvili; L. Wood; R. G. Baker; J. L. Klay; R. Kudo; S. Barrett; J. King; M. Leonard; W. Loveland; L. Yao; C. Brune; S. Grimes; N. Kornilov; T. N. Massey; J. Bundgaard; D. L. Duke; U. Greife; U. Hager; E. Burgett; J. Deaven; V. Kleinrath; C. McGrath; B. Wendt; N. Hertel; D. Isenhower; N. Pickle; H. Qu; S. Sharma; R. T. Thornton; D. Towell; R. S. Towell; S.

    2014-09-01

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC inducing fission in the target. The 4π acceptance for fission fragments and complete charged particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This paper provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  13. Accuracy and precision of the i-STAT portable clinical analyzer: an analytical point of view.

    PubMed

    Pidetcha, P; Ornvichian, S; Chalachiva, S

    2000-04-01

    The introduction of a new point-of-care testing (POCT) instrument into the market affects medical practice and laboratory services. The i-STAT is designed to speed up medical decision making, but its results must be reliable to ensure the quality of laboratory data. We therefore evaluated the performance of the i-STAT using the disposable EG7+ cartridge, which measures pH, pO2, pCO2 (blood gases), sodium, potassium (electrolytes), ionized calcium, and hematocrit from only 10 microliters of lithium-heparinized blood in 2 minutes. The results were compared with those obtained from routine methods and were found to be accurate, precise, and well correlated with the accepted methods used routinely in the laboratory.

  14. Factors controlling precision and accuracy in isotope-ratio-monitoring mass spectrometry

    NASA Technical Reports Server (NTRS)

    Merritt, D. A.; Hayes, J. M.

    1994-01-01

    The performance of systems in which picomole quantities of sample are mixed with a carrier gas and passed through an isotope-ratio mass spectrometer system was examined experimentally and theoretically. Two different mass spectrometers were used, both having electron-impact ion sources and Faraday cup collector systems. One had an accelerating potential of 10 kV and accepted 0.2 mL of He/min, producing, under those conditions, a maximum efficiency of 1 CO2 molecular ion collected per 700 molecules introduced. Comparable figures for the second instrument were 3 kV, 0.5 mL of He/min, and 14000 molecules/ion. Signal pathways were adjusted so that response times were <200 ms. Sample-related ion currents appeared as peaks with widths of 3-30 s. Isotope ratios were determined by comparison to signals produced by standard gases. In spite of rapid variations in signals, observed levels of performance were within a factor of 2 of shot-noise limits. For the 10-kV instrument, sample requirements for standard deviations of 0.1 and 0.5% were 45 and 1.7 pmol, respectively. Comparable requirements for the 3-kV instrument were 900 and 36 pmol. Drifts in instrumental characteristics were adequately neutralized when standards were observed at 20-min intervals. For the 10-kV instrument, computed isotopic compositions were independent of sample size and signal strength over the ranges examined. Nonlinearities of <0.04%/V were observed for the 3-kV system. Procedures for observation and subtraction of background ion currents were examined experimentally and theoretically. For sample/background ratios varying from >10 to 0.3, precision is expected and observed to decrease approximately 2-fold and to depend only weakly on the precision with which background ion currents have been measured.
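    The shot-noise limit invoked above can be sketched from counting statistics alone: the relative standard deviation of a ratio of two ion counts is set by the counts in each beam. The function name and the example counts below are illustrative assumptions, not figures from the paper:

    ```python
    import math

    def shot_noise_rsd(n_major, n_minor):
        """Shot-noise-limited relative standard deviation of an isotope
        ratio formed from n_major and n_minor collected ions (counting
        statistics only; comparing against a standard gas at least
        doubles the variance in practice)."""
        return math.sqrt(1.0 / n_major + 1.0 / n_minor)

    # Example: 1e8 ions in the major beam, 1e6 in the minor beam;
    # the minor beam dominates the uncertainty.
    rsd = shot_noise_rsd(1e8, 1e6)  # roughly 1e-3
    ```

    Because precision improves only as the square root of the number of ions collected, halving the target standard deviation requires roughly four times the sample, which is consistent with the steep scaling of the sample requirements quoted above.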

  15. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts up to several degrees in visual angle because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of the video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16 s) and large (∼4 min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of 0.1° differences in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task.
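    A minimal sketch of the "regressed out" correction described above, assuming a simple per-trace linear regression of gaze position on pupil size (the function name is hypothetical and this is not the authors' pipeline):

    ```python
    import numpy as np

    def regress_out_pupil(gaze, pupil):
        """Return the gaze trace with the component linearly predictable
        from pupil size removed, re-centered on the original mean gaze."""
        gaze = np.asarray(gaze, float)
        pupil = np.asarray(pupil, float)
        # Least-squares fit gaze ~ slope * pupil + intercept.
        slope, intercept = np.polyfit(pupil, gaze, 1)
        # Subtract the fitted pupil contribution; restore the mean so the
        # corrected trace stays in the original coordinate frame.
        return gaze - (slope * pupil + intercept) + gaze.mean()
    ```

    Any gaze variation that merely tracks pupil diameter is removed, while genuine eye movements uncorrelated with pupil size survive in the residual.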

  16. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment

    PubMed Central

    Vu, An T.; Phillips, Jeffrey S.; Kay, Kendrick; Phillips, Matthew E.; Johnson, Matthew R.; Shinkareva, Svetlana V.; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2017-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms. PMID:27686111

  17. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment.

    PubMed

    Vu, An T; Phillips, Jeffrey S; Kay, Kendrick; Phillips, Matthew E; Johnson, Matthew R; Shinkareva, Svetlana V; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2016-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms.

  18. Accuracy and precision of cone beam computed tomography in periodontal defects measurement (systematic review)

    PubMed Central

    Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny

    2016-01-01

    Systematic review of literature was made to assess the extent of accuracy of cone beam computed tomography (CBCT) as a tool for measurement of alveolar bone loss in periodontal defect. A systematic search of PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and criticized. The initial screening revealed 47 potentially relevant articles, of which only 14 have met the selection criteria; their CBCT average measurements error ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Under the limitation of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defect with a minimum reported mean measurements error of 0.19 ± 0.11 mm and a maximum reported mean measurements error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation whether over or underestimation. However, we should emphasize that the evidence to this data is not strong. PMID:27563194

  19. Onset-Duration Matching of Acoustic Stimuli Revisited: Conventional Arithmetic vs. Proposed Geometric Measures of Accuracy and Precision

    PubMed Central

    Friedrich, Björn; Heil, Peter

    2017-01-01

    Onsets of acoustic stimuli are salient transients and are relevant in humans for the perception of music and speech. Previous studies of onset-duration discrimination and matching focused on whether onsets are perceived categorically. In this study, we address two issues. First, we revisit onset-duration matching and measure, for 79 conditions, how accurately and precisely human listeners can adjust the onset duration of a comparison stimulus to subjectively match that of a standard stimulus. Second, we explore measures for quantifying performance in this and other matching tasks. The conventional measures of accuracy and precision are defined by arithmetic descriptive statistics and the Euclidean distance function on the real numbers. We propose novel measures based on geometric descriptive statistics and the log-ratio distance function, the Euclidean distance function on the positive-real numbers. Only these properly account for the fact that the magnitude of onset durations, like the magnitudes of most physical quantities, can attain only positive real values. The conventional (arithmetic) measures possess a convexity bias that yields errors that grow with the width of the distribution of matches. This convexity bias leads to misrepresentations of the constant error and could even imply the existence of perceptual illusions where none exist. This is not so for the proposed (geometric) measures. We collected up to 68 matches from a given listener for each condition (about 34,000 matches in total) and examined inter-listener variability and the effects of onset duration, plateau duration, sound level, carrier, and restriction of the range of adjustable comparison stimuli on measures of accuracy and precision. Results obtained with the conventional measures generally agree with those reported in the literature. The variance across listeners is highly heterogeneous for the conventional measures but is homogeneous for the proposed measures. 
Furthermore, the proposed

  20. Accuracy and reliability of multi-GNSS real-time precise positioning: GPS, GLONASS, BeiDou, and Galileo

    NASA Astrophysics Data System (ADS)

    Li, Xingxing; Ge, Maorong; Dai, Xiaolei; Ren, Xiaodong; Fritsche, Mathias; Wickert, Jens; Schuh, Harald

    2015-06-01

    In this contribution, we present a GPS+GLONASS+BeiDou+Galileo four-system model to fully exploit the observations of all four navigation satellite systems for real-time precise orbit determination, clock estimation and positioning. A rigorous multi-GNSS analysis is performed to achieve the best possible consistency by processing the observations from the different GNSS together in one common parameter estimation procedure. Meanwhile, an efficient multi-GNSS real-time precise positioning service system is designed and demonstrated using the multi-GNSS Experiment, BeiDou Experimental Tracking Network, and International GNSS Service networks, including stations all over the world. Statistical analysis of the 6-h predicted orbits shows that the radial and cross-track root mean square (RMS) values are smaller than 10 cm for BeiDou and Galileo, and smaller than 5 cm for both GLONASS and GPS satellites. The RMS values of the clock differences between real-time and batch-processed solutions are about 0.10 ns for GPS satellites, and 0.13, 0.13 and 0.14 ns for BeiDou, Galileo and GLONASS, respectively. Adding the BeiDou, Galileo and GLONASS systems to standard GPS-only processing reduces the convergence time by almost 70%, while the positioning accuracy improves by about 25%. Some outliers in the GPS-only solutions vanish when multi-GNSS observations are processed simultaneously. The availability and reliability of GPS precise positioning decrease dramatically as the elevation cutoff increases. However, the accuracy of multi-GNSS precise point positioning (PPP) is hardly degraded, and a few centimeters are still achievable in the horizontal components even with a 40° elevation cutoff. At 30° and 40° elevation cutoffs, the availability rates of the GPS-only solution drop significantly, to only around 70% and 40%, respectively. 
However, multi-GNSS PPP can provide precise position estimates continuously (availability rate is more than 99

  1. Accuracy and precision of equine gait event detection during walking with limb and trunk mounted inertial sensors.

    PubMed

    Olsen, Emil; Andersen, Pia Haubro; Pfau, Thilo

    2012-01-01

    The increased variation of temporal gait events in the presence of pathology makes these events good candidate features for objective diagnostic tests. We hypothesised that the gait events hoof-on/off and stance can be detected accurately and precisely using features from trunk- and distal-limb-mounted Inertial Measurement Units (IMUs). Four IMUs were mounted on the distal limbs and five IMUs were attached to the skin over the dorsal spinous processes at the withers, the fourth lumbar vertebra and the sacrum, as well as over the left and right tuber coxae. IMU data were synchronised to a force plate array and a motion capture system. Accuracy (bias) and precision (SD of bias) were calculated by comparing force plate and IMU timings for the gait events. Data were collected from seven horses. One hundred and twenty-three (123) front limb steps were analysed; hoof-on was detected with a bias (SD) of -7 (23) ms, hoof-off with 0.7 (37) ms and front limb stance with -0.02 (37) ms. A total of 119 hind limb steps were analysed; hoof-on was found with a bias (SD) of -4 (25) ms, hoof-off with 6 (21) ms and hind limb stance with 0.2 (28) ms. IMUs mounted on the distal limbs and sacrum can detect gait events accurately and precisely.
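    The agreement convention used above (accuracy = bias = mean timing difference; precision = SD of that difference) can be sketched as follows. The function name and the sign convention (IMU time minus force plate time, in ms) are assumptions for illustration:

    ```python
    import numpy as np

    def bias_and_precision(imu_ms, force_ms):
        """Accuracy (bias = mean of the paired timing differences) and
        precision (sample SD of those differences, n-1 denominator) of
        IMU event times against the force plate reference, both in ms."""
        d = np.asarray(imu_ms, float) - np.asarray(force_ms, float)
        return d.mean(), d.std(ddof=1)
    ```

    A bias near zero with a small SD, as reported for the hoof events above, means the IMU events are both centered on and tightly clustered around the force plate reference timings.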

  2. Accuracy and precision of free-energy calculations via molecular simulation

    NASA Astrophysics Data System (ADS)

    Lu, Nandou

    A quantitative characterization of the methodologies of free-energy perturbation (FEP) calculations is presented, and optimal implementation of the methods for reliable and efficient calculation is addressed. Some common misunderstandings in FEP calculations are corrected. The two opposite directions of an FEP calculation are uniquely defined as generalized insertion and generalized deletion, according to the entropy change along the perturbation direction. These two calculations are not symmetric; they produce free-energy results that differ systematically because the two directions differ in their ability to sample the important regions of phase space in a finite-length simulation. The FEP calculation errors are quantified by characterizing the simulation sampling process with probability density functions for the potential energy change. While the random error in the FEP calculation is analyzed with a probabilistic approach, the systematic error is characterized as the most-likely inaccuracy, modeled by considering the poor sampling of low-probability energy distribution tails. Our analysis shows that the entropy difference between the perturbation systems plays a key role in determining the reliability of FEP results, and that the perturbation should be carried out in the insertion direction to ensure good sampling and thus a reliable calculation. Easy-to-use heuristics are developed to estimate the simulation errors, as well as the simulation length that ensures a given accuracy level. The fundamental understanding obtained is then applied to the problem of multistage FEP optimization. We provide the first principle of optimal staging: for each substage FEP calculation, the higher-entropy system should be used as the reference to govern the sampling, i.e., the calculation should be conducted in the generalized insertion direction for each stage of the perturbation. To minimize the simulation error, intermediate states should be

  3. Minimally invasive measurement of cardiac output during surgery and critical care: a meta-analysis of accuracy and precision.

    PubMed

    Peyton, Philip J; Chong, Simon W

    2010-11-01

    When assessing the accuracy and precision of a new technique for cardiac output measurement, the commonly quoted criterion for acceptability of agreement with a reference standard is that the percentage error (95% limits of agreement/mean cardiac output) should be 30% or less. We reviewed published data on four different minimally invasive methods adapted for use during surgery and critical care: pulse contour techniques, esophageal Doppler, partial carbon dioxide rebreathing, and transthoracic bioimpedance, to assess their bias, precision, and percentage error in agreement with thermodilution. An English language literature search identified published papers since 2000 which examined the agreement in adult patients between bolus thermodilution and each method. For each method a meta-analysis was done using studies in which the first measurement point for each patient could be identified, to obtain a pooled mean bias, precision, and percentage error weighted according to the number of measurements in each study. Forty-seven studies were identified as suitable for inclusion: N studies, n measurements: mean weighted bias [precision, percentage error] were: pulse contour N = 24, n = 714: -0.00 l/min [1.22 l/min, 41.3%]; esophageal Doppler N = 2, n = 57: -0.77 l/min [1.07 l/min, 42.1%]; partial carbon dioxide rebreathing N = 8, n = 167: -0.05 l/min [1.12 l/min, 44.5%]; transthoracic bioimpedance N = 13, n = 435: -0.10 l/min [1.14 l/min, 42.9%]. None of the four methods has achieved agreement with bolus thermodilution which meets the expected 30% limits. The relevance in clinical practice of these arbitrary limits should be reassessed.
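    The 30% criterion above is the Bland-Altman-style percentage error: the half-width of the 95% limits of agreement expressed as a fraction of mean cardiac output. A hedged sketch for the simple case of one paired measurement per patient (the function name is assumed; a full meta-analysis also needs repeated-measures corrections):

    ```python
    import numpy as np

    def percentage_error(test_co, ref_co):
        """Percentage error = 100 * 1.96 * SD of the paired differences
        (the half-width of the 95% limits of agreement) divided by the
        mean cardiac output across both methods (l/min)."""
        test_co = np.asarray(test_co, float)
        ref_co = np.asarray(ref_co, float)
        diff = test_co - ref_co
        mean_co = np.concatenate([test_co, ref_co]).mean()
        return 100.0 * 1.96 * diff.std(ddof=1) / mean_co
    ```

    A method meets the commonly quoted criterion when this value is at or below 30%; the pooled values above (41-45%) all fail it, which is the review's central finding.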

  4. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    PubMed

    Wells, Emma; Wolfe, Marlene K; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary, however test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5-11. Volunteers found test strip easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration. 
Given the

  5. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions

    PubMed Central

    Wells, Emma; Wolfe, Marlene K.; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary, however test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4–19% error), then test strips (5.2–48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5–11. Volunteers found test strip easiest and titration hardest; costs per 100 tests were $14–37 for test strips and $33–609 for titration

  6. SU-E-J-03: Characterization of the Precision and Accuracy of a New, Preclinical, MRI-Guided Focused Ultrasound System for Image-Guided Interventions in Small-Bore, High-Field Magnets

    SciTech Connect

    Ellens, N; Farahani, K

    2015-06-15

    Purpose: MRI-guided focused ultrasound (MRgFUS) has many potential and realized applications including controlled heating and localized drug delivery. The development of many of these applications requires extensive preclinical work, much of it in small animal models. The goal of this study is to characterize the spatial targeting accuracy and reproducibility of a preclinical high field MRgFUS system for thermal ablation and drug delivery applications. Methods: The RK300 (FUS Instruments, Toronto, Canada) is a motorized, 2-axis FUS positioning system suitable for small bore (72 mm), high-field MRI systems. The accuracy of the system was assessed in three ways. First, the precision of the system was assessed by sonicating regular grids of 5 mm squares on polystyrene plates and comparing the resulting focal dimples to the intended pattern, thereby assessing the reproducibility and precision of the motion control alone. Second, the targeting accuracy was assessed by imaging a polystyrene plate with randomly drilled holes and replicating the hole pattern by sonicating the observed hole locations on intact polystyrene plates and comparing the results. Third, the practically realizable accuracy and precision were assessed by comparing the locations of transcranial, FUS-induced blood-brain-barrier disruption (BBBD) (observed through Gadolinium enhancement) to the intended targets in a retrospective analysis of animals sonicated for other experiments. Results: The evenly-spaced grids indicated that the precision was 0.11 ± 0.05 mm. When image-guidance was included by targeting random locations, the accuracy was 0.5 ± 0.2 mm. The effective accuracy in the four rodent brains assessed was 0.8 ± 0.6 mm. In all cases, the error appeared normally distributed (p<0.05) in both orthogonal axes, though the left/right error was systematically greater than the superior/inferior error. Conclusions: The targeting accuracy of this device is sub-millimeter, suitable for many

  7. Accuracy and precision of minimally-invasive cardiac output monitoring in children: a systematic review and meta-analysis.

    PubMed

    Suehiro, Koichi; Joosten, Alexandre; Murphy, Linda Suk-Ling; Desebbe, Olivier; Alexander, Brenton; Kim, Sang-Hyun; Cannesson, Maxime

    2016-10-01

Several minimally-invasive technologies are available for cardiac output (CO) measurement in children, but the accuracy and precision of these devices have not yet been evaluated in a systematic review and meta-analysis. We conducted a comprehensive search of the medical literature in PubMed, Cochrane Library of Clinical Trials, Scopus, and Web of Science, from inception to June 2014, assessing the accuracy and precision of all minimally-invasive CO monitoring systems used in children when compared with CO monitoring reference methods. Pooled mean bias, standard deviation, and mean percentage error of included studies were calculated using a random-effects model. The inter-study heterogeneity was also assessed using an I(2) statistic. A total of 20 studies (624 patients) were included. The overall random-effects pooled bias and mean percentage error were 0.13 ± 0.44 l min(-1) and 29.1%, respectively. Significant inter-study heterogeneity was detected (P < 0.0001, I(2) = 98.3%). In the sub-analysis by device, electrical cardiometry showed the smallest bias (-0.03 l min(-1)) and lowest percentage error (23.6%). Significant residual heterogeneity remained after conducting sensitivity and subgroup analyses based on the various study characteristics. By meta-regression analysis, we found no independent effects of study characteristics on the weighted mean difference between reference and tested methods. Although the pooled bias was small, the mean pooled percentage error was in the gray zone of clinical applicability. In the sub-group analysis, electrical cardiometry was the device that provided the most accurate measurement. However, high heterogeneity between studies was found, likely due to a wide range of study characteristics.
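The pooled bias and I(2) statistic described above follow the standard DerSimonian-Laird random-effects procedure; a minimal sketch in Python (the function name and the illustrative inputs are hypothetical, not the study's data):

```python
def dersimonian_laird(biases, variances):
    """Pool per-study mean biases with a DerSimonian-Laird random-effects model.

    biases: per-study mean bias (e.g. in l/min); variances: squared standard
    error of each study's bias. Returns (pooled_bias, I2_percent).
    """
    w = [1.0 / v for v in variances]                    # fixed-effect weights
    fixed = sum(wi * b for wi, b in zip(w, biases)) / sum(w)
    # Cochran's Q and the I^2 heterogeneity statistic
    q = sum(wi * (b - fixed) ** 2 for wi, b in zip(w, biases))
    k = len(biases)
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    # between-study variance tau^2 (DL moment estimator)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c) if c > 0 else 0.0
    w_re = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    return sum(wi * b for wi, b in zip(w_re, biases)) / sum(w_re), i2
```

With homogeneous inputs Q collapses to zero and the estimator reduces to the fixed-effect weighted mean, which is a quick sanity check on any implementation.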

  8. The Precision and Accuracy of Early Epoch of Reionization Foreground Models: Comparing MWA and PAPER 32-antenna Source Catalogs

    NASA Astrophysics Data System (ADS)

    Jacobs, Daniel C.; Bowman, Judd; Aguirre, James E.

    2013-05-01

    As observations of the Epoch of Reionization (EoR) in redshifted 21 cm emission begin, we assess the accuracy of the early catalog results from the Precision Array for Probing the Epoch of Reionization (PAPER) and the Murchison Wide-field Array (MWA). The MWA EoR approach derives much of its sensitivity from subtracting foregrounds to <1% precision, while the PAPER approach relies on the stability and symmetry of the primary beam. Both require an accurate flux calibration to set the amplitude of the measured power spectrum. The two instruments are very similar in resolution, sensitivity, sky coverage, and spectral range and have produced catalogs from nearly contemporaneous data. We use a Bayesian Markov Chain Monte Carlo fitting method to estimate that the two instruments are on the same flux scale to within 20% and find that the images are mostly in good agreement. We then investigate the source of the errors by comparing two overlapping MWA facets where we find that the differences are primarily related to an inaccurate model of the primary beam but also correlated errors in bright sources due to CLEAN. We conclude with suggestions for mitigating and better characterizing these effects.

  9. Precision and accuracy of manual water-level measurements taken in the Yucca Mountain area, Nye County, Nevada, 1988-90

    USGS Publications Warehouse

    Boucher, M.S.

    1994-01-01

Water-level measurements have been made in deep boreholes in the Yucca Mountain area, Nye County, Nevada, since 1983 in support of the U.S. Department of Energy's Yucca Mountain Project, which is an evaluation of the area to determine its suitability as a potential storage area for high-level nuclear waste. Water-level measurements were taken either manually, using various water-level measuring equipment such as steel tapes, or they were taken continuously, using automated data recorders and pressure transducers. This report presents precision range and accuracy data established for manual water-level measurements taken in the Yucca Mountain area, 1988-90. Precision and accuracy ranges were determined for all phases of the water-level measuring process, and overall accuracy ranges are presented. Precision ranges were determined for three steel tapes using a total of 462 data points. Mean precision ranges of these three tapes ranged from 0.014 foot to 0.026 foot. A mean precision range of 0.093 foot was calculated for the multiconductor cable, using 72 data points. Mean accuracy values were calculated on the basis of calibrations of the steel tapes and the multiconductor cable against a reference steel tape. The mean accuracy values of the steel tapes ranged from 0.053 foot (based on three data points) to 0.078 foot (based on six data points). The mean accuracy of the multiconductor cable was 0.15 foot, based on six data points. Overall accuracy of the water-level measurements was calculated by taking the square root of the sum of the squares of the individual accuracy values. Overall accuracy was calculated to be 0.36 foot for water-level measurements taken with steel tapes, without accounting for the inaccuracy of borehole deviations from vertical. An overall accuracy of 0.36 foot for measurements made with steel tapes is considered satisfactory for this project.
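The root-sum-of-squares combination used for the overall accuracy can be sketched as follows (the numeric components in the second example are hypothetical, not the report's values):

```python
import math

def overall_accuracy(components):
    """Combine independent accuracy components in quadrature:
    the square root of the sum of the squared individual values."""
    return math.sqrt(sum(c * c for c in components))

print(overall_accuracy([3.0, 4.0]))            # → 5.0 (classic 3-4-5 check)
print(round(overall_accuracy([0.3, 0.2]), 2))  # → 0.36 (hypothetical values, in feet)
```

Quadrature addition is appropriate only when the individual error sources are independent; correlated errors would add closer to linearly.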

  10. Bracketing method with certified reference materials for high precision and accuracy determination of trace cadmium in drinking water by Inductively Coupled Plasma - Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Ketrin, Rosi; Handayani, Eka Mardika; Komalasari, Isna

    2017-01-01

Precision and accuracy are the two key parameters for evaluating measurement results. They are associated with indeterminate and determinate error, respectively, both of which commonly arise in spectrometric methods such as Inductively Coupled Plasma - Mass Spectrometry (ICP-MS). These errors must be eliminated or suppressed to achieve high precision and accuracy. In this study, a bracketing method using two-point standard calibration was proposed to suppress the indeterminate error caused by instrumental drift, thereby improving precision, and was applied to the measurement of cadmium in drinking water samples. The certified reference material ERM CA011b (hard drinking water, UK, metals) was used to determine the determinate error, or measurement bias. When a bias is found, corrections are needed to obtain an accurate result. The results were compared with those obtained by an external calibration method.
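A two-point bracketing calibration of the kind proposed here can be sketched as follows (function names and numeric values are illustrative, not the paper's):

```python
def bracketing_concentration(sample_signal, std_lo, std_hi):
    """Two-point standard bracketing: linearly interpolate the sample
    concentration between a low and a high standard measured close in time
    to the sample, so that slow instrumental drift largely cancels.

    std_lo, std_hi: (concentration, signal) pairs bracketing the sample.
    """
    (c_lo, s_lo), (c_hi, s_hi) = std_lo, std_hi
    slope = (c_hi - c_lo) / (s_hi - s_lo)
    return c_lo + slope * (sample_signal - s_lo)

# a sample signal of 150 counts between standards at 1.0 and 2.0 µg/l
print(bracketing_concentration(150.0, (1.0, 100.0), (2.0, 200.0)))  # → 1.5
```

Because both standards are run immediately around the sample, any drift common to the three measurements cancels in the interpolation, which is the mechanism by which the method improves precision.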

  11. Toward High-precision Seismic Studies of White Dwarf Stars: Parametrization of the Core and Tests of Accuracy

    NASA Astrophysics Data System (ADS)

    Giammichele, N.; Charpinet, S.; Fontaine, G.; Brassard, P.

    2017-01-01

We present a prescription for parametrizing the chemical profile in the core of white dwarfs in light of the recent discovery that pulsation modes may sometimes be deeply confined in some cool pulsating white dwarfs. Such modes may be used as unique probes of the complicated chemical stratification that results from several processes that occurred in previous evolutionary phases of intermediate-mass stars. This effort is part of our ongoing quest for more credible and realistic seismic models of white dwarfs using static, parametrized equilibrium structures. Inspired by successful techniques developed in design optimization fields (such as aerodynamics), we exploit Akima splines for the tracing of the chemical profile of oxygen (carbon) in the core of a white dwarf model. A series of tests are then presented to better assess the precision and significance of the results that can be obtained in an asteroseismological context. We also show that the new parametrization passes an essential basic test, as it successfully reproduces the chemical stratification of a full evolutionary model.

  12. An evaluation of the accuracy and precision of a stand-alone submersible continuous ruminal pH measurement system.

    PubMed

    Penner, G B; Beauchemin, K A; Mutsvangwa, T

    2006-06-01

The objectives of this study were 1) to develop and evaluate the accuracy and precision of a new stand-alone submersible continuous ruminal pH measurement system called the Lethbridge Research Centre ruminal pH measurement system (LRCpH; Experiment 1); 2) to establish the accuracy and precision of a well-documented, previously used continuous indwelling ruminal pH system (CIpH) to ensure that the new system (LRCpH) was as accurate and precise as the previous system (CIpH; Experiment 2); and 3) to determine the required frequency of pH electrode standardization by comparing baseline millivolt readings of pH electrodes in pH buffers 4 and 7 after 0, 24, 48, and 72 h of ruminal incubation (Experiment 3). In Experiment 1, 6 pregnant Holstein heifers, 3 lactating, primiparous Holstein cows, and 2 Black Angus heifers were used. All experimental animals were fitted with permanent ruminal cannulas. In Experiment 2, the 3 cannulated, lactating, primiparous Holstein cows were used. In both experiments, ruminal pH was determined continuously using indwelling pH electrodes. Mean pH values were then compared with ruminal pH values obtained from spot samples of ruminal fluid (MANpH) taken at the same time. A correlation coefficient accounting for repeated measures was calculated, and the results were used to calculate the concordance correlation to examine the relationships between the LRCpH-derived values and MANpH, and the CIpH-derived values and MANpH. In Experiment 3, the 6 pregnant Holstein heifers were used along with 6 new submersible pH electrodes. In Experiments 1 and 2, the comparison of the LRCpH output (1- and 5-min averages) to MANpH had higher correlation coefficients after accounting for repeated measures (0.98 and 0.97 for 1- and 5-min averages, respectively) and concordance correlation coefficients (0.96 and 0.97 for 1- and 5-min averages, respectively) than the comparison of CIpH to MANpH (0.88 and 0.87, correlation coefficient and concordance
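The concordance correlation used in these comparisons is Lin's concordance correlation coefficient; a generic sketch, not the study's code, and ignoring the repeated-measures adjustment the authors applied:

```python
def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement
    series (e.g. logger-derived pH averages vs. manual spot-sample pH):
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (n-denominator) moments."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((xi - mx) ** 2 for xi in x) / n
    vy = sum((yi - my) ** 2 for yi in y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike a plain Pearson correlation, the CCC penalizes both location and scale shifts, so a constant offset between the two pH systems lowers it even when the traces are perfectly correlated.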

  13. Reproducibility blues.

    PubMed

    Pulverer, Bernd

    2015-11-12

    Research findings advance science only if they are significant, reliable and reproducible. Scientists and journals must publish robust data in a way that renders it optimally reproducible. Reproducibility has to be incentivized and supported by the research infrastructure but without dampening innovation.

  14. Single-frequency receivers as master permanent stations in GNSS networks: precision and accuracy of the positioning in mixed networks

    NASA Astrophysics Data System (ADS)

    Dabove, Paolo; Manzino, Ambrogio Maria

    2015-04-01

The use of GPS/GNSS instruments is common practice worldwide at both the commercial and academic research level. Over the last ten years, Continuous Operating Reference Station (CORS) networks have been established in order to extend precise positioning to more than 15 km from the master station. In this context, the Geomatics Research Group of DIATI at the Politecnico di Torino has carried out several experiments to evaluate the precision achievable with different GNSS receivers (geodetic and mass-market) and antennas when a CORS network is used. This work builds on that research, focusing in particular on the usefulness of single-frequency permanent stations for densifying the existing CORS networks, especially for monitoring purposes. Two different types of CORS network are available in Italy today: the so-called "regional network" and the "national network", with mean inter-station distances of about 25/30 km and 50/70 km, respectively. These distances are adequate for many applications (e.g. mobile mapping) when geodetic instruments are used, but become less so when mass-market instruments are used or when the inter-station distance between master and rover increases. In this context, some innovative GNSS networks were developed and tested, analyzing the performance of rover positioning in terms of quality, accuracy and reliability in both real-time and post-processing approaches. The use of single-frequency GNSS receivers imposes some limits: the baseline length is restricted, and the phase ambiguity must be fixed correctly both for the network and for the rover. These factors play a crucial role in reaching a positioning accuracy at the centimeter level or better in a short time and with high reliability. The goal of this work is to investigate the

  15. A comparative study of submicron particle sizing platforms: accuracy, precision and resolution analysis of polydisperse particle size distributions.

    PubMed

    Anderson, Will; Kozak, Darby; Coleman, Victoria A; Jämting, Åsa K; Trau, Matt

    2013-09-01

The particle size distribution (PSD) of a polydisperse or multimodal system can often be difficult to obtain due to the inherent limitations in established measurement techniques. For this reason, the resolution, accuracy and precision of three new and one established, commercially available and fundamentally different particle size analysis platforms were compared by measuring both individual and a mixed sample of monodisperse, sub-micron (220, 330, and 410 nm nominal modal size) polystyrene particles. The platforms compared were the qNano Tunable Resistive Pulse Sensor, the Nanosight LM10 Particle Tracking Analysis system, the CPS Instruments' UHR24000 Disc Centrifuge, and the routinely used Malvern Zetasizer Nano ZS Dynamic Light Scattering system. All measurements were subjected to a peak detection algorithm so that the detected particle populations could be compared to 'reference' Transmission Electron Microscope measurements of the individual particle samples. Only the Tunable Resistive Pulse Sensor and Disc Centrifuge platforms provided the resolution required to resolve all three particle populations present in the mixed 'multimodal' particle sample. In contrast, the light-scattering-based Particle Tracking Analysis and Dynamic Light Scattering platforms were only able to detect a single population of particles, corresponding to either the largest (410 nm) or smallest (220 nm) particles in the multimodal sample, respectively. When the particle sets were measured separately (monomodal), each platform was able to resolve and accurately obtain a mean particle size within 10% of the Transmission Electron Microscope reference values. However, the broadness of the PSD measured in the monomodal samples deviated greatly, with coefficients of variation being ~2-6-fold larger than the TEM measurements across all four platforms. The large variation in the PSDs obtained from these four fundamentally different platforms indicates that great care must still be taken in

  16. Standardization of Operator-Dependent Variables Affecting Precision and Accuracy of the Disk Diffusion Method for Antibiotic Susceptibility Testing

    PubMed Central

    Maurer, Florian P.; Pfiffner, Tamara; Böttger, Erik C.; Furrer, Reinhard

    2015-01-01

Parameters like zone reading, inoculum density, and plate streaking influence the precision and accuracy of disk diffusion antibiotic susceptibility testing (AST). While improved reading precision has been demonstrated using automated imaging systems, standardization of the inoculum and of plate streaking have not yet been systematically investigated. This study analyzed whether photometrically controlled inoculum preparation and/or automated inoculation could further improve the standardization of disk diffusion. Suspensions of Escherichia coli ATCC 25922 and Staphylococcus aureus ATCC 29213 of 0.5 McFarland standard were prepared by 10 operators using both visual comparison to turbidity standards and a Densichek photometer (bioMérieux), and the resulting CFU counts were determined. Furthermore, eight experienced operators each inoculated 10 Mueller-Hinton agar plates from a single 0.5 McFarland standard bacterial suspension of E. coli ATCC 25922 using regular cotton swabs, dry flocked swabs (Copan, Brescia, Italy), or an automated streaking device (BD-Kiestra, Drachten, Netherlands). The mean CFU counts obtained from 0.5 McFarland standard E. coli ATCC 25922 suspensions were significantly different for suspensions prepared by eye and by Densichek (P < 0.001). Preparation by eye resulted in counts that were closer to the CLSI/EUCAST target of 10(8) CFU/ml than those resulting from Densichek preparation. No significant differences in the standard deviations of the CFU counts were observed. The interoperator differences in standard deviations when dry flocked swabs were used decreased significantly compared to the differences when regular cotton swabs were used, whereas the mean of the standard deviations of all operators together was not significantly altered. In contrast, automated streaking significantly reduced both interoperator differences, i.e., the individual standard deviations, compared to the standard deviations for the manual method, and the mean of the

  17. Precision and accuracy in the quantitative analysis of biological samples by accelerator mass spectrometry: application in microdose absolute bioavailability studies.

    PubMed

    Gao, Lan; Li, Jing; Kasserra, Claudia; Song, Qi; Arjomand, Ali; Hesk, David; Chowdhury, Swapan K

    2011-07-15

Determination of the pharmacokinetics and absolute bioavailability of an experimental compound, SCH 900518, following a 89.7 nCi (100 μg) intravenous (iv) dose of (14)C-SCH 900518 given 2 h after oral administration of 200 mg of nonradiolabeled SCH 900518 to six healthy male subjects is described. The plasma concentration of SCH 900518 was measured using a validated LC-MS/MS system, and accelerator mass spectrometry (AMS) was used for quantitative plasma (14)C-SCH 900518 concentration determination. Calibration standards and quality controls were included in every batch of sample analysis by AMS to ensure acceptable quality of the assay. Plasma (14)C-SCH 900518 concentrations were derived from the regression function established from the calibration standards, rather than directly from isotopic ratios from the AMS measurement. The precision and accuracy of quality controls and calibration standards met the requirements of bioanalytical guidance (U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Center for Veterinary Medicine. Guidance for Industry: Bioanalytical Method Validation (ucm070107), May 2001. http://www.fda.gov/downloads/Drugs/GuidanceCompilanceRegulatoryInformation/Guidances/ucm070107.pdf ). The AMS measurement had a linear response range from 0.0159 to 9.07 dpm/mL for plasma (14)C-SCH 900518 concentrations. The CV and accuracy were 3.4-8.5% and 94-108% (82-119% for the lower limit of quantitation (LLOQ)), respectively, with a correlation coefficient of 0.9998. The absolute bioavailability was calculated from the dose-normalized area under the curve of the iv and oral doses after the plasma concentrations were plotted vs the sampling time post oral dose. The mean absolute bioavailability of SCH 900518 was 40.8% (range 16.8-60.6%). The typical accuracy and standard deviation in AMS quantitative analysis of drugs from human plasma samples have been reported for the first time, and the impact of these

  18. Accuracy and precision of hind limb foot contact timings of horses determined using a pelvis-mounted inertial measurement unit.

    PubMed

    Starke, Sandra D; Witte, Thomas H; May, Stephen A; Pfau, Thilo

    2012-05-11

    Gait analysis using small sensor units is becoming increasingly popular in the clinical context. In order to segment continuous movement from a defined point of the stride cycle, knowledge about footfall timings is essential. We evaluated the accuracy and precision of foot contact timings of a defined limb determined using an inertial sensor mounted on the pelvis of ten horses during walk and trot at different speeds and in different directions. Foot contact was estimated from vertical velocity events occurring before maximum sensor roll towards the contralateral limb. Foot contact timings matched data from a synchronised hoof mounted accelerometer well when velocity minimum was used for walk (mean (SD) difference of 15 (18)ms across horses) and velocity zero-crossing for trot (mean (SD) difference from -4 (14) to 12 (7)ms depending on the condition). The stride segmentation method also remained robust when applied to movement data of hind limb lame horses. In future, this method may find application in segmenting overground sensor data of various species.
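The velocity events used above for stride segmentation can be sketched generically as follows (a simplified illustration; the paper defines the exact events relative to maximum sensor roll, which is not modeled here, and the crossing direction is an assumption):

```python
def local_minima(v):
    """Indices of local minima in a sampled velocity trace --
    the event class used as the walk foot-contact estimate."""
    return [i for i in range(1, len(v) - 1) if v[i - 1] > v[i] < v[i + 1]]

def upward_zero_crossings(v):
    """Indices where the trace crosses zero from negative to positive --
    one way to define the zero-crossing event used for trot."""
    return [i for i in range(1, len(v)) if v[i - 1] < 0.0 <= v[i]]

print(local_minima([3.0, 1.0, 2.0, 0.0, 5.0]))                   # → [1, 3]
print(upward_zero_crossings([-1.0, -0.5, 0.2, 1.0, -0.3, 0.4]))  # → [2, 5]
```

On real sensor data the trace would first be low-pass filtered, and candidate events would be gated to a window around the roll extremum, so that noise-induced minima and crossings are not mistaken for foot contacts.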

  19. A first investigation of accuracy, precision and sensitivity of phase-based x-ray dark-field imaging

    NASA Astrophysics Data System (ADS)

    Astolfo, Alberto; Endrizzi, Marco; Kallon, Gibril; Millard, Thomas P.; Vittoria, Fabio A.; Olivo, Alessandro

    2016-12-01

    In the last two decades, x-ray phase contrast imaging (XPCI) has attracted attention as a potentially significant improvement over widespread and established x-ray imaging. The key is its capability to access a new physical quantity (the ‘phase shift’), which can be complementary to x-ray absorption. One additional advantage of XPCI is its sensitivity to micro structural details through the refraction induced dark-field (DF). While DF is extensively mentioned and used for several applications, predicting the capability of an XPCI system to retrieve DF quantitatively is not straightforward. In this article, we evaluate the impact of different design options and algorithms on DF retrieval for the edge-illumination (EI) XPCI technique. Monte Carlo simulations, supported by experimental data, are used to measure the accuracy, precision and sensitivity of DF retrieval performed with several EI systems based on conventional x-ray sources. The introduced tools are easy to implement, and general enough to assess the DF performance of systems based on alternative (i.e. non-EI) XPCI approaches.

  20. Application of U-Pb ID-TIMS dating to the end-Triassic global crisis: testing the limits on precision and accuracy in a multidisciplinary whodunnit (Invited)

    NASA Astrophysics Data System (ADS)

    Schoene, B.; Schaltegger, U.; Guex, J.; Bartolini, A.

    2010-12-01

The ca. 201.4 Ma Triassic-Jurassic boundary is characterized by one of the most devastating mass-extinctions in Earth history, subsequent biologic radiation, rapid carbon cycle disturbances and enormous flood basalt volcanism (Central Atlantic Magmatic Province - CAMP). Considerable uncertainty remains regarding the temporal and causal relationship between these events, though this link is important for understanding global environmental change under extreme stresses. We present ID-TIMS U-Pb zircon geochronology on volcanic ash beds from two marine sections that span the Triassic-Jurassic boundary and from the CAMP in North America. To compare the timing of the extinction with the onset of the CAMP, we assess the precision and accuracy of ID-TIMS U-Pb zircon geochronology by exploring random and systematic uncertainties, reproducibility, open-system behavior, and pre-eruptive crystallization of zircon. We find that U-Pb ID-TIMS dates on single zircons can be internally and externally reproducible at 0.05% of the age, consistent with recent experiments coordinated through the EARTHTIME network. Increased precision combined with methods alleviating Pb-loss in zircon reveals that these ash beds contain zircon that crystallized between 10^5 and 10^6 years prior to eruption. Mineral dates older than eruption ages can affect all geochronologic methods, and therefore new tools exploring this form of “geologic uncertainty” will lead to better time constraints for ash bed deposition. In an effort to understand zircon dates within the framework of a magmatic system, we analyzed zircon trace elements by solution ICPMS for the same volume of zircon dated by ID-TIMS. In one example we argue that zircon trace element patterns as a function of time result from a mix of xeno-, ante-, and autocrystic zircons in the ash bed, and approximate eruption age with the youngest zircon date. In a contrasting example from a suite of Cretaceous andesites, zircon trace elements

  1. Evaluation of the accuracy and precision of four intraoral scanners with 70% reduced inlay and four-unit bridge models of international standard.

    PubMed

    Uhm, Soo-Hyuk; Kim, Jae-Hong; Jiang, Heng Bo; Woo, Chang-Woo; Chang, Minho; Kim, Kyoung-Nam; Bae, Ji-Myung; Oh, Seunghan

    2017-01-31

The aim of this study was to evaluate the feasibility of using 70%-reduced inlay and 4-unit bridge models based on the International Standard (ISO 12836), which assesses the accuracy of laboratory scanners, to measure the accuracy of intraoral scanners. Four intraoral scanners (CS3500, Trios, Omnicam, and Bluecam) and one laboratory scanner (Ceramill MAP400) were used in this study. The height, depth, length, and angle of the models were measured from thirty scanned stereolithography (STL) images. There were no statistically significant mean deviations in distance accuracy and precision values of the scanned images, except for the angulation values of the inlay and 4-unit bridge models. The relative errors of the inlay and 4-unit bridge models quantifying the accuracy and precision of the obtained mean deviations were less than 0.023 and 0.021, respectively. Thus, the inlay and 4-unit bridge models suggested by this study are expected to be feasible tools for testing intraoral scanners.

  2. An Examination of the Precision and Technical Accuracy of the First Wave of Group-Randomized Trials Funded by the Institute of Education Sciences

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Raudenbush, Stephen W.

    2009-01-01

    This article examines the power analyses for the first wave of group-randomized trials funded by the Institute of Education Sciences. Specifically, it assesses the precision and technical accuracy of the studies. The authors identified the appropriate experimental design and estimated the minimum detectable standardized effect size (MDES) for each…

  3. Deformable Image Registration for Adaptive Radiation Therapy of Head and Neck Cancer: Accuracy and Precision in the Presence of Tumor Changes

    SciTech Connect

    Mencarelli, Angelo; Kranen, Simon Robert van; Hamming-Vrieze, Olga; Beek, Suzanne van; Nico Rasch, Coenraad Robert; Herk, Marcel van; Sonke, Jan-Jakob

    2014-11-01

    Purpose: To compare deformable image registration (DIR) accuracy and precision for normal and tumor tissues in head and neck cancer patients during the course of radiation therapy (RT). Methods and Materials: Thirteen patients with oropharyngeal tumors, who underwent submucosal implantation of small gold markers (average 6, range 4-10) around the tumor and were treated with RT were retrospectively selected. Two observers identified 15 anatomical features (landmarks) representative of normal tissues in the planning computed tomography (pCT) scan and in weekly cone beam CTs (CBCTs). Gold markers were digitally removed after semiautomatic identification in pCTs and CBCTs. Subsequently, landmarks and gold markers on pCT were propagated to CBCTs, using a b-spline-based DIR and, for comparison, rigid registration (RR). To account for observer variability, the pair-wise difference analysis of variance method was applied. DIR accuracy (systematic error) and precision (random error) for landmarks and gold markers were quantified. Time trend of the precisions for RR and DIR over the weekly CBCTs were evaluated. Results: DIR accuracies were submillimeter and similar for normal and tumor tissue. DIR precision (1 SD) on the other hand was significantly different (P<.01), with 2.2 mm vector length in normal tissue versus 3.3 mm in tumor tissue. No significant time trend in DIR precision was found for normal tissue, whereas in tumor, DIR precision was significantly (P<.009) degraded during the course of treatment by 0.21 mm/week. Conclusions: DIR for tumor registration proved to be less precise than that for normal tissues due to limited contrast and complex non-elastic tumor response. Caution should therefore be exercised when applying DIR for tumor changes in adaptive procedures.

  4. Towards the GEOSAT Follow-On Precise Orbit Determination Goals of High Accuracy and Near-Real-Time Processing

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Zelensky, Nikita P.; Chinn, Douglas S.; Beckley, Brian D.; Lillibridge, John L.

    2006-01-01

The US Navy's GEOSAT Follow-On (GFO) spacecraft's primary mission objective is to map the oceans using a radar altimeter. Satellite laser ranging data, especially in combination with altimeter crossover data, offer the only means of determining high-quality precise orbits. Two tuned gravity models, PGS7727 and PGS7777b, were created at NASA GSFC for GFO that reduce the predicted radial orbit error through degree 70 to 13.7 and 10.0 mm, respectively. A macromodel was developed to model the nonconservative forces, and the SLR spacecraft measurement offset was adjusted to remove a mean bias. Using these improved models, satellite laser ranging data, altimeter crossover data, and Doppler data are used to compute daily medium-precision orbits with a latency of less than 24 hours. Final precise orbits are also computed using these tracking data and exported with a latency of three to four weeks to NOAA for use on the GFO Geophysical Data Records (GDRs). The estimated orbit precision of the daily orbits is between 10 and 20 cm, whereas the precise orbits have a precision of 5 cm.

  5. Accuracy And Precision Of Algorithms To Determine The Extent Of Aquatic Plants: Empirical Scaling Of Spectral Indices Vs. Spectral Unmixing

    NASA Astrophysics Data System (ADS)

    Cheruiyot, E.; Menenti, M.; Gorte, B.; Mito, C.; Koenders, R.

    2013-12-01

Assessing the accuracy of image classification results is an important but often neglected step. Accuracy information is necessary for assessing the reliability of map products; neglecting this step renders the products unusable. With a classified Landsat-7 TM image as reference, we assessed the accuracy of NDVI and linear spectral unmixing (LSU) in vegetation detection from 20 randomly selected MERIS sample pixels in the Winam Gulf section of Lake Victoria. We noted that, though easy to compute, empirical scaling of NDVI is not suitable for quantitative estimation of vegetation cover, as it is misleading and often omits useful information. LSU performed at 87% based on RMSE. For quick solutions, we propose the use of a conversion factor from NDVI to vegetation fractional abundance (FA). With this conversion, which is 96% reliable, the resulting FA values from our samples were classified at 84% accuracy, only 3% less than those computed directly using LSU.
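The NDVI computation and a linear conversion to fractional abundance can be sketched as follows (the endmember NDVI values and the clamping are illustrative assumptions; the paper derives its own, 96%-reliable conversion factor):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def fractional_abundance(ndvi_value, ndvi_soil=0.1, ndvi_veg=0.8):
    """Linearly scale NDVI between bare-soil and full-canopy endmembers
    (hypothetical endmember values) to a vegetation fraction in [0, 1]."""
    fa = (ndvi_value - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return min(1.0, max(0.0, fa))   # clamp to the physical range

print(fractional_abundance(0.8))   # → 1.0 (full canopy endmember)
print(fractional_abundance(0.1))   # → 0.0 (bare soil endmember)
```

Any linear NDVI-to-FA conversion of this kind is only as good as its endmember choices, which is consistent with the paper's caution about empirically scaled NDVI.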

  6. Validation Test Report for NFLUX PRE: Validation of Specific Humidity, Surface Air Temperature, and Wind Speed Precision and Accuracy for Assimilation into Global and Regional Models

    DTIC Science & Technology

    2014-04-02

The regional algorithm products overlay the existing global product estimate. The location of the observations is tested to see if it falls within one

  7. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
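    The accuracy (RMSE) and precision (SD) figures quoted in this record follow the standard definitions; a minimal sketch with hypothetical marker measurements (not the study's data) shows how the two quantities differ:

    ```python
    import numpy as np

    def rmse(measured, reference):
        """Accuracy: root-mean-square error of measurements vs. a reference."""
        d = np.asarray(measured, float) - np.asarray(reference, float)
        return float(np.sqrt(np.mean(d ** 2)))

    def precision_sd(measured):
        """Precision: sample standard deviation of repeated measurements."""
        return float(np.std(measured, ddof=1))

    # Hypothetical repeated measurements of a known 10.0 mm marker distance
    trials = [10.1, 9.9, 10.2, 9.8, 10.0]
    acc = rmse(trials, [10.0] * 5)   # error relative to the true value
    prec = precision_sd(trials)      # scatter of the repeats around their mean
    ```

    RMSE is referenced to the true value, while SD measures only scatter, so a biased but repeatable system can show a large RMSE with a small SD.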

  8. Accuracy and precision of a custom camera-based system for 2-d and 3-d motion tracking during speech and nonspeech motor tasks.

    PubMed

    Feng, Yongqiang; Max, Ludo

    2014-04-01

    PURPOSE Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and submillimeter accuracy. METHOD The authors examined the accuracy and precision of 2-D and 3-D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. RESULTS Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3- vs. 6-mm diameter) was negligible at all frame rates for both 2-D and 3-D data. CONCLUSION Motion tracking with consumer-grade digital cameras and the APAS software can achieve submillimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes.

  9. 40 CFR 80.584 - What are the precision and accuracy criteria for approval of test methods for determining the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... diesel fuel, and ECA marine fuel? 80.584 Section 80.584 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Sampling and Testing § 80... sulfur content of motor vehicle diesel fuel, NRLM diesel fuel, and ECA marine fuel? (a) Precision....

  10. Study of the Effect of Modes of Electroerosion Treatment on the Microstructure and Accuracy of Precision Sizes of Small Parts

    NASA Astrophysics Data System (ADS)

    Korobova, N. V.; Aksenenko, A. Yu.; Bashevskaya, O. S.; Nikitin, A. A.

    2016-01-01

    Results of a study of the effect of the parameters of electroerosion treatment in a GF Agie Charmilles CUT 1000 OilTech wire-cutting bench on the size accuracy, the quality of the surface layer of cuts, and the microstructure of the surface of the treated parts are presented.

  11. Assessment of the Precision and Reproducibility of Ventricular Volume, Function and Mass Measurements with Ferumoxytol-Enhanced 4D Flow MRI

    PubMed Central

    Hanneman, Kate; Kino, Aya; Cheng, Joseph Y; Alley, Marcus T; Vasanawala, Shreyas S

    2016-01-01

    Purpose To compare the precision and inter-observer agreement of ventricular volume, function and mass quantification by three-dimensional time-resolved (4D) flow MRI relative to cine steady state free precession (SSFP). Materials and Methods With research board approval, informed consent, and HIPAA compliance, 22 consecutive patients with congenital heart disease (CHD) (10 males, 6.4±4.8 years) referred for 3T ferumoxytol-enhanced cardiac MRI were prospectively recruited. Complete ventricular coverage with standard 2D short-axis cine SSFP and whole chest coverage with axial 4D flow were obtained. Two blinded radiologists independently segmented images for left ventricular (LV) and right ventricular (RV) myocardium at end systole (ES) and end diastole (ED). Statistical analysis included linear regression, ANOVA, Bland-Altman (BA) analysis, and intra-class correlation (ICC). Results Significant positive correlations were found between 4D flow and SSFP for ventricular volumes (r = 0.808–0.972, p<0.001), ejection fraction (EF) (r = 0.900–0.928, p<0.001), and mass (r = 0.884–0.934, p<0.001). BA relative limits of agreement for both ventricles ranged from −52% to 34% for volumes, −29% to 27% for EF, and −41% to 48% for mass, with wider limits of agreement for the RV compared to the LV. There was no significant difference between techniques with respect to mean square difference of ED-ES mass for either LV (F=2.05, p=0.159) or RV (F=0.625, p=0.434). Inter-observer agreement was moderate to good with both 4D flow (ICC 0.523–0.993) and SSFP (ICC 0.619–0.982), with overlapping confidence intervals. Conclusion Quantification of ventricular volume, function and mass can be accomplished with 4D flow MRI with precision and inter-observer agreement comparable to that of cine SSFP. PMID:26871420
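    Relative Bland-Altman limits of agreement, as reported here, are conventionally the mean of the pairwise percentage differences ± 1.96 SD. A brief sketch under that convention (the paper's exact computation is not specified, and the values below are hypothetical):

    ```python
    import numpy as np

    def bland_altman_relative(a, b):
        """Relative Bland-Altman limits of agreement between two methods.
        Each difference is expressed as a percentage of the pairwise mean;
        limits are mean difference +/- 1.96 sample SD."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        rel_diff = 100.0 * (a - b) / ((a + b) / 2.0)
        bias = rel_diff.mean()
        sd = rel_diff.std(ddof=1)
        return bias - 1.96 * sd, bias + 1.96 * sd

    # Hypothetical ventricular volumes (mL) from two methods
    lo, hi = bland_altman_relative([100, 110, 90, 105], [98, 112, 88, 107])
    ```

    Asymmetric limits (as in the record above, e.g. −52% to 34%) arise when the mean relative difference is non-zero.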

  12. Reproducibility in a multiprocessor system

    SciTech Connect

    Bellofatto, Ralph A; Chen, Dong; Coteus, Paul W; Eisley, Noel A; Gara, Alan; Gooding, Thomas M; Haring, Rudolf A; Heidelberger, Philip; Kopcsay, Gerard V; Liebsch, Thomas A; Ohmacht, Martin; Reed, Don D; Senger, Robert M; Steinmacher-Burow, Burkhard; Sugawara, Yutaka

    2013-11-26

    Fixing a problem is usually greatly aided if the problem is reproducible. To ensure reproducibility of a multiprocessor system, the following aspects are proposed: a deterministic system start state, a single system clock, phase alignment of clocks in the system, system-wide synchronization events, reproducible execution of system components, deterministic chip interfaces, zero-impact communication with the system, precise stop of the system, and a scan of the system state.

  13. Analysis of the accuracy and precision of the Axis-Shield Afinion hemoglobin A1c measurement device.

    PubMed

    Little, Randie R

    2012-03-01

    Point-of-care (POC) hemoglobin A1c measurement is now used by many physicians to make more timely decisions on therapy changes. A few studies have highlighted the drawbacks of some POC methods, e.g., poor precision and lot-to-lot variability. Evaluating performance in the clinical setting is difficult because there is minimal proficiency testing data on POC methods. In this issue of Journal of Diabetes Science and Technology, Wood and colleagues describe their experience with the Afinion method in a pediatric clinic network, comparing these results to another POC method as well as to a laboratory high-performance liquid chromatography method. Although they conclude that the Afinion exhibits adequate performance, they do not evaluate lot-to-lot variability. As with laboratory methods, potential assay interferences must also be considered.

  14. Quantitative Thin-Film X-ray Microanalysis by STEM/HAADF: Statistical Analysis for Precision and Accuracy Determination

    NASA Astrophysics Data System (ADS)

    Armigliato, Aldo; Balboni, Roberto; Rosa, Rodolfo

    2006-07-01

    Silicon-germanium thin films have been analyzed by EDS microanalysis in a field emission gun scanning transmission electron microscope (FEG-STEM) equipped with a high-angle annular dark-field detector (STEM/HAADF). Several spectra have been acquired in the same homogeneous area of the cross-sectioned sample by drift-corrected linescan acquisitions. The Ge concentrations and the local film thickness have been obtained by using a previously described Monte Carlo based “two tilt angles” method. Although the concentrations are in excellent agreement with the known values, the resulting confidence intervals are not as good as expected from the precision in beam positioning and tilt angle positioning and readout offered by our state-of-the-art microscope. The Gaussian shape of the SiKα and GeKα X-ray intensities allows one to use the parametric bootstrap method of statistics, whereby it becomes possible to perform the same quantitative analysis in sample regions of different compositions and thicknesses, but by doing only one measurement at the two angles.

  15. Leaf vein length per unit area is not intrinsically dependent on image magnification: avoiding measurement artifacts for accuracy and precision.

    PubMed

    Sack, Lawren; Caringella, Marissa; Scoffoni, Christine; Mason, Chase; Rawls, Michael; Markesteijn, Lars; Poorter, Lourens

    2014-10-01

    Leaf vein length per unit leaf area (VLA; also known as vein density) is an important determinant of water and sugar transport, photosynthetic function, and biomechanical support. A range of software methods are in use to visualize and measure vein systems in cleared leaf images; typically, users locate veins by digital tracing, but recent articles introduced software by which users can locate veins using thresholding (i.e., based on the contrast of veins in the image). Based on the use of this method, a recent study argued against the existence of a fixed VLA value for a given leaf, proposing instead that VLA increases with the magnification of the image due to intrinsic properties of the vein system, and recommended that future measurements use a common, low image magnification. We tested these claims with new measurements using the software LEAFGUI in comparison with digital tracing using ImageJ software. We found that the apparent increase of VLA with magnification was an artifact of (1) using low-quality and low-magnification images and (2) errors in the algorithms of LEAFGUI. Given the use of images of sufficient magnification and quality, and analysis with error-free software, the VLA can be measured precisely and accurately. These findings point to important principles for improving the quantity and quality of information gathered from leaf vein systems.

  16. Anthropometric precision and accuracy of digital three-dimensional photogrammetry: comparing the Genex and 3dMD imaging systems with one another and with direct anthropometry.

    PubMed

    Weinberg, Seth M; Naidoo, Sybill; Govier, Daniel P; Martin, Rick A; Kane, Alex A; Marazita, Mary L

    2006-05-01

    A variety of commercially available three-dimensional (3D) surface imaging systems are currently in use by craniofacial specialists. Little is known, however, about how measurement data generated from alternative 3D systems compare, specifically in terms of accuracy and precision. The purpose of this study was to compare anthropometric measurements obtained by way of two different digital 3D photogrammetry systems (Genex and 3dMD) as well as direct anthropometry and to evaluate intraobserver precision across these three methods. On a sample of 18 mannequin heads, 12 linear distances were measured twice by each method. A two-factor repeated measures analysis of variance was used to test simultaneously for mean differences in precision across methods. Additional descriptive statistics (e.g., technical error of measurement [TEM]) were used to quantify measurement error magnitude. Statistically significant (P < 0.05) mean differences were observed across methods for nine anthropometric variables; however, the magnitude of these differences was consistently at the submillimeter level. No significant differences were noted for precision. Moreover, the magnitude of imprecision was determined to be very small, with TEM scores well under 1 mm, and intraclass correlation coefficients ranging from 0.98 to 1. Results indicate that overall mean differences across these three methods were small enough to be of little practical importance. In terms of intraobserver precision, all methods fared equally well. This study is the first attempt to simultaneously compare 3D surface imaging systems directly with one another and with traditional anthropometry. Results suggest that craniofacial surface data obtained by way of alternative 3D photogrammetric systems can be combined or compared statistically.
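    The technical error of measurement (TEM) used in this record as an error-magnitude statistic has a standard closed form for paired repeat measurements. A brief sketch with hypothetical linear distances (not the study's mannequin data):

    ```python
    import math

    def technical_error_of_measurement(trial1, trial2):
        """Intra-observer TEM for duplicate measurements:
        TEM = sqrt(sum(d_i^2) / (2n)), where d_i is the difference between
        the two trials for subject i. Units match the measurements (mm)."""
        n = len(trial1)
        ss = sum((x - y) ** 2 for x, y in zip(trial1, trial2))
        return math.sqrt(ss / (2 * n))

    # Hypothetical repeated measurements of one facial distance (mm)
    t1 = [31.2, 45.0, 28.7, 52.1]
    t2 = [31.4, 44.8, 28.9, 52.0]
    tem = technical_error_of_measurement(t1, t2)
    ```

    A TEM well under 1 mm, as reported above, indicates that repeat measurements rarely differ by more than about a millimeter.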

  17. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.

    2004-01-01

    Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at http://croc.gsfc.nasa.gov/shadoz. In an analysis of ozonesonde imprecision within the SHADOZ dataset, Thompson et al. [JGR, 108, 8238, 2003] pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecision and accuracy in the SHADOZ dataset are examined in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSIE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing, and instrument type (manufacturer).

  18. Charts of operational process specifications ("OPSpecs charts") for assessing the precision, accuracy, and quality control needed to satisfy proficiency testing performance criteria.

    PubMed

    Westgard, J O

    1992-07-01

    "Operational process specifications" have been derived from an analytical quality-planning model to assess the precision, accuracy, and quality control (QC) needed to satisfy Proficiency Testing (PT) criteria. These routine operating specifications are presented in the form of an "OPSpecs chart," which describes the operational limits for imprecision and inaccuracy when a desired level of quality assurance is provided by a specific QC procedure. OPSpecs charts can be used to compare the operational limits for different QC procedures and to select a QC procedure that is appropriate for the precision and accuracy of a specific measurement procedure. To select a QC procedure, one plots the inaccuracy and imprecision observed for a measurement procedure on the OPSpecs chart to define the current operating point, which is then compared with the operational limits of candidate QC procedures. Any QC procedure whose operational limits are greater than the measurement procedure's operating point will provide a known assurance, with the percent chance specified by the OPSpecs chart, that critical analytical errors will be detected. OPSpecs charts for a 10% PT criterion are presented to illustrate the selection of QC procedures for measurement procedures with different amounts of imprecision and inaccuracy. Normalized OPSpecs charts are presented to permit a more general assessment of the analytical performance required with commonly used QC procedures.

  19. Precision and accuracy of ST-EDXRF performance for As determination comparing with ICP-MS and evaluation of As deviation in the soil media.

    PubMed

    Akbulut, Songul; Cevik, Ugur; Van, Aydın Ali; De Wael, Karolien; Van Grieken, Rene

    2014-02-01

    The present study was conducted to (i) determine the precision and accuracy of arsenic measurement in soil samples using ST-EDXRF by comparison with the results of ICP-MS analyses and (ii) identify the relationship of As concentration with soil characteristics. For the analysis of samples, inductively coupled plasma mass spectrometry (ICP-MS) and energy dispersive X-ray fluorescence spectrometry (EDXRF) were performed. According to the results found in the soil samples, the addition of HCl to the HNO3 used for digestion gave significant variations in the recovery of As. However, spectral interferences between peaks for As and Pb can affect detection limits and accuracy in XRF analysis. When comparing the XRF and ICP-MS results, a correlation was observed with R² = 0.8414. This means that, using an ST-EDXRF spectrometer, it is possible to achieve accurate and precise analysis through calibration with certified reference materials and the choice of an appropriate secondary target. On the other hand, with regard to soil characteristics analyses, the study highlighted that As is mostly anthropogenically enriched in the studied area.

  20. TanDEM-X IDEM precision and accuracy assessment based on a large assembly of differential GNSS measurements in Kruger National Park, South Africa

    NASA Astrophysics Data System (ADS)

    Baade, J.; Schmullius, C.

    2016-09-01

    High resolution Digital Elevation Models (DEM) represent fundamental data for a wide range of Earth surface process studies. Over the past years, the German TanDEM-X mission acquired data for a new, truly global Digital Elevation Model with unprecedented geometric resolution, precision and accuracy. First TanDEM Intermediate Digital Elevation Models (i.e. IDEM) with a geometric resolution from 0.4 to 3 arcsec were made available for scientific purposes in November 2014. This includes four 1° × 1° tiles covering the Kruger National Park in South Africa. Here, we document the results of a local scale IDEM height accuracy validation exercise utilizing over 10,000 RTK-GNSS-based ground survey points from fourteen sites characterized by mainly pristine Savanna vegetation. The vertical precision of the ground checkpoints is 0.02 m (1σ). Selected precursor data sets (SRTMGL1, SRTM41, ASTER-GDEM2) are included in the analysis to facilitate the comparison. Although IDEM represents an intermediate product on the way to the new global TanDEM-X DEM, expected to be released in late 2016, it allows first insight into the properties of the forthcoming product. Remarkably, the TanDEM-X tiles include a number of auxiliary files providing detailed information pertinent to a user-based quality assessment. We present examples for the utilization of this information in the framework of a local scale study including the identification of height readings contaminated by water. Furthermore, this study provides evidence for the high precision and accuracy of IDEM height readings and the sensitivity to canopy cover. For open terrain, the 0.4 arcsec resolution edition (IDEM04) yields an average bias of 0.20 ± 0.05 m (95% confidence interval, CI95), a RMSE = 1.03 m and an absolute vertical height error (LE90) of 1.5 [1.4, 1.7] m (CI95). The corresponding values for the lower resolution IDEM editions are about the same and provide evidence for the high quality of the IDEM products.
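    The bias, RMSE, and LE90 metrics reported for the IDEM validation can be computed from DEM-minus-GNSS height residuals. A sketch follows, using one common convention for LE90 (the 90th percentile of the absolute height error); the heights are hypothetical, and the authors' exact computation may differ:

    ```python
    import numpy as np

    def vertical_accuracy(dem_heights, gnss_heights):
        """Return (bias, RMSE, LE90) of DEM heights against GNSS checkpoints.
        LE90 is taken as the 90th percentile of the absolute height error."""
        err = np.asarray(dem_heights, float) - np.asarray(gnss_heights, float)
        bias = float(err.mean())
        rmse = float(np.sqrt(np.mean(err ** 2)))
        le90 = float(np.percentile(np.abs(err), 90))
        return bias, rmse, le90

    # Hypothetical checkpoint comparison (heights in metres)
    gnss = [100.0] * 10
    errors = [0.1, -0.2, 0.3, 0.5, -0.1, 0.2, 0.4, -0.3, 0.1, 0.6]
    dem = [g + e for g, e in zip(gnss, errors)]
    bias, rmse, le90 = vertical_accuracy(dem, gnss)
    ```

    Reporting all three together, as the record above does, separates systematic offset (bias) from overall scatter (RMSE) and tail behavior (LE90).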

  1. Evaluating precision and accuracy when quantifying different endogenous control reference genes in maize using real-time PCR.

    PubMed

    Scholdberg, Tandace A; Norden, Tim D; Nelson, Daishia D; Jenkins, G Ronald

    2009-04-08

    The agricultural biotechnology industry routinely utilizes real-time quantitative PCR (RT-qPCR) for the detection of biotechnology-derived traits in plant material, particularly for meeting the requirements of legislative mandates that rely upon the trace detection of DNA. Quantification via real-time RT-qPCR in plant species involves the measurement of the copy number of a taxon-specific, endogenous control gene exposed to the same manipulations as the target gene prior to amplification. The International Organization for Standardization (ISO 21570:2005) specifies that the copy number of an endogenous reference gene be used for normalizing the concentration (expressed as a % w/w) of a trait-specific target gene when using RT-qPCR. For this purpose, the copy number of a constitutively expressed endogenous reference gene in the same sample is routinely monitored. Real-time qPCR was employed to evaluate the predictability and performance of commonly used endogenous control genes (starch synthase, SSIIb-2, SSIIb-3; alcohol dehydrogenase, ADH; high-mobility group, HMG; zein; and invertase, IVR) used to detect biotechnology-derived traits in maize. The data revealed relatively accurate and precise amplification efficiencies when isogenic maize was compared to certified reference standards, but highly variable results when 23 nonisogenic maize cultivars were compared to an IRMM Bt-11 reference standard. Identifying the most suitable endogenous control gene, one that amplifies consistently and predictably across different maize cultivars, and implementing this as an internationally recognized standard would contribute toward harmonized testing of biotechnology-derived traits in maize.

  2. EFFECT OF RADIATION DOSE LEVEL ON ACCURACY AND PRECISION OF MANUAL SIZE MEASUREMENTS IN CHEST TOMOSYNTHESIS EVALUATED USING SIMULATED PULMONARY NODULES

    PubMed Central

    Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus

    2016-01-01

    The aim of the present study was to investigate the dependency of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease of the measurement accuracy and intraobserver variability was seen for the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels compared with the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, using a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, an increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding an applied general dose reduction for chest tomosynthesis examinations in clinical practice. PMID:26994093

  3. SU-E-J-147: Monte Carlo Study of the Precision and Accuracy of Proton CT Reconstructed Relative Stopping Power Maps

    SciTech Connect

    Dedes, G; Asano, Y; Parodi, K; Arbor, N; Dauvergne, D; Testa, E; Letang, J; Rit, S

    2015-06-15

    Purpose: The quantification of the intrinsic performances of proton computed tomography (pCT) as a modality for treatment planning in proton therapy. The performance of an ideal pCT scanner is studied as a function of various parameters. Methods: Using GATE/Geant4, we simulated an ideal pCT scanner and scans of several cylindrical phantoms with various tissue equivalent inserts of different sizes. Insert materials were selected in order to be of clinical relevance. Tomographic images were reconstructed using a filtered backprojection algorithm taking into account the scattering of protons into the phantom. To quantify the performance of the ideal pCT scanner, we study the precision and the accuracy with respect to the theoretical relative stopping power ratios (RSP) values for different beam energies, imaging doses, insert sizes and detector positions. The planning range uncertainty resulting from the reconstructed RSP is also assessed by comparison with the range of the protons in the analytically simulated phantoms. Results: The results indicate that pCT can intrinsically achieve RSP resolution below 1% for most examined tissues at beam energies below 300 MeV and for imaging doses around 1 mGy. RSP map accuracy of better than 0.5% is observed for most tissue types within the studied dose range (0.2–1.5 mGy). Finally, the uncertainty in the proton range due to the accuracy of the reconstructed RSP map is well below 1%. Conclusion: This work explores the intrinsic performance of pCT as an imaging modality for proton treatment planning. The obtained results show that under ideal conditions, 3D RSP maps can be reconstructed with an accuracy better than 1%. Hence, pCT is a promising candidate for reducing the range uncertainties introduced by the use of X-ray CT along with a semiempirical calibration to RSP. Supported by the DFG Cluster of Excellence Munich-Centre for Advanced Photonics (MAP)

  4. An evaluation of the accuracy and precision of methane prediction equations for beef cattle fed high-forage and high-grain diets.

    PubMed

    Escobar-Bahamondes, P; Oba, M; Beauchemin, K A

    2017-01-01

    The study determined the performance of equations to predict enteric methane (CH4) from beef cattle fed forage- and grain-based diets. Many equations are available to predict CH4 from beef cattle and the predictions vary substantially among equations. The aims were to (1) construct a database of CH4 emissions for beef cattle from published literature, and (2) identify the most precise and accurate extant CH4 prediction models for beef cattle fed diets varying in forage content. The database was comprised of treatment means of CH4 production from in vivo beef studies published from 2000 to 2015. Criteria to include data in the database were as follows: animal description, intakes, diet composition and CH4 production. In all, 54 published equations that predict CH4 production from diet composition were evaluated. Precision and accuracy of the equations were evaluated using the concordance correlation coefficient (r_c), root mean square prediction error (RMSPE), model efficiency and analysis of errors. Equations were ranked using a combined index of the various statistical assessments based on principal component analysis. The final database contained 53 studies and 207 treatment means that were divided into two data sets: diets containing ⩾400 g/kg dry matter (DM) forage (n=116) and diets containing ⩽200 g/kg DM forage (n=42). Diets containing between 200 and 400 g/kg DM forage were not included in the analysis because of their limited numbers (n=6). Outliers, treatment means where feed was fed restrictively and diets with CH4 mitigation additives were omitted (n=43). Using the high-forage dataset the best-fit equations were the International Panel on Climate Change Tier 2 method, 3 equations for steers that considered gross energy intake (GEI) and body weight and an equation that considered dry matter intake and starch:neutral detergent fiber with r_c ranging from 0.60 to 0.73 and RMSPE from 35.6 to 45.9 g/day. For the high-grain diets, the 5 best
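    The two headline statistics in this record, Lin's concordance correlation coefficient (r_c) and RMSPE, can be sketched directly from their definitions; the example values are hypothetical, not from the study's database:

    ```python
    import numpy as np

    def concordance_cc(pred, obs):
        """Lin's concordance correlation coefficient: agreement between
        predicted and observed values, penalizing both scatter and bias.
        Equals 1 only for perfect agreement on the 45-degree line."""
        pred, obs = np.asarray(pred, float), np.asarray(obs, float)
        mp, mo = pred.mean(), obs.mean()
        vp, vo = pred.var(), obs.var()              # population variances
        cov = ((pred - mp) * (obs - mo)).mean()
        return 2 * cov / (vp + vo + (mp - mo) ** 2)

    def rmspe(pred, obs):
        """Root mean square prediction error, in the units of the data."""
        pred, obs = np.asarray(pred, float), np.asarray(obs, float)
        return float(np.sqrt(np.mean((pred - obs) ** 2)))
    ```

    Unlike Pearson's r, r_c drops below 1 when predictions are systematically offset from observations even if they are perfectly correlated, which is why it suits equation-evaluation studies like this one.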

  5. In situ sulfur isotope analysis of sulfide minerals by SIMS: Precision and accuracy, with application to thermometry of ~3.5Ga Pilbara cherts

    USGS Publications Warehouse

    Kozdon, R.; Kita, N.T.; Huberty, J.M.; Fournelle, J.H.; Johnson, C.A.; Valley, J.W.

    2010-01-01

    Secondary ion mass spectrometry (SIMS) measurement of sulfur isotope ratios is a potentially powerful technique for in situ studies in many areas of Earth and planetary science. Tests were performed to evaluate the accuracy and precision of sulfur isotope analysis by SIMS in a set of seven well-characterized, isotopically homogeneous natural sulfide standards. The spot-to-spot and grain-to-grain precision for δ34S is ± 0.3‰ for chalcopyrite and pyrrhotite, and ± 0.2‰ for pyrite (2SD) using a 1.6 nA primary beam that was focused to 10 µm diameter with a Gaussian-beam density distribution. Likewise, multiple δ34S measurements within single grains of sphalerite are within ± 0.3‰. However, between individual sphalerite grains, δ34S varies by up to 3.4‰ and the grain-to-grain precision is poor (± 1.7‰, n = 20). Measured values of δ34S correspond with analysis pit microstructures, ranging from smooth surfaces for grains with high δ34S values, to pronounced ripples and terraces in analysis pits from grains featuring low δ34S values. Electron backscatter diffraction (EBSD) shows that individual sphalerite grains are single crystals, whereas crystal orientation varies from grain-to-grain. The 3.4‰ variation in measured δ34S between individual grains of sphalerite is attributed to changes in instrumental bias caused by different crystal orientations with respect to the incident primary Cs+ beam. High δ34S values in sphalerite correlate with orientations in which the Cs+ beam is parallel to the set of directions from [111] to [110], which are preferred directions for channeling and focusing in diamond-centered cubic crystals. Crystal orientation effects on instrumental bias were further detected in galena. However, as a result of the perfect cleavage along {100}, crushed chips of galena are typically cube-shaped and likely to be preferentially oriented, thus crystal orientation effects on instrumental bias may be obscured. Tests were made to improve the analytical

  6. An in-depth evaluation of accuracy and precision in Hg isotopic analysis via pneumatic nebulization and cold vapor generation multi-collector ICP-mass spectrometry.

    PubMed

    Rua-Ibarz, Ana; Bolea-Fernandez, Eduardo; Vanhaecke, Frank

    2016-01-01

    Mercury (Hg) isotopic analysis via multi-collector inductively coupled plasma (ICP)-mass spectrometry (MC-ICP-MS) can provide relevant biogeochemical information by revealing sources, pathways, and sinks of this highly toxic metal. In this work, the capabilities and limitations of two different sample introduction systems, based on pneumatic nebulization (PN) and cold vapor generation (CVG), respectively, were evaluated in the context of Hg isotopic analysis via MC-ICP-MS. The effect of (i) instrument settings and acquisition parameters, (ii) concentration of the analyte element (Hg) and of the internal standard (Tl), used for mass discrimination correction purposes, and (iii) different mass bias correction approaches on the accuracy and precision of Hg isotope ratio results was evaluated. The extent and stability of mass bias were assessed in a long-term study (18 months, n = 250), demonstrating a precision ≤0.006% relative standard deviation (RSD). CVG-MC-ICP-MS showed an approximately 20-fold enhancement in Hg signal intensity compared with PN-MC-ICP-MS. For CVG-MC-ICP-MS, the mass bias induced by instrumental mass discrimination was accurately corrected for by using either external correction in a sample-standard bracketing approach (SSB) or double correction, consisting of the use of Tl as internal standard in a revised version of the Russell law (Baxter approach), followed by SSB. Concomitant matrix elements did not affect CVG-ICP-MS results. Neither with PN, nor with CVG, was any evidence for mass-independent discrimination effects in the instrument observed within the experimental precision obtained. CVG-MC-ICP-MS was finally used for Hg isotopic analysis of reference materials (RMs) of relevant environmental origin. The isotopic composition of Hg in RMs of marine biological origin showed evidence of mass-independent fractionation affecting the odd-numbered Hg isotopes. While older RMs were used for validation purposes, novel Hg isotopic data are provided for the
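    Internal-standard mass-bias correction of the kind described here (Tl as internal standard, Russell/exponential law) derives a per-amu fractionation exponent from a ratio of known value and applies it to the analyte ratio. A minimal sketch of the exponential law only; the Baxter revision and SSB steps used in the paper are not reproduced, and the masses and ratios below are illustrative:

    ```python
    import math

    def mass_bias_factor(r_measured, r_true, m1, m2):
        """Fractionation exponent f from an isotope ratio of known value,
        under the exponential law: r_true = r_measured * (m1/m2)**f."""
        return math.log(r_true / r_measured) / math.log(m1 / m2)

    def correct_ratio(r_measured, f, m1, m2):
        """Apply the exponent derived from the internal standard (e.g. Tl)
        to correct a measured analyte ratio (e.g. a Hg isotope ratio)."""
        return r_measured * (m1 / m2) ** f

    # Illustrative: recover a known 205Tl/203Tl-style ratio of 1.0
    # from a biased measurement of 0.98 (numbers hypothetical)
    f = mass_bias_factor(0.98, 1.0, 205, 203)
    recovered = correct_ratio(0.98, f, 205, 203)
    ```

    In practice the exponent is derived from the internal standard and transferred to the analyte masses, under the assumption that both elements fractionate similarly in the plasma.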

  7. Accuracy and precision of 88Sr/86Sr and 87Sr/86Sr measurements by MC-ICPMS compromised by high barium concentrations

    NASA Astrophysics Data System (ADS)

    Scher, Howie D.; Griffith, Elizabeth M.; Buckley, Wayne P.

    2014-02-01

    Barite (BaSO4) is a widely distributed mineral that incorporates strontium (Sr) during formation. Mass-dependent fractionation of Sr isotopes occurs during abiotic precipitation of barite and formation of barite associated with biological processes (e.g., bacterial sulfide oxidation). Sr isotopes in barite can provide provenance information as well as potentially reconstruct sample formation conditions (e.g., saturation state, temperature, biotic versus abiotic). Incomplete separation of Ba from Sr has complicated measurements of Sr isotopes by MC-ICPMS. In this study, we tested the effects of Ba in Sr sample solutions and modified extraction chromatography of Sr using Eichrom Sr Spec (Eichrom Technologies LLC, USA) resin to enable rapid, accurate, and precise measurements of 88Sr/86Sr and 87Sr/86Sr ratios from Ba-rich matrices. Sr isotope ratios of sample solutions doped with Ba were statistically indistinguishable from Ba-free sample solutions below 1 ppm Ba. Deviations in both 87Sr/86Sr and δ88/86Sr occurred above 1 ppm Ba. An updated extraction chromatography method tested with barite and Ba-doped seawater produces Sr sample solutions containing 10-100 ppb levels of Ba. The practice of Zr spiking for external mass-discrimination correction of 88Sr/86Sr ratios was also evaluated, and it was confirmed that variable Zr levels do not have adverse effects on the accuracy and precision of 87Sr/86Sr ratios in the Zr concentration range required to produce accurate δ88/86Sr values.

  8. Functional limits of agreement applied as a novel method comparison tool for accuracy and precision of inertial measurement unit derived displacement of the distal limb in horses.

    PubMed

    Olsen, Emil; Pfau, Thilo; Ritz, Christian

    2013-09-03

    Overground motion analysis in horses is limited by the small number of strides that can be collected and by the constraints of the indoor gait laboratory. Inertial measurement units (IMUs) are transforming the knowledge of human motion and objective clinical assessment through the opportunity to obtain clinically relevant data under various conditions. When using IMUs on the limbs of horses to determine local position estimates, conditions with a high dynamic range of both accelerations and rotational velocities prove particularly challenging. Here we apply traditional method agreement analysis and propose a novel method based on functional data analysis to compare motion capture with IMUs placed over the fetlock joint in seven horses. We demonstrate acceptable accuracy and precision, at less than or equal to 5% of the range of motion, for detection of distal limb mounted cranio-caudal and vertical position. We do not recommend the use of the latero-medial position estimate of the distal metacarpus/metatarsus during walk, where the average error is 10% and the maximum error 111% of the range. We also show that functional data analysis and functional limits of agreement are sensitive methods for comparison of cyclical data and could be applied to differentiate changes in gait for individuals across time and conditions.
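    The classical Bland-Altman limits of agreement, of which the functional version above is a pointwise-in-time extension, can be sketched as follows. This is a pure-Python sketch with illustrative names; the paper's functional method is more elaborate than applying the classical formula at each time point.

    ```python
    import statistics

    def limits_of_agreement(a, b):
        """Classical Bland-Altman bias and 95% limits of agreement
        between paired measurements from two methods."""
        diffs = [x - y for x, y in zip(a, b)]
        bias = statistics.mean(diffs)
        sd = statistics.stdev(diffs)
        return bias - 1.96 * sd, bias, bias + 1.96 * sd

    def functional_loa(curves_a, curves_b):
        """Pointwise limits of agreement across time-normalized cycles:
        a sketch of the 'functional' idea, not the paper's exact model."""
        return [limits_of_agreement(ta, tb)
                for ta, tb in zip(zip(*curves_a), zip(*curves_b))]
    ```

    Each element of `functional_loa`'s result gives the agreement band at one normalized point of the stride cycle, so method disagreement can be localized in time rather than averaged away.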

  9. Performing elemental microanalysis with high accuracy and high precision by scanning electron microscopy/silicon drift detector energy-dispersive X-ray spectrometry (SEM/SDD-EDS).

    PubMed

    Newbury, Dale E; Ritchie, Nicholas W M

    Electron-excited X-ray microanalysis performed in the scanning electron microscope with energy-dispersive X-ray spectrometry (EDS) is a core technique for characterization of the microstructure of materials. The recent advances in EDS performance with the silicon drift detector (SDD) enable accuracy and precision equivalent to that of the high spectral resolution wavelength-dispersive spectrometer employed on the electron probe microanalyzer platform. SDD-EDS throughput, resolution, and stability provide practical operating conditions for measurement of high-count spectra that form the basis for peak fitting procedures that recover the characteristic peak intensities even for elemental combinations where severe peak overlaps occur, such as PbS, MoS2, BaTiO3, SrWO4, and WSi2. Accurate analyses are also demonstrated for interferences involving large concentration ratios: a major constituent on a minor constituent (Ba at 0.4299 mass fraction on Ti at 0.0180) and a major constituent on a trace constituent (Ba at 0.2194 on Ce at 0.00407; Si at 0.1145 on Ta at 0.0041). Accurate analyses of low atomic number elements, C, N, O, and F, are demonstrated. Measurement of trace constituents with limits of detection below 0.001 mass fraction (1000 ppm) is possible within a practical measurement time of 500 s.
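    The trace-constituent limits of detection quoted above are governed by counting statistics; a generic Currie-style estimate can be sketched as below. The counts and the simplified decision criterion are hypothetical illustrations, not the authors' exact procedure.

    ```python
    import math

    def detection_limit(c_known, peak_counts, bkg_counts):
        """Mass-fraction detection limit: the concentration whose net peak
        would equal 3.29 * sqrt(background counts), scaled from a measurement
        of a known concentration c_known under identical conditions."""
        net = peak_counts - bkg_counts
        return c_known * 3.29 * math.sqrt(bkg_counts) / net
    ```

    High-count spectra, which the SDD's throughput makes practical within a 500 s measurement, shrink the relative background fluctuation and hence push the detection limit downward.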

  10. Using Global Analysis to Extend the Accuracy and Precision of Binding Measurements with T cell Receptors and Their Peptide/MHC Ligands

    PubMed Central

    Blevins, Sydney J.; Baker, Brian M.

    2017-01-01

    In cellular immunity, clonally distributed T cell receptors (TCRs) engage complexes of peptides bound to major histocompatibility complex proteins (pMHCs). In the interactions of TCRs with pMHCs, regions of restricted and variable diversity align in a structurally complex fashion. Many studies have used mutagenesis to attempt to understand the “roles” played by various interface components in determining TCR recognition properties such as specificity and cross-reactivity. However, these measurements are often complicated or even compromised by the weak affinities TCRs maintain toward pMHC. Here, we demonstrate how global analysis of multiple datasets can be used to significantly extend the accuracy and precision of such TCR binding experiments. Application of this approach should positively impact efforts to understand TCR recognition and facilitate the creation of mutational databases to help engineer TCRs with tuned molecular recognition properties. We also show how global analysis can be used to analyze double mutant cycles in TCR-pMHC interfaces, which can lead to new insights into immune recognition. PMID:28197404

  11. High-Precision Surface Inspection: Uncertainty Evaluation within an Accuracy Range of 15μm with Triangulation-based Laser Line Scanners

    NASA Astrophysics Data System (ADS)

    Dupuis, Jan; Kuhlmann, Heiner

    2014-06-01

    Triangulation-based range sensors, e.g. laser line scanners, are used for high-precision geometrical acquisition of free-form surfaces, for reverse engineering tasks or quality management. In contrast to classical tactile measuring devices, these scanners generate a great amount of 3D points in a short period of time and enable the inspection of soft materials. However, for accurate measurements, a number of aspects have to be considered to minimize measurement uncertainties. This study outlines possible sources of uncertainty during the measurement process regarding scanner warm-up, the impact of laser power and exposure time, as well as the scanner's reaction to areas of discontinuity, e.g. edges. All experiments were performed using a fixed scanner position to avoid effects resulting from imaging geometry. The results show a significant dependence of measurement accuracy on the correct adaptation of exposure time as a function of surface reflectivity and laser power. Additionally, it is illustrated that surface structure as well as edges can cause significant systematic uncertainties.

  12. Contextual sensitivity in scientific reproducibility.

    PubMed

    Van Bavel, Jay J; Mende-Siedlecki, Peter; Brady, William J; Reinero, Diego A

    2016-06-07

    In recent years, scientists have paid increasing attention to reproducibility. For example, the Reproducibility Project, a large-scale replication attempt of 100 studies published in top psychology journals, found that only 39% could be unambiguously reproduced. There is a growing consensus among scientists that the lack of reproducibility in psychology and other fields stems from various methodological factors, including low statistical power, researcher degrees of freedom, and an emphasis on publishing surprising positive results. However, there is a contentious debate about the extent to which failures to reproduce certain results might also reflect contextual differences (often termed "hidden moderators") between the original research and the replication attempt. Although psychologists have found extensive evidence that contextual factors alter behavior, some have argued that context is unlikely to influence the results of direct replications precisely because these studies use the same methods as those used in the original research. To help resolve this debate, we recoded the 100 original studies from the Reproducibility Project on the extent to which the research topic of each study was contextually sensitive. Results suggested that the contextual sensitivity of the research topic was associated with replication success, even after statistically adjusting for several methodological characteristics (e.g., statistical power, effect size). The association between contextual sensitivity and replication success did not differ across psychological subdisciplines. These results suggest that researchers, replicators, and consumers should be mindful of contextual factors that might influence a psychological process. We offer several guidelines for dealing with contextual sensitivity in reproducibility.

  13. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    SciTech Connect

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-11-15

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landauer, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air or in a 16-cm (head) or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over the 28 datasets containing five measurements each was 1.4% ± 0.6% (range 0.6%-2.7%) for OSL and 0.08% ± 0.06% (range 0.02%-0.3%) for the ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber in 21/28 cases (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile.
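    The two summary statistics used in this abstract, percent standard deviation for precision and RMS percent difference for accuracy against the reference ion chamber, can be sketched generically (helper names are illustrative):

    ```python
    import math
    import statistics

    def percent_sd(values):
        """Precision as percent standard deviation of repeated measurements."""
        return 100 * statistics.stdev(values) / statistics.mean(values)

    def rms_percent_diff(test, ref):
        """Accuracy as root-mean-square percent difference from a reference."""
        pct = [100 * (t - r) / r for t, r in zip(test, ref)]
        return math.sqrt(sum(p * p for p in pct) / len(pct))
    ```

    Precision is computed within each repeated-measurement dataset, while the RMS percent difference pools paired OSL/ion-chamber results across conditions.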

  14. Results from a round-robin study assessing the precision and accuracy of LA-ICPMS U/Pb geochronology of zircon

    NASA Astrophysics Data System (ADS)

    Hanchar, J. M.

    2009-12-01

    A round-robin study was undertaken to assess the current state of precision and accuracy that can be achieved in LA-ICPMS U/Pb geochronology of zircon. The initial plan was to select abundant, well-characterized zircon samples to distribute to participants in the study. Three suitable samples were found, evaluated, and dated using ID-TIMS. Twenty-five laboratories in North America and Europe were asked to participate in the study. Eighteen laboratories agreed to participate, of which seventeen submitted final results. It was decided at the outset of the project that the identities of the participating researchers and laboratories would not be revealed until the manuscript stemming from the project was completed. Participants were sent either fragments of zircon crystals or whole zircon crystals, selected randomly after being thoroughly mixed. Participants were asked to conform to specific requirements. These included providing all analytical conditions and equipment used, submitting all data acquired, and submitting their preferred data and preferred ages for the three samples. The participating researchers used a wide range of analytical methods (e.g., instrumentation, data reduction, error propagation) for the LA-ICPMS U/Pb geochronology. These combined factors made direct comparison of the submitted results difficult. Most of the LA-ICPMS results submitted were within 2% of the ID-TIMS values for the three samples in the study. However, the error bars for the majority of the LA-ICPMS results for the three samples did not overlap with the ID-TIMS results. These results suggest a general underestimation of the errors calculated for the LA-ICPMS U/Pb zircon analyses.
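    The comparison implied above, whether each laboratory's quoted uncertainty interval overlaps the ID-TIMS reference value, is a simple interval check; a sketch with hypothetical ages and uncertainties:

    ```python
    def within_percent(value, ref, pct):
        """True if value lies within pct % of the reference value."""
        return abs(value - ref) <= pct / 100 * ref

    def error_bars_overlap(age, err, ref_age, ref_err):
        """True if the intervals value +/- err and ref +/- ref_err overlap."""
        return abs(age - ref_age) <= err + ref_err
    ```

    An accurate result with honestly propagated errors should pass both checks; the pattern reported above, results within 2% of ID-TIMS but with non-overlapping error bars, passes the first check while failing the second, which is what points to underestimated uncertainties.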

  15. The 1998-2000 SHADOZ (Southern Hemisphere ADditional OZonesondes) Tropical Ozone Climatology: Ozonesonde Precision, Accuracy and Station-to-Station Variability

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, Anne M.; McPeters, R. D.; Oltmans, S. J.; Schmidlin, F. J.; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    As part of the SAFARI-2000 campaign, additional launches of ozonesondes were made at Irene, South Africa and at Lusaka, Zambia. These represent campaign augmentations to the SHADOZ database described in this paper. This network of 10 southern hemisphere tropical and subtropical stations, designated the Southern Hemisphere ADditional OZonesondes (SHADOZ) project and established from operational sites, provided over 1000 profiles from ozonesondes and radiosondes during the period 1998-2000. (Since that time, two more stations, one in southern Africa, have joined SHADOZ.) Archived data are available at: http://code9l6.gsfc.nasa.gov/Data-services/shadoz. Uncertainties and accuracies within the SHADOZ ozone data set are evaluated by analyzing: (1) imprecisions in stratospheric ozone profiles and in methods of extrapolating ozone above balloon burst; (2) comparisons of column-integrated total ozone from sondes with total ozone from the Earth-Probe/TOMS (Total Ozone Mapping Spectrometer) satellite and ground-based instruments; (3) possible biases from station to station due to variations in ozonesonde characteristics. The key results are: (1) Ozonesonde precision is 5%; (2) Integrated total ozone column amounts from the sondes are in good agreement (2-10%) with independent measurements from ground-based instruments at five SHADOZ sites and with overpass measurements from the TOMS satellite (version 7 data). (3) Systematic variations in TOMS-sonde offsets and in ground-based-sonde offsets from station to station reflect biases in sonde technique as well as in satellite retrieval. Discrepancies are present in both stratospheric and tropospheric ozone. (4) There is evidence for a zonal wave-one pattern in total and tropospheric ozone, but not in stratospheric ozone.

  16. Two dimensional assisted liquid chromatography - a chemometric approach to improve accuracy and precision of quantitation in liquid chromatography using 2D separation, dual detectors, and multivariate curve resolution.

    PubMed

    Cook, Daniel W; Rutan, Sarah C; Stoll, Dwight R; Carr, Peter W

    2015-02-15

    Comprehensive two-dimensional liquid chromatography (LC×LC) is rapidly evolving as the preferred method for the analysis of complex biological samples owing to its much greater resolving power compared to conventional one-dimensional liquid chromatography (1D-LC). While its enhanced resolving power makes this method appealing, it has been shown that the precision of quantitation in LC×LC is generally not as good as that obtained with 1D-LC. The poorer quantitative performance of LC×LC is due to several factors, including but not limited to the undersampling of the first dimension, the dilution of analytes during transit from the first dimension (¹D) column to the second dimension (²D) column, and the larger relative background signals. A new strategy, 2D assisted liquid chromatography (2DALC), is presented here. 2DALC makes use of a diode array detector placed at the end of each column, producing both multivariate ¹D and two-dimensional (2D) chromatograms. The increased resolution of the analytes provided by the addition of a second dimension of separation enables the determination of analyte absorbance spectra from the ²D detector signal that are relatively pure and can be used to initiate the treatment of data from the first dimension detector using multivariate curve resolution-alternating least squares (MCR-ALS). In this way, the approach leverages the strengths of both separation methods in a single analysis: the ²D detector data are used to provide relatively pure analyte spectra to the MCR-ALS algorithm, and the final quantitative results are obtained from the resolved ¹D chromatograms, which have a much higher sampling rate and lower background signal than conventional single-detector LC×LC, yielding accurate and precise quantitative results. It is shown that 2DALC is superior to both single detector selective or comprehensive LC×LC and 1D-LC for quantitation of compounds that appear as severely overlapped peaks in the ¹D chromatogram - this is
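    The central least-squares step, using (relatively) pure spectra recovered from the second-dimension separation to resolve overlapped first-dimension signals, can be sketched for two components. This is a toy illustration of one ALS-style step with hypothetical data; the full MCR-ALS algorithm iterates such steps under non-negativity and other constraints.

    ```python
    def solve_2x2(a11, a12, a21, a22, b1, b2):
        """Solve a 2x2 linear system by Cramer's rule."""
        det = a11 * a22 - a12 * a21
        return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

    def concentrations(mixed, s1, s2):
        """Least-squares amounts of two known pure spectra s1, s2
        present in a mixed spectrum (normal equations of D = C S^T)."""
        a11 = sum(x * x for x in s1)
        a12 = sum(x * y for x, y in zip(s1, s2))
        a22 = sum(y * y for y in s2)
        b1 = sum(x * m for x, m in zip(s1, mixed))
        b2 = sum(y * m for y, m in zip(s2, mixed))
        return solve_2x2(a11, a12, a12, a22, b1, b2)
    ```

    Given spectra of the pure components from the ²D detector, each point of the ¹D chromatogram can be decomposed this way into per-analyte contributions even where the ¹D peaks overlap.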

  17. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y.-L.; Szidat, S.; Czimczik, C. I.

    2015-09-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91 % of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our setup, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our setup were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively
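    Correcting a measured 14C content for a constant extraneous-carbon blank, as characterized above, is an isotope mass balance; a simplified single-blank sketch in fraction-modern units follows. (The study quantifies modern and fossil blank contributions separately, which this sketch lumps into one blank term; names and numbers are illustrative.)

    ```python
    def blank_correct(fm_meas, m_meas, fm_blank, m_blank):
        """Isotope mass balance: remove a constant extraneous-carbon blank
        of mass m_blank (ug C) and 14C content fm_blank from a measurement."""
        return (fm_meas * m_meas - fm_blank * m_blank) / (m_meas - m_blank)
    ```

    Because the blank mass is roughly constant, its relative impact, and hence the uncertainty of the correction, grows as sample size shrinks, which is why the blanks were characterized as a function of sample size.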

  18. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y. L.; Szidat, S.; Czimczik, C. I.

    2015-04-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91% of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our set-up, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our set-up were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively

  19. The Need for Reproducibility

    SciTech Connect

    Robey, Robert W.

    2016-06-27

    The purpose of this presentation is to consider issues of reproducibility: specifically, whether bitwise reproducible computation is possible, whether computational research in DOE can improve its publication process, and whether reproducible results can be achieved apart from the peer review process.
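    One concrete obstacle behind the bitwise-reproducibility question is that floating-point addition is not associative, so a different reduction order (threading, vectorization, compiler flags) can change the bits of a result. A minimal demonstration:

    ```python
    # Floating-point addition is not associative: the same three terms,
    # summed in a different order, give bitwise-different results.
    a, b, c = 1e16, 1.0, 1.0
    left = (a + b) + c    # each +1.0 rounds back to 1e16, so the ones are lost
    right = a + (b + c)   # 1.0 + 1.0 = 2.0 survives; 1e16 + 2 is representable
    ```

    Bitwise-reproducible parallel computation therefore requires either a fixed reduction order or compensated/exact summation techniques.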

  20. Contextual sensitivity in scientific reproducibility

    PubMed Central

    Van Bavel, Jay J.; Mende-Siedlecki, Peter; Brady, William J.; Reinero, Diego A.

    2016-01-01

    In recent years, scientists have paid increasing attention to reproducibility. For example, the Reproducibility Project, a large-scale replication attempt of 100 studies published in top psychology journals, found that only 39% could be unambiguously reproduced. There is a growing consensus among scientists that the lack of reproducibility in psychology and other fields stems from various methodological factors, including low statistical power, researcher degrees of freedom, and an emphasis on publishing surprising positive results. However, there is a contentious debate about the extent to which failures to reproduce certain results might also reflect contextual differences (often termed “hidden moderators”) between the original research and the replication attempt. Although psychologists have found extensive evidence that contextual factors alter behavior, some have argued that context is unlikely to influence the results of direct replications precisely because these studies use the same methods as those used in the original research. To help resolve this debate, we recoded the 100 original studies from the Reproducibility Project on the extent to which the research topic of each study was contextually sensitive. Results suggested that the contextual sensitivity of the research topic was associated with replication success, even after statistically adjusting for several methodological characteristics (e.g., statistical power, effect size). The association between contextual sensitivity and replication success did not differ across psychological subdisciplines. These results suggest that researchers, replicators, and consumers should be mindful of contextual factors that might influence a psychological process. We offer several guidelines for dealing with contextual sensitivity in reproducibility. PMID:27217556

  1. Non-Destructive Assay (NDA) Uncertainties Impact on Physical Inventory Difference (ID) and Material Balance Determination: Sources of Error, Precision/Accuracy, and ID/Propagation of Error (POV)

    SciTech Connect

    Wendelberger, James G.

    2016-10-31

    These are slides from a presentation made by a researcher from Los Alamos National Laboratory. The following topics are covered: sources of error for NDA gamma measurements, precision and accuracy are two important characteristics of measurements, four items processed in a material balance area during the inventory time period, inventory difference and propagation of variance, sum in quadrature, and overview of the ID/POV process.
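    The inventory-difference bookkeeping and "sum in quadrature" mentioned in the outline reduce to two short formulas; a generic sketch (not the presentation's numbers):

    ```python
    import math

    def inventory_difference(beginning, additions, removals, ending):
        """ID = (beginning inventory + additions) - (removals + ending inventory)."""
        return beginning + additions - removals - ending

    def combined_sigma(*sigmas):
        """Propagate independent 1-sigma measurement uncertainties in quadrature."""
        return math.sqrt(sum(s * s for s in sigmas))
    ```

    In practice the ID is judged against the propagated uncertainty of its four terms: an ID well inside `combined_sigma` of zero is consistent with measurement error alone.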

  2. Accuracy and precision of pseudo-continuous arterial spin labeling perfusion during baseline and hypercapnia: a head-to-head comparison with ¹⁵O H₂O positron emission tomography.

    PubMed

    Heijtel, D F R; Mutsaerts, H J M M; Bakker, E; Schober, P; Stevens, M F; Petersen, E T; van Berckel, B N M; Majoie, C B L M; Booij, J; van Osch, M J P; Vanbavel, E; Boellaard, R; Lammertsma, A A; Nederveen, A J

    2014-05-15

    Measurements of the cerebral blood flow (CBF) and cerebrovascular reactivity (CVR) provide useful information about cerebrovascular condition and regional metabolism. Pseudo-continuous arterial spin labeling (pCASL) is a promising non-invasive MRI technique to quantitatively measure the CBF, whereas additional hypercapnic pCASL measurements are currently showing great promise to quantitatively assess the CVR. However, the introduction of pCASL at a larger scale awaits further evaluation of its exact accuracy and precision compared to the gold standard. ¹⁵O H₂O positron emission tomography (PET) is currently regarded as the most accurate and precise method to quantitatively measure both CBF and CVR, though it is one of the more invasive methods as well. In this study we therefore assessed the accuracy and precision of quantitative pCASL-based CBF and CVR measurements by performing a head-to-head comparison with ¹⁵O H₂O PET, based on quantitative CBF measurements during baseline and hypercapnia. We demonstrate that pCASL CBF imaging is accurate during both baseline and hypercapnia with respect to ¹⁵O H₂O PET, with comparable precision. These results pave the way for quantitative use of pCASL MRI in both clinical and research settings.

  3. Detecting declines in the abundance of a bull trout (Salvelinus confluentus) population: Understanding the accuracy, precision, and costs of our efforts

    USGS Publications Warehouse

    Al-Chokhachy, R.; Budy, P.; Conner, M.

    2009-01-01

    Using empirical field data for bull trout (Salvelinus confluentus), we evaluated the trade-off between power and sampling effort and cost using Monte Carlo simulations of commonly collected mark-recapture-resight and count data, and we estimated the power to detect changes in abundance across different time intervals. We also evaluated the effects of monitoring different components of a population and of stratification methods on the precision of each method. Our results illustrate substantial variability in the relative precision, cost, and information gained from each approach. While grouping estimates by age or stage class substantially increased the precision of estimates, spatial stratification of sampling units resulted in limited increases in precision. Although mark-resight methods allowed for estimates of abundance rather than indices of abundance, our results suggest snorkel surveys may be a more affordable monitoring approach across large spatial scales. Detecting a 25% decline in abundance after 5 years (at power = 0.80) was not possible, regardless of technique, without high sampling effort (48% of the study site). Detecting a 25% decline was possible after 15 years, but still required high sampling effort. Our results suggest detecting moderate changes in abundance of freshwater salmonids requires considerable resource and temporal commitments and highlight the difficulties of using abundance measures for monitoring bull trout populations.
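    A Monte Carlo power calculation of the kind described can be sketched in a few lines. This is a crude stand-in for the paper's simulations: the coefficient of variation of the abundance estimates, the decision rule, and all numbers are hypothetical.

    ```python
    import random

    def power_to_detect_decline(n0, decline, cv, n_sims=2000, z=1.96):
        """Fraction of simulated surveys in which the 'before' abundance
        estimate exceeds the 'after' estimate by more than z combined
        standard errors (cv = coefficient of variation of an estimate)."""
        hits = 0
        for _ in range(n_sims):
            before = random.gauss(n0, cv * n0)
            after = random.gauss(n0 * (1 - decline), cv * n0 * (1 - decline))
            se = ((cv * n0) ** 2 + (cv * n0 * (1 - decline)) ** 2) ** 0.5
            if before - after > z * se:
                hits += 1
        return hits / n_sims
    ```

    Raising sampling effort lowers the cv of each estimate, which is the lever the study varies to ask how much effort is needed before a 25% decline becomes detectable at power 0.80.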

  4. Improving accuracy and precision of ice core δD (CH4) analyses using methane pre- and hydrogen post-pyrolysis trapping and subsequent chromatographic separation

    NASA Astrophysics Data System (ADS)

    Bock, M.; Schmitt, J.; Beck, J.; Schneider, R.; Fischer, H.

    2013-12-01

    Firn and polar ice cores offer the only direct paleoatmospheric archive. Analyses of past greenhouse gas concentrations and their isotopic compositions in air bubbles in the ice can help to constrain changes in global biogeochemical cycles in the past. For the analysis of the hydrogen isotopic composition of methane (δD (CH4)), 0.5 to 1.5 kg of ice was previously necessary to achieve the required precision. Here we present a method to improve precision and reduce the sample amount for δD (CH4) measurements on (ice core) air. Pre-concentrated methane is focused before a high temperature oven (pre-pyrolysis trapping), and molecular hydrogen formed by pyrolysis is trapped afterwards (post-pyrolysis trapping), both on a carbon-PLOT capillary at -196 °C. A small amount of methane and krypton is trapped together with H2 and must be separated using a short second chromatographic column to ensure accurate results. Pre- and post-pyrolysis trapping largely removes the isotopic fractionation induced during chromatographic separation and results in a narrow peak in the mass spectrometer. Air standards can be measured with a precision better than 1‰. For polar ice samples from glacial periods we estimate a precision of 2.2‰ for 350 g of ice (or roughly 30 mL (at standard temperature and pressure (STP)) of air) with 350 ppb of methane. This corresponds to recent tropospheric air samples (about 1900 ppb CH4) of about 6 mL (STP) or about 500 pmol of pure CH4.
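    The air amounts quoted above can be cross-checked with the ideal-gas molar volume: about 6 mL (STP) of air at roughly 1900 ppb CH4 does indeed hold on the order of 500 pmol of CH4. A back-of-the-envelope sketch (illustrative arithmetic, not the authors' calibration):

    ```python
    MOLAR_VOLUME_STP = 22.414  # L/mol of an ideal gas at 0 degC and 1 atm

    def ch4_amount_pmol(air_ml_stp, ch4_ppb):
        """Amount of CH4 (pmol) in a given volume of air at STP."""
        ch4_litres = air_ml_stp * 1e-3 * ch4_ppb * 1e-9
        return ch4_litres / MOLAR_VOLUME_STP * 1e12
    ```

    With `ch4_amount_pmol(6.0, 1900.0)` the estimate lands near 500 pmol, consistent with the figure stated in the abstract; glacial ice at 350 ppb requires correspondingly more air, hence the 350 g ice samples.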

  5. Accuracy and precision of a new portable ultrasound scanner, the BME-150A, in residual urine volume measurement: a comparison with the BladderScan BVI 3000.

    PubMed

    Choe, Jin Ho; Lee, Ji Yeon; Lee, Kyu-Sung

    2007-06-01

    The objective of the study was to determine the relative accuracy of a new portable ultrasound unit, the BME-150A, and the BladderScan BVI 3000, assessed by comparison with the catheterized residual urine volume. We used both of these machines to prospectively measure the residual urine volumes of 89 patients (40 men and 49 women) who were undergoing urodynamic studies. The ultrasound measurements were compared with the post-scan bladder volumes obtained by catheterization in the same patients. The ultrasound measurements were followed immediately (within 5 min) by in-and-out catheterization while the patients were in a supine position. A total of 116 paired measurements were made. The BME-150A and the BVI 3000 demonstrated correlations with the residual volume of 0.92 and 0.94, and mean differences from the true residual volume of 7.8 and 3.6 ml, respectively. Intraclass correlation coefficients for the accuracy of the two bladder scanners were 0.90 for the BME-150A and 0.95 for the BVI 3000. The difference in accuracy between the two models was not significant (p = 0.2421). There were six cases in which follow-up evaluation of falsely elevated post-void residual urine volume measurements on the ultrasound studies yielded comparatively low catheterized volumes, with differences ranging from 66 to 275.5 ml. These cases were diagnosed with an ovarian cyst, uterine myoma, or uterine adenomyosis on pelvic ultrasonography. The accuracy of the BME-150A is comparable to that of the BVI 3000 in estimating true residual urine volumes and is sufficient for us to recommend its use as an alternative to catheterization.

  6. Improving accuracy and precision of ice core δD(CH4) analyses using methane pre-pyrolysis and hydrogen post-pyrolysis trapping and subsequent chromatographic separation

    NASA Astrophysics Data System (ADS)

    Bock, M.; Schmitt, J.; Beck, J.; Schneider, R.; Fischer, H.

    2014-07-01

    Firn and polar ice cores offer the only direct palaeoatmospheric archive. Analyses of past greenhouse gas concentrations and their isotopic compositions in air bubbles in the ice can help to constrain changes in global biogeochemical cycles in the past. For the analysis of the hydrogen isotopic composition of methane (δD(CH4) or δ2H(CH4)) 0.5 to 1.5 kg of ice was hitherto used. Here we present a method to improve precision and reduce the sample amount for δD(CH4) measurements in (ice core) air. Pre-concentrated methane is focused in front of a high temperature oven (pre-pyrolysis trapping), and molecular hydrogen formed by pyrolysis is trapped afterwards (post-pyrolysis trapping), both on a carbon-PLOT capillary at -196 °C. Argon, oxygen, nitrogen, carbon monoxide, unpyrolysed methane and krypton are trapped together with H2 and must be separated using a second short, cooled chromatographic column to ensure accurate results. Pre- and post-pyrolysis trapping largely removes the isotopic fractionation induced during chromatographic separation and results in a narrow peak in the mass spectrometer. Air standards can be measured with a precision better than 1‰. For polar ice samples from glacial periods, we estimate a precision of 2.3‰ for 350 g of ice (or roughly 30 mL - at standard temperature and pressure (STP) - of air) with 350 ppb of methane. This corresponds to recent tropospheric air samples (about 1900 ppb CH4) of about 6 mL (STP) or about 500 pmol of pure CH4.
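The sample-size equivalence quoted at the end of the abstract can be cross-checked with simple ideal-gas arithmetic; the sketch below is illustrative only (the function name and rounding are my own, not from the paper):

```python
# Amount of CH4 in an air sample: n = V_air(STP) / V_molar * mixing ratio.
V_MOLAR_STP = 22414.0  # molar volume of an ideal gas at STP, in mL/mol

def pmol_ch4(v_air_ml_stp, ch4_ppb):
    """Picomoles of CH4 in v_air_ml_stp mL of air (STP) at ch4_ppb ppb CH4."""
    return v_air_ml_stp / V_MOLAR_STP * ch4_ppb * 1e-9 * 1e12

glacial = pmol_ch4(30.0, 350.0)   # ~30 mL of air extracted from 350 g of ice
modern = pmol_ch4(6.0, 1900.0)    # ~6 mL of recent tropospheric air

# Both work out to roughly 500 pmol of CH4, consistent with the abstract.
print(round(glacial), round(modern))
```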

  7. Simultaneous Variable Flip Angle – Actual Flip Angle Imaging (VAFI) Method for Improved Accuracy and Precision of Three-dimensional T1 and B1 Measurements

    PubMed Central

    Hurley, Samuel A.; Yarnykh, Vasily L.; Johnson, Kevin M.; Field, Aaron S.; Alexander, Andrew L.; Samsonov, Alexey A.

    2011-01-01

    A new time-efficient and accurate technique for simultaneous mapping of T1 and B1 is proposed based on a combination of the Actual Flip angle Imaging (AFI) and Variable Flip Angle (VFA) methods: VAFI. VAFI utilizes a single AFI and one or more spoiled gradient-echo (SPGR) acquisitions with a simultaneous non-linear fitting procedure to yield accurate T1/B1 maps. The advantage of VAFI is high accuracy at either short T1 times or long TR in the AFI sequence. Simulations show this method is accurate to 0.03% in FA and 0.07% in T1 for TR/T1 times over the range of 0.01 to 0.45. We show for the case of brain imaging that it is sufficient to use only one small flip angle SPGR acquisition, which results in reduced spoiling requirements and a significant scan time reduction compared to the original VFA. In-vivo validation yielded high-quality 3D T1 maps and T1 measurements within 10% of previously published values, and within a clinically acceptable scan time. The VAFI method will increase the accuracy and clinical feasibility of many quantitative MRI methods requiring T1/B1 mapping such as DCE perfusion and quantitative MTI. PMID:22139819

  8. Method and system using power modulation for maskless vapor deposition of spatially graded thin film and multilayer coatings with atomic-level precision and accuracy

    DOEpatents

    Montcalm, Claude; Folta, James Allen; Tan, Swie-In; Reiss, Ira

    2002-07-30

    A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom graded thickness) on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with time-varying flux distribution. In preferred embodiments, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.

  9. Accuracy and precision of reconstruction of complex refractive index in near-field single-distance propagation-based phase-contrast tomography

    NASA Astrophysics Data System (ADS)

    Gureyev, Timur; Mohammadi, Sara; Nesterets, Yakov; Dullin, Christian; Tromba, Giuliana

    2013-10-01

    We investigate the quantitative accuracy and noise sensitivity of reconstruction of the 3D distribution of the complex refractive index, n(r) = 1 - δ(r) + iβ(r), in samples containing materials with different refractive indices using propagation-based phase-contrast computed tomography (PB-CT). Our present study is limited to the case of parallel-beam geometry with monochromatic synchrotron radiation, but can be readily extended to cone-beam CT and partially coherent polychromatic X-rays, at least in the case of weakly absorbing samples. We demonstrate that, except for regions near the interfaces between distinct materials, the distribution of the imaginary part of the refractive index, β(r), can be accurately reconstructed from a single projection image per view angle using phase retrieval based on the so-called homogeneous version of the Transport of Intensity equation (TIE-Hom) in combination with conventional CT reconstruction. In contrast, the accuracy of reconstruction of δ(r) depends strongly on the choice of the "regularization" parameter in TIE-Hom. We demonstrate by means of an instructive example that for some multi-material samples, a direct application of the TIE-Hom method in PB-CT produces qualitatively incorrect results for δ(r), which can be rectified either by collecting additional projection images at each view angle, or by utilising suitable a priori information about the sample. As a separate observation, we also show that, in agreement with previous reports, it is possible to significantly improve the signal-to-noise ratio by increasing the sample-to-detector distance in combination with TIE-Hom phase retrieval in PB-CT compared to conventional ("contact") CT, with a maximum achievable gain of the order of 0.3 δ/β. This can lead to improved image quality and/or reduction of the X-ray dose delivered to patients in medical imaging.

  10. Preliminary assessment of the accuracy and precision of TOPEX/POSEIDON altimeter data with respect to the large-scale ocean circulation

    NASA Technical Reports Server (NTRS)

    Wunsch, Carl; Stammer, Detlef

    1994-01-01

    TOPEX/POSEIDON sea surface height measurements are examined for quantitative consistency with known elements of the oceanic general circulation and its variability. Project-provided corrections were accepted but are tested as part of the overall results. The ocean was treated as static over each 10-day repeat cycle and maps constructed of the absolute sea surface topography from simple averages in 2 deg x 2 deg bins. A hybrid geoid model formed from a combination of the recent Joint Gravity Model-2 and the project-provided Ohio State University geoid was used to estimate the absolute topography in each 10-day period. Results are examined in terms of the annual average, seasonal average, seasonal variations, and variations near the repeat period. Conclusions are as follows: the orbit error is now difficult to observe, having been reduced to a level at or below that of other error sources; the geoid dominates the error budget of the estimates of the absolute topography; the estimated seasonal cycle is consistent with prior estimates; shorter-period variability is dominated on the largest scales by an oscillation near 50 days in the spherical harmonics Y_1^m(theta, lambda) with an amplitude near 10 cm, close to the simplest alias of the M_2 tide. This spectral peak and others visible in the periodograms support the hypothesis that the largest remaining time-dependent errors lie in the tidal models. Though discrepancies attributed to the geoid are within the formal uncertainties of the geoid estimates, their removal is urgent for circulation studies. Current gross accuracy of the TOPEX/POSEIDON mission is in the range of 5-10 cm, distributed over a broad band of frequencies and wavenumbers. In finite bands, accuracies approach the 1-cm level, and expected improvements arising from extended mission duration should reduce these numbers by nearly an order of magnitude.

  11. The Influence of External Loads on Movement Precision During Active Shoulder Internal Rotation Movements as Measured by 3 Indices of Accuracy

    PubMed Central

    Brindle, Timothy J; Uhl, Timothy L; Nitz, Arthur J; Shapiro, Robert

    2006-01-01

    Context: Using constant, variable, and absolute error to measure movement accuracy might provide a more complete description of joint position sense than any of these values alone. Objective: To determine the effect of loaded movements and type of feedback on shoulder joint position sense and movement velocity. Design: Applied study with repeated measures comparing type of feedback and the presence of a load. Setting: Laboratory. Patients or Other Participants: Twenty healthy subjects (age = 27.2 ± 3.3 years, height = 173.2 ± 18.1 cm, mass = 70.8 ± 14.5 kg) were seated with their arms in a custom shoulder wheel. Intervention(s): Subjects internally rotated 27° in the plane of the scapula, with either visual feedback provided by a video monitor or proprioceptive feedback provided by prior passive positioning, to a target at 48° of external rotation. Subjects performed the internal rotation movements with video feedback and proprioceptive feedback and with and without load (5% of body weight). Main Outcome Measure(s): High-speed motion analysis recorded peak rotational velocity and accuracy. Constant, variable, and absolute error for joint position sense were calculated from the final position. Results: Unloaded movements demonstrated significantly greater variable error than loaded movements (2.0 ± 0.7° and 1.5 ± 0.4°, respectively) (P < .05), but there were no differences in constant or absolute error. Peak velocity was greater for movements with proprioceptive feedback (45.6 ± 2.9°/s) than visual feedback (39.1 ± 2.1°/s) and for unloaded (47.8 ± 3.6°/s) than loaded (36.9 ± 1.0°/s) movements (P < .05). Conclusions: Shoulder joint position sense demonstrated greater variable error for unloaded than loaded movements. Both visual feedback and additional loads decreased peak rotational velocity. PMID:16619096
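The three indices named in this abstract have standard definitions in the motor-behavior literature; a minimal sketch (function names and the trial values are my own illustrations):

```python
import statistics

def constant_error(errors):
    """Mean signed error: systematic bias toward under- or overshoot."""
    return statistics.mean(errors)

def variable_error(errors):
    """Standard deviation of the signed errors: trial-to-trial consistency."""
    return statistics.pstdev(errors)

def absolute_error(errors):
    """Mean unsigned error: overall accuracy regardless of direction."""
    return statistics.mean(abs(e) for e in errors)

# e.g. four hypothetical trials aiming at a 48-degree target:
finals = [47.0, 49.5, 46.5, 48.5]
errors = [p - 48.0 for p in finals]
print(constant_error(errors), variable_error(errors), absolute_error(errors))
```

A set of trials can thus show near-zero constant error (no bias) while still having large variable error (inconsistency), which is why the authors report all three.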

  12. Assessing the Accuracy and Precision of Inorganic Geochemical Data Produced through Flux Fusion and Acid Digestions: Multiple (60+) Comprehensive Analyses of BHVO-2 and the Development of Improved "Accepted" Values

    NASA Astrophysics Data System (ADS)

    Ireland, T. J.; Scudder, R.; Dunlea, A. G.; Anderson, C. H.; Murray, R. W.

    2014-12-01

    The use of geological standard reference materials (SRMs) to assess both the accuracy and the reproducibility of geochemical data is a vital consideration in determining the major and trace element abundances of geologic, oceanographic, and environmental samples. Calibration curves commonly are generated that are predicated on accurate analyses of these SRMs. As a means to verify the robustness of these calibration curves, an SRM can also be run as an unknown item (i.e., not included as a data point in the calibration). The experimentally derived composition of the SRM can thus be compared to the certified (or otherwise accepted) value. This comparison gives a direct measure of the accuracy of the method used. Similarly, if the same SRM is analyzed as an unknown over multiple analytical sessions, the external reproducibility of the method can be evaluated. Two common bulk digestion methods used in geochemical analysis are flux fusion and acid digestion. The flux fusion technique is excellent at ensuring complete digestion of a variety of sample types, is quick, and does not involve much use of hazardous acids. However, this technique is hampered by a high amount of total dissolved solids and may be accompanied by an increased analytical blank for certain trace elements. On the other hand, acid digestion (using a cocktail of concentrated nitric, hydrochloric and hydrofluoric acids) provides an exceptionally clean digestion with very low analytical blanks. However, this technique results in a loss of Si from the system and may compromise results for a few other elements (e.g., Ge). Our lab uses flux fusion for the determination of major elements and a few key trace elements by ICP-ES, while acid digestion is used for Ti and trace element analyses by ICP-MS. Here we present major and trace element data for BHVO-2, a frequently used SRM derived from a Hawaiian basalt, gathered over a period of more than two years (30+ analyses by each technique). We show that both digestion

  13. High-accuracy, high-precision, high-resolution, continuous monitoring of urban greenhouse gas emissions? Results to date from INFLUX

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.

    2015-12-01

    The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower- and aircraft-based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for both CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.

  14. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset 1998-2000 in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.

    2003-01-01

    A network of 12 southern hemisphere tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature and relative humidity measurements. The archived data are available at http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station-to-station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced by 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing and instrument type (manufacturer) are taken into account.

  15. Reproducing Kernels in Harmonic Spaces and Their Numerical Implementation

    NASA Astrophysics Data System (ADS)

    Nesvadba, Otakar

    2010-05-01

    In harmonic analysis, such as the modelling of the Earth's gravity field, the importance of Hilbert spaces of harmonic functions with a reproducing kernel is often discussed. Moreover, in the case of an unbounded domain given by the exterior of a sphere or an ellipsoid, the reproducing kernel K(x,y) can be expressed analytically by means of closed formulas or infinite series. Nevertheless, the straightforward numerical implementation of these formulas leads to dozens of problems, mostly connected with floating-point arithmetic and number representation. The contribution discusses numerical instabilities in K(x,y) and gradK(x,y) that can be overcome by employing elementary functions, in particular expm1 and log1p. The suggested evaluation scheme for reproducing kernels offers uniform formulas within the whole solution domain as well as superior speed and near-perfect accuracy (10^-16 for IEC 60559 double-precision numbers) when compared with the straightforward formulas. The formulas can be easily implemented on the majority of computer platforms, especially when the C standard library ISO/IEC 9899:1999 is available.
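The numerical point about expm1 and log1p can be illustrated in a few lines; Python's math module exposes the same C99 functions, and the tiny test argument below is arbitrary, not taken from the kernel formulas:

```python
import math

x = 1e-12  # arbitrary tiny argument, typical of near-cancelling kernel terms

# The naive form loses significant digits: the double 1.0 + x cannot hold x
# exactly, so exp(x) - 1.0 (and likewise log(1.0 + x)) inherits a large
# relative error from that rounding, which the final subtraction exposes.
naive = math.exp(x) - 1.0

# expm1 computes exp(x) - 1 without ever forming 1 + x, keeping full
# precision; math.log1p(x) plays the same role for log(1 + x).
stable = math.expm1(x)

print(abs(naive - x) / x)   # relative error around 1e-4 in double precision
print(abs(stable - x) / x)  # ~5e-13: just the genuine x**2/2 Taylor term
```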

  16. The effect of dilution and the use of a post-extraction nucleic acid purification column on the accuracy, precision, and inhibition of environmental DNA samples

    USGS Publications Warehouse

    Mckee, Anna M.; Spear, Stephen F.; Pierson, Todd W.

    2015-01-01

    Isolation of environmental DNA (eDNA) is an increasingly common method for detecting presence and assessing relative abundance of rare or elusive species in aquatic systems via the isolation of DNA from environmental samples and the amplification of species-specific sequences using quantitative PCR (qPCR). Co-extracted substances that inhibit qPCR can lead to inaccurate results and subsequent misinterpretation of a species’ status in the tested system. We tested three treatments (5-fold and 10-fold dilutions, and spin-column purification) for reducing qPCR inhibition from 21 partially and fully inhibited eDNA samples collected from coastal plain wetlands and mountain headwater streams in the southeastern USA. All treatments reduced the concentration of DNA in the samples. However, column purified samples retained the greatest sensitivity. For stream samples, all three treatments effectively reduced qPCR inhibition. However, for wetland samples, the 5-fold dilution was less effective than other treatments. Quantitative PCR results for column purified samples were more precise than the 5-fold and 10-fold dilutions by 2.2× and 3.7×, respectively. Column purified samples consistently underestimated qPCR-based DNA concentrations by approximately 25%, whereas the directional bias in qPCR-based DNA concentration estimates differed between stream and wetland samples for both dilution treatments. While the directional bias of qPCR-based DNA concentration estimates differed among treatments and locations, the magnitude of inaccuracy did not. Our results suggest that 10-fold dilution and column purification effectively reduce qPCR inhibition in mountain headwater stream and coastal plain wetland eDNA samples, and if applied to all samples in a study, column purification may provide the most accurate relative qPCR-based DNA concentration estimates while retaining the greatest assay sensitivity.

  17. A reproducible method for determination of nitrocellulose in soil.

    PubMed

    Macmillan, Denise K; Majerus, Chelsea R; Laubscher, Randy D; Shannon, John P

    2008-01-15

    A reproducible analytical method for determination of nitrocellulose in soil is described. The new method provides the precision and accuracy needed for quantitation of nitrocellulose in soils to enable worker safety on contaminated sites. The method utilizes water and ethanol washes to remove co-contaminants, acetone extraction of nitrocellulose, and base hydrolysis of the extract to reduce nitrate groups. The hydrolysate is then neutralized and analyzed by ion chromatography for determination of free nitrate and nitrite. A variety of bases for hydrolysis and acids for neutralization were evaluated, with 5 N sodium hydroxide and carbon dioxide giving the most complete hydrolysis and interference-free neutralization, respectively. The concentration of nitrocellulose in the soil is calculated from the concentrations of nitrate and nitrite and the weight percentage of nitrogen content in nitrocellulose. The laboratory detection limit for the analysis is 10 mg/kg. The method acceptance range for recovery of nitrocellulose from control samples is 78-105%.
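The final conversion step described above (ion chromatography nitrate/nitrite back to soil nitrocellulose) can be sketched as below; the 13.0 %N weight fraction and all input numbers are illustrative assumptions, not values from the paper:

```python
# Illustrative back-calculation of soil nitrocellulose (NC) from IC results.
M_N, M_NO3, M_NO2 = 14.007, 62.004, 46.005  # molar masses, g/mol

def nc_mg_per_kg(no3_mg_l, no2_mg_l, hydrolysate_ml, soil_g, n_wt_frac=0.130):
    """Soil NC (mg/kg) from hydrolysate NO3-/NO2- concentrations (mg/L).

    n_wt_frac is the nitrogen weight fraction of the nitrocellulose; 13% is
    a hypothetical value (typical nitrocelluloses span roughly 11.5-14 %N).
    """
    # total nitrogen liberated by hydrolysis, in mg
    n_mg = (no3_mg_l * M_N / M_NO3 + no2_mg_l * M_N / M_NO2) * hydrolysate_ml / 1000.0
    return (n_mg / n_wt_frac) / (soil_g / 1000.0)  # mg NC per kg soil

# e.g. 2.0 mg/L NO3- and 0.5 mg/L NO2- in 50 mL hydrolysate from 10 g soil:
print(round(nc_mg_per_kg(2.0, 0.5, 50.0, 10.0), 1))  # ~23 mg/kg
```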

  18. Reproducibility in Chemical Research.

    PubMed

    Bergman, Robert G; Danheiser, Rick L

    2016-10-04

    "… To what extent is reproducibility a significant issue in chemical research? How can problems involving irreproducibility be minimized? … Researchers should be aware of the dangers of unconscious investigator bias, all papers should provide adequate experimental detail, and Reviewers have a responsibility to carefully examine papers for adequacy of experimental detail and support for the conclusions …" Read more in the Editorial by Robert G. Bergman and Rick L. Danheiser.

  19. Color accuracy and reproducibility in whole slide imaging scanners

    NASA Astrophysics Data System (ADS)

    Shrestha, Prarthana; Hulsken, Bas

    2014-03-01

    In this paper, we propose a work-flow for color reproduction in whole slide imaging (WSI) scanners such that the colors in the scanned images match the actual slide colors and inter-scanner variation is minimal. We describe a novel method of preparation and verification of the color phantom slide, consisting of a standard IT8-target transmissive film, which is used in color calibrating and profiling the WSI scanner. We explore several ICC-compliant techniques in color calibration/profiling and rendering intents for translating the scanner-specific colors to the standard display (sRGB) color-space. Based on the quality of color reproduction in histopathology tissue slides, we propose the matrix-based calibration/profiling and absolute colorimetric rendering approach. The main advantage of the proposed work-flow is that it is compliant with the ICC standard, applicable to color management systems on different platforms, and involves no external color measurement devices. We measure objective color performance using the CIE-DeltaE2000 metric, where DeltaE values below 1 are considered imperceptible. Our evaluation of 14 phantom slides, manufactured according to the proposed method, shows an average inter-slide color difference below 1 DeltaE. The proposed work-flow is implemented and evaluated in 35 Philips Ultra Fast Scanners (UFS). The results show that the average color difference between a scanner and the reference is 3.5 DeltaE, and among the scanners is 3.1 DeltaE. The improvement in color performance upon using the proposed method is apparent in the visual color quality of the tissue scans.

  20. Evaluation of the automated hematology analyzer Sysmex XT-2000iV™ compared to the ADVIA® 2120 for its use in dogs, cats, and horses: Part I--precision, linearity, and accuracy of complete blood cell count.

    PubMed

    Bauer, Natali; Nakagawa, Julia; Dunker, Cathrin; Failing, Klaus; Moritz, Andreas

    2011-11-01

    The automated laser-based hematology analyzer Sysmex XT-2000iV™ providing a complete blood cell count (CBC) and 5-part differential has been introduced in large veterinary laboratories. The aim of the current study was to determine precision, linearity, and accuracy of the Sysmex analyzer. The reference method for the accuracy study was the laser-based hematology analyzer ADVIA® 2120. For evaluation of accuracy, consecutive fresh blood samples from healthy and diseased cats (n = 216), dogs (n = 314), and horses (n = 174) were included. A low intra-assay coefficient of variation (CV) of approximately 1% was seen for the CBC except platelet count (PLT). An intra-assay CV ranging between 2% and 5.5% was evident for the differential count except for feline and equine monocytes (7.7%) and horse eosinophils (15.7%). Linearity was excellent for white blood cell count (WBC), hematocrit value, red blood cell count (RBC), and PLT. For all evaluated species, agreement was excellent for WBC and RBC, with Spearman rank correlation coefficients (r_s) ranging from 0.98 to >0.99. Hematocrit value correlated excellently in cats and dogs, whereas for horses, a good correlation was evident. A good correlation between both analyzers was seen in feline and equine PLT (r_s = 0.89 and 0.92, respectively), whereas correlation was excellent for dogs (r_s = 0.93). Biases were close to 0 except for mean corpuscular hemoglobin concentration (4.11 to -7.25 mmol/l) and canine PLT (57 × 10^9/l). Overall, the performance of the Sysmex analyzer was excellent and compared favorably with the ADVIA analyzer.
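The intra-assay CV figures cited above are simply the standard deviation of replicate measurements expressed as a percentage of their mean; a minimal sketch with made-up replicate counts:

```python
import statistics

def intra_assay_cv(replicates):
    """Coefficient of variation (%) of repeated measurements of one sample."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100.0

# five hypothetical replicate WBC counts (x10^9/L) on a single specimen:
wbc = [8.1, 8.2, 8.0, 8.1, 8.2]
print(round(intra_assay_cv(wbc), 2))  # about 1%, like the CBC CVs above
```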

  1. Reproducible research in palaeomagnetism

    NASA Astrophysics Data System (ADS)

    Lurcock, Pontus; Florindo, Fabio

    2015-04-01

    The reproducibility of research findings is attracting increasing attention across all scientific disciplines. In palaeomagnetism as elsewhere, computer-based analysis techniques are becoming more commonplace, complex, and diverse. Analyses can often be difficult to reproduce from scratch, both for the original researchers and for others seeking to build on the work. We present a palaeomagnetic plotting and analysis program designed to make reproducibility easier. Part of the problem is the divide between interactive and scripted (batch) analysis programs. An interactive desktop program with a graphical interface is a powerful tool for exploring data and iteratively refining analyses, but usually cannot operate without human interaction. This makes it impossible to re-run an analysis automatically, or to integrate it into a larger automated scientific workflow - for example, a script to generate figures and tables for a paper. In some cases the parameters of the analysis process itself are not saved explicitly, making it hard to repeat or improve the analysis even with human interaction. Conversely, non-interactive batch tools can be controlled by pre-written scripts and configuration files, allowing an analysis to be 'replayed' automatically from the raw data. However, this advantage comes at the expense of exploratory capability: iteratively improving an analysis entails a time-consuming cycle of editing scripts, running them, and viewing the output. Batch tools also tend to require more computer expertise from their users. PuffinPlot is a palaeomagnetic plotting and analysis program which aims to bridge this gap. First released in 2012, it offers both an interactive, user-friendly desktop interface and a batch scripting interface, both making use of the same core library of palaeomagnetic functions. 
We present new improvements to the program that help to integrate the interactive and batch approaches, allowing an analysis to be interactively explored and refined

  2. Patent Law's Reproducibility Paradox.

    PubMed

    Sherkow, Jacob S

    2017-01-01

    Clinical research faces a reproducibility crisis. Many recent clinical and preclinical studies appear to be irreproducible--their results cannot be verified by outside researchers. This is problematic for not only scientific reasons but also legal ones: patents grounded in irreproducible research appear to fail their constitutional bargain of property rights in exchange for working disclosures of inventions. The culprit is likely patent law’s doctrine of enablement. Although the doctrine requires patents to enable others to make and use their claimed inventions, current difficulties in applying the doctrine hamper, or even actively discourage, the inclusion of reproducible data in patents. This Article assesses the difficulties in reconciling these basic goals of scientific research and patent law. More concretely, it provides several examples of irreproducibility in patents on blockbuster drugs--Prempro, Xigris, Plavix, and Avastin--and discusses some of the social costs of the misalignment between good clinical practice and patent doctrine. Ultimately, this analysis illuminates several current debates concerning innovation policy. It strongly suggests that a proper conception of enablement should take into account after-arising evidence. It also sheds light on the true purpose--and limits--of patent disclosure. And lastly, it untangles the doctrines of enablement and utility.

  3. Precision volume measurement system.

    SciTech Connect

    Fischer, Erin E.; Shugard, Andrew D.

    2004-11-01

    A new precision volume measurement system based on a Kansas City Plant (KCP) design was built to support the volume measurement needs of the Gas Transfer Systems (GTS) department at Sandia National Labs (SNL) in California. An engineering study was undertaken to verify or refute KCP's claim of 0.5% accuracy. The study assesses the accuracy and precision of the system. The system uses the ideal gas law and precise pressure measurements (of low-pressure helium) in a temperature- and computer-controlled environment to ratio a known volume to an unknown volume.
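The underlying measurement is the classic gas-expansion (Boyle's-law) volume ratio; the sketch below is my own illustration of the principle, not the KCP design, and the numbers are invented:

```python
# Isothermal expansion of helium from a known reference volume into an
# evacuated unknown volume: p1 * V_ref = p2 * (V_ref + V_unknown),
# so V_unknown = V_ref * (p1 - p2) / p2.
def unknown_volume(v_ref_cc, p1_torr, p2_torr):
    """Unknown volume (cc) from the pressure drop after expansion."""
    return v_ref_cc * (p1_torr - p2_torr) / p2_torr

# e.g. a 100 cc reference at 1000 torr settling to 400 torr after expansion:
print(unknown_volume(100.0, 1000.0, 400.0))  # -> 150.0 cc
```

Because the result depends only on a pressure ratio, the achievable accuracy is set by the pressure gauges and temperature stability, which is why the abstract emphasizes the controlled environment.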

  4. Opening Reproducible Research

    NASA Astrophysics Data System (ADS)

    Nüst, Daniel; Konkol, Markus; Pebesma, Edzer; Kray, Christian; Klötgen, Stephanie; Schutzeichel, Marc; Lorenz, Jörg; Przibytzin, Holger; Kussmann, Dirk

    2016-04-01

    Open access is not only a form of publishing such that research papers become available to the large public free of charge, it also refers to a trend in science that the act of doing research becomes more open and transparent. When science transforms to open access we not only mean access to papers, research data being collected, or data being generated, but also access to the data used and the procedures carried out in the research paper. Increasingly, scientific results are generated by numerical manipulation of data that were already collected, and may involve simulation experiments that are completely carried out computationally. Reproducibility of research findings, the ability to repeat experimental procedures and confirm previously found results, is at the heart of the scientific method (Pebesma, Nüst and Bivand, 2012). As opposed to the collection of experimental data in labs or nature, computational experiments lend themselves very well for reproduction. Some of the reasons why scientists do not publish data and computational procedures that allow reproduction will be hard to change, e.g. privacy concerns in the data, fear of embarrassment or of losing a competitive advantage. Other reasons, however, involve technical aspects, and include the lack of standard procedures to publish such information and the lack of benefits after publishing them. We aim to resolve these two technical aspects. We propose a system that supports the evolution of scientific publications from static papers into dynamic, executable research documents. The DFG-funded experimental project Opening Reproducible Research (ORR) aims for the main aspects of open access, by improving the exchange of, by facilitating productive access to, and by simplifying reuse of research results that are published over the Internet. Central to the project is a new form for creating and providing research results, the executable research compendium (ERC), which not only enables third parties to

  5. Precise, reproducible nano-domain engineering in lithium niobate crystals

    SciTech Connect

    Boes, Andreas Sivan, Vijay; Ren, Guanghui; Yudistira, Didit; Mitchell, Arnan; Mailis, Sakellaris; Soergel, Elisabeth

    2015-07-13

    We present a technique for domain engineering the surface of lithium niobate crystals with features as small as 100 nm. A film of chromium (Cr) is deposited on the lithium niobate surface and patterned using electron beam lithography and lift-off and then irradiated with a wide diameter beam of intense visible laser light. The regions patterned with chromium are domain inverted while the uncoated regions are not affected by the irradiation. With the ability to realize nanoscale surface domains, this technique could offer an avenue for fabrication of nano-photonic and phononic devices.

  6. Short communication: Intraoperator repeatability and interoperator reproducibility of devices measuring teat dimensions in dairy cows.

    PubMed

    Zwertvaegher, I; De Vliegher, S; Baert, J; Van Weyenberg, S

    2013-01-01

    Various methods have been applied to measure teat dimensions. However, the accuracy and precision needed to obtain reliable results are often poor or have not yet been investigated. To determine the precision, under field conditions, of the ruler (for teat length), the caliper (for teat diameter), and a recently developed 2-dimensional (2D) vision-based measuring device (for both teat length and diameter), 2 experiments were conducted in which the consistency of measurements within operators (repeatability) and between operators (reproducibility) was tested. In addition, the agreement of the 2D device with the ruler and the caliper was studied. Although the ruler and the 2D device agreed poorly, both methods were precise in measuring teat length when the operators had experience in working with cows. The caliper was repeatable in measuring teat diameter, but was not reproducible. The 2D device was also repeatable in measuring teat diameter, and reproducible when the operators had experience with the device. The methods had poor agreement, most likely due to the operator-dependent pressure applied by the caliper. Because the 2D device has the advantage of measuring both teat length and teat diameter in a single measurement and is accurate and practical, this method allows efficient and fast collection of data on a large scale for various applications.
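
    The within- and between-operator agreement analyses described in studies like this one are typically quantified with Bland-Altman bias and 95% limits of agreement. A minimal sketch, using made-up teat-length values rather than the study's data:

```python
# Bland-Altman agreement sketch: bias and 95% limits of agreement
# between two measurement methods. Values are illustrative only.
import statistics

def bland_altman(a, b):
    """Return (bias, lower LoA, upper LoA) for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

ruler = [48.0, 52.5, 50.1, 47.3, 55.0]   # hypothetical teat lengths, mm
vision = [47.2, 51.9, 49.0, 46.1, 54.2]  # hypothetical 2D-device readings, mm
bias, lo, hi = bland_altman(ruler, vision)
```

    A systematic bias with narrow limits of agreement indicates a consistent offset between methods; wide limits indicate poor agreement even when the mean bias is small.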

  7. Comparative Analysis of the Equivital EQ02 Lifemonitor with Holter Ambulatory ECG Device for Continuous Measurement of ECG, Heart Rate, and Heart Rate Variability: A Validation Study for Precision and Accuracy.

    PubMed

    Akintola, Abimbola A; van de Pol, Vera; Bimmel, Daniel; Maan, Arie C; van Heemst, Diana

    2016-01-01

    Background: The Equivital (EQ02) is a multi-parameter telemetric device offering real-time and/or retrospective, synchronized monitoring of ECG, HR, HRV, respiration, activity, and temperature. Unlike the Holter, which is the gold standard for continuous ECG measurement, the EQ02 continuously monitors ECG via electrodes interwoven in the textile of a wearable belt. Objective: To compare the EQ02 with the Holter for continuous home measurement of ECG, heart rate (HR), and heart rate variability (HRV). Methods: Eighteen healthy participants wore the Holter and EQ02 monitors simultaneously for 24 h. Per participant, averaged HR and HRV per 5 min from the two devices were compared using Pearson correlation, paired T-test, and Bland-Altman analyses. Accuracy and precision metrics included mean absolute relative difference (MARD). Results: Artifact content of EQ02 data varied widely between (range 1.93-56.45%) and within (range 0.75-9.61%) participants. Comparing the EQ02 to the Holter, the Pearson correlations were 0.724, 0.955, and 0.997 for datasets containing all data, data with < 50% artifacts, and data with < 20% artifacts, respectively. For the same three datasets, bias estimated by Bland-Altman analysis was -2.8, -1.0, and -0.8 beats per minute and 24 h MARD was 7.08, 3.01, and 1.5. After selecting a 3-h stretch of data containing 1.15% artifacts, the Pearson correlation was 0.786 for HRV measured as the standard deviation of NN intervals (SDNN). Conclusions: Although the EQ02 can accurately measure ECG and HRV, its accuracy and precision are highly dependent on artifact content. This is a limitation for clinical use in individual patients. However, the advantages of the EQ02 (ability to simultaneously monitor several physiologic parameters) may outweigh its disadvantages (higher artifact load) for research purposes and/or for home monitoring in larger groups of study participants. Further studies can be aimed at
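
    The MARD accuracy metric used in this validation can be sketched in a few lines; the paired heart-rate values below are illustrative, not the study's data:

```python
# Mean absolute relative difference (MARD) between a reference series
# and a test series, in percent. Values are illustrative only.
def mard(reference, test):
    """MARD in percent: mean of |test - ref| / ref over paired samples."""
    rel = [abs(t - r) / r for r, t in zip(reference, test)]
    return 100.0 * sum(rel) / len(rel)

holter = [60.0, 72.0, 80.0, 90.0]  # hypothetical 5-min mean HR, bpm
eq02 = [61.2, 70.5, 81.0, 88.0]    # hypothetical belt readings, bpm
score = mard(holter, eq02)
```

    Lower MARD means better agreement with the reference; the study's finding that MARD drops as artifact-laden segments are excluded follows directly from this definition, since artifacts inflate the per-sample relative differences.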

  8. Comparative Analysis of the Equivital EQ02 Lifemonitor with Holter Ambulatory ECG Device for Continuous Measurement of ECG, Heart Rate, and Heart Rate Variability: A Validation Study for Precision and Accuracy

    PubMed Central

    Akintola, Abimbola A.; van de Pol, Vera; Bimmel, Daniel; Maan, Arie C.; van Heemst, Diana

    2016-01-01

    Background: The Equivital (EQ02) is a multi-parameter telemetric device offering real-time and/or retrospective, synchronized monitoring of ECG, HR, HRV, respiration, activity, and temperature. Unlike the Holter, which is the gold standard for continuous ECG measurement, the EQ02 continuously monitors ECG via electrodes interwoven in the textile of a wearable belt. Objective: To compare the EQ02 with the Holter for continuous home measurement of ECG, heart rate (HR), and heart rate variability (HRV). Methods: Eighteen healthy participants wore the Holter and EQ02 monitors simultaneously for 24 h. Per participant, averaged HR and HRV per 5 min from the two devices were compared using Pearson correlation, paired T-test, and Bland-Altman analyses. Accuracy and precision metrics included mean absolute relative difference (MARD). Results: Artifact content of EQ02 data varied widely between (range 1.93–56.45%) and within (range 0.75–9.61%) participants. Comparing the EQ02 to the Holter, the Pearson correlations were 0.724, 0.955, and 0.997 for datasets containing all data, data with < 50% artifacts, and data with < 20% artifacts, respectively. For the same three datasets, bias estimated by Bland-Altman analysis was −2.8, −1.0, and −0.8 beats per minute and 24 h MARD was 7.08, 3.01, and 1.5. After selecting a 3-h stretch of data containing 1.15% artifacts, the Pearson correlation was 0.786 for HRV measured as the standard deviation of NN intervals (SDNN). Conclusions: Although the EQ02 can accurately measure ECG and HRV, its accuracy and precision are highly dependent on artifact content. This is a limitation for clinical use in individual patients. However, the advantages of the EQ02 (ability to simultaneously monitor several physiologic parameters) may outweigh its disadvantages (higher artifact load) for research purposes and/or for home monitoring in larger groups of study participants. Further studies can be aimed

  9. Optimetrics for Precise Navigation

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Heckler, Gregory; Gramling, Cheryl

    2017-01-01

    Optimetrics for Precise Navigation will be implemented on existing optical communication links. The ranging and Doppler measurements are conducted over the communication data frame and clock. The measurement accuracy is two orders of magnitude better than TDRSS. The approach has further advantages. The high optical carrier frequency provides (1) immunity from the ionosphere and interplanetary plasma noise floor, which is a performance limitation for RF tracking, and (2) high antenna gain, which reduces terminal size and volume and enables high-precision tracking on CubeSats and deep-space smallsats. High optical pointing precision provides (a) spacecraft orientation, (b) minimal additional hardware to implement precise optimetrics over the optical comm link, and (c) continuous optical carrier phase measurement, which will enable the system presented here to accept future optical frequency standards with much higher clock accuracy.

  10. Precision electron polarimetry

    SciTech Connect

    Chudakov, Eugene A.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at ~300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  11. Precision electron polarimetry

    NASA Astrophysics Data System (ADS)

    Chudakov, E.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  12. Precision electron polarimetry

    SciTech Connect

    Chudakov, E.

    2013-11-07

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  13. SU-E-P-54: Evaluation of the Accuracy and Precision of IGPS-O X-Ray Image-Guided Positioning System by Comparison with On-Board Imager Cone-Beam Computed Tomography

    SciTech Connect

    Zhang, D; Wang, W; Jiang, B; Fu, D

    2015-06-15

    Purpose: The purpose of this study is to assess the positioning accuracy and precision of the IGPS-O system, a novel radiographic kilo-voltage x-ray image-guided positioning system developed for clinical IGRT applications. Methods: The IGPS-O x-ray image-guided positioning system consists of two oblique sets of radiographic kilo-voltage x-ray projecting and imaging devices equipped on the ground and ceiling of the treatment room. The system determines the positioning error in the form of three translations and three rotations from the registration of two X-ray images acquired online with the planning CT image. An anthropomorphic head phantom and an anthropomorphic thorax phantom were used for this study. Each phantom was set up on the treatment table in the correct position and with various “planned” setup errors. Both the IGPS-O x-ray image-guided positioning system and the commercial On-Board Imager Cone-beam Computed Tomography (OBI CBCT) were used to obtain the setup errors of the phantom. Differences between the results of the two image-guided positioning systems were computed and analyzed. Results: The setup errors measured by the IGPS-O system and the OBI CBCT system showed general agreement; the means and standard errors of the discrepancies between the two systems in the left-right, anterior-posterior, and superior-inferior directions were −0.13±0.09 mm, 0.03±0.25 mm, and 0.04±0.31 mm, respectively. The maximum difference was only 0.51 mm in all directions, and the angular discrepancy between the two systems was 0.3±0.5°. Conclusion: The spatial and angular discrepancies between the IGPS-O system and OBI CBCT for setup error correction were minimal, and the two positioning systems are in general agreement. The IGPS-O x-ray image-guided positioning system can achieve accuracy as good as CBCT and can be used in clinical IGRT applications.

  14. Data Identifiers and Citations Enable Reproducible Science

    NASA Astrophysics Data System (ADS)

    Tilmes, C.

    2011-12-01

    Modern science often involves data processing with tremendous volumes of data. Keeping track of that data has been a growing challenge for data centers. Researchers who access and use that data don't always reference and cite their data sources adequately for consumers of their research to follow their methodology or reproduce their analyses or experiments. Recent research has led to recommendations for good identifiers and citations that can help address this problem. This paper will describe some of the best practices in data identifiers, reference, and citation. Using a simplified example scenario based on a long-term remote sensing satellite mission, it will explore issues in identifying dynamic data sets and the importance of good data citations for reproducibility. It will describe the difference between granule- and collection-level identifiers, using UUIDs and DOIs to illustrate some recommendations for developing identifiers and assigning them during data processing. As data processors create data products, the provenance of the input products and the precise steps that led to their creation are recorded and published for users of the data to see. As researchers access the data from an archive, they can use the provenance to help understand the genesis of the data, which could affect their usage of the data. By citing the data when publishing their research, others can retrieve the precise data used in that research and reproduce the analyses and experiments to confirm the results. Describing the experiment in sufficient detail to reproduce the research enforces a formal approach that lends credibility to the results and, ultimately, to the policies of decision makers depending on that research.
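
    The granule-level identifier practice discussed in abstracts like this one can be sketched as follows. The collection DOI and granule name below are hypothetical; the point is that deterministic (version 5) UUIDs make identifier assignment repeatable across reprocessing runs:

```python
# Sketch of granule-level identifier assignment during data processing:
# each output granule gets a UUID derived from its parent collection's
# DOI and its own name. DOI and names are hypothetical examples.
import uuid

COLLECTION_DOI = "10.5067/EXAMPLE/COLLECTION.V1"  # hypothetical DOI

def make_granule_id(collection_doi, granule_name):
    """Deterministic UUID (version 5): reprocessing the same granule
    name under the same collection yields the same identifier."""
    return uuid.uuid5(uuid.NAMESPACE_URL,
                      f"{collection_doi}/{granule_name}")

gid = make_granule_id(COLLECTION_DOI, "granule_2003_180.nc")
```

    A name-based UUID is one design choice among several; random (version 4) UUIDs also work but require a registry to re-associate identifiers after reprocessing.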

  15. Reproducibility of 3D chromatin configuration reconstructions

    PubMed Central

    Segal, Mark R.; Xiong, Hao; Capurso, Daniel; Vazquez, Mariel; Arsuaga, Javier

    2014-01-01

    It is widely recognized that the three-dimensional (3D) architecture of eukaryotic chromatin plays an important role in processes such as gene regulation and cancer-driving gene fusions. Observing or inferring this 3D structure at even modest resolutions had been problematic, since genomes are highly condensed and traditional assays are coarse. However, recently devised high-throughput molecular techniques have changed this situation. Notably, the development of a suite of chromatin conformation capture (CCC) assays has enabled elicitation of contacts—spatially close chromosomal loci—which have provided insights into chromatin architecture. Most analysis of CCC data has focused on the contact level, with less effort directed toward obtaining 3D reconstructions and evaluating the accuracy and reproducibility thereof. While questions of accuracy must be addressed experimentally, questions of reproducibility can be addressed statistically—the purpose of this paper. We use a constrained optimization technique to reconstruct chromatin configurations for a number of closely related yeast datasets and assess reproducibility using four metrics that measure the distance between 3D configurations. The first of these, Procrustes fitting, measures configuration closeness after applying reflection, rotation, translation, and scaling-based alignment of the structures. The others base comparisons on the within-configuration inter-point distance matrix. Inferential results for these metrics rely on suitable permutation approaches. Results indicate that distance matrix-based approaches are preferable to Procrustes analysis, not because of the metrics per se but rather on account of the ability to customize permutation schemes to handle within-chromosome contiguity. It has recently been emphasized that the use of constrained optimization approaches to 3D architecture reconstruction is prone to being trapped in local minima. Our methods of reproducibility assessment provide a
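
    A minimal sketch of the Procrustes metric described above (centering, scaling, and an SVD-based optimal rotation before measuring residual distance), applied to toy 3D point configurations rather than the paper's chromatin reconstructions:

```python
# Procrustes fitting sketch: align configuration Y onto X by
# translation, scaling, and rotation/reflection, then report the
# residual Frobenius distance. Toy data, not chromatin coordinates.
import numpy as np

def procrustes_distance(X, Y):
    """Residual distance after optimal similarity alignment of Y onto X."""
    Xc = X - X.mean(axis=0)          # remove translation
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)     # remove scale
    Yc = Yc / np.linalg.norm(Yc)
    U, s, Vt = np.linalg.svd(Xc.T @ Yc)
    R = (U @ Vt).T                   # optimal rotation of Yc onto Xc
    return np.linalg.norm(Xc - Yc @ R)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))     # toy 3D "configuration"
theta = 0.5
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Y = 2.0 * X @ Rz + 5.0               # rotated, scaled, shifted copy
d = procrustes_distance(X, Y)
```

    Because Y differs from X only by a similarity transform, the residual distance is numerically zero; two genuinely different reconstructions would leave a positive residual, which is what the paper's permutation tests assess.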

  16. Application of AFINCH as a tool for evaluating the effects of streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the southeast Lake Michigan hydrologic subregion

    USGS Publications Warehouse

    Koltun, G.F.; Holtschlag, David J.

    2010-01-01

    Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas and the resulting flow values are accumulated to compute streamflows in stream reaches, which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period, which spanned water years 1971–2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages). Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the
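
    The censoring scheme described above (30 random subsets of the 75 gages per censoring level) can be sketched as follows; the gage names and random seed are illustrative, and AFINCH itself is not shown:

```python
# Random-subsampling (censoring) sketch: for each censoring level,
# draw 30 random subsets of the gage network with roughly that
# fraction of gages removed. Gage names and seed are illustrative.
import random

GAGES = [f"gage_{i:02d}" for i in range(75)]
LEVELS = [0.10, 0.20, 0.30, 0.40, 0.50, 0.75]

def censored_subsets(gages, level, n_subsets=30, seed=42):
    """Return n_subsets random gage subsets with ~level fraction censored."""
    rng = random.Random(seed)
    keep = len(gages) - round(level * len(gages))
    return [rng.sample(gages, keep) for _ in range(n_subsets)]

sets = {lvl: censored_subsets(GAGES, lvl) for lvl in LEVELS}
```

    Each censored subset would then be fed to the model, and the spread of streamflow estimates across the 30 subsets at a level indicates how network thinning degrades precision at ungaged flow lines.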

  17. Open Science and Research Reproducibility

    PubMed Central

    Munafò, Marcus

    2016-01-01

    Many scientists, journals and funders are concerned about the low reproducibility of many scientific findings. One approach that may serve to improve the reliability and robustness of research is open science. Here I argue that the process of pre-registering study protocols, sharing study materials and data, and posting preprints of manuscripts may serve to improve quality control procedures at every stage of the research pipeline, and in turn improve the reproducibility of published work. PMID:27350794

  18. Precision Measurement.

    ERIC Educational Resources Information Center

    Radius, Marcie; And Others

    The manual provides information for precision measurement (counting of movements per minute of a chosen activity) of achievement in special education students. Initial sections give guidelines for the teacher, parent, and student to follow for various methods of charting behavior. It is explained that precision measurement is a way to measure the…

  19. Tissue Doppler imaging reproducibility during exercise.

    PubMed

    Bougault, V; Nottin, S; Noltin, S; Doucende, G; Obert, P

    2008-05-01

    Tissue Doppler imaging (TDI) is an echocardiographic technique used during exercise to improve the accuracy of cardiovascular diagnosis. The validity of TDI requires its reproducibility, which had never been challenged during moderate- to maximal-intensity exercise. The present study was specifically designed to assess transmitral Doppler and pulsed TDI reproducibility in 19 healthy men who underwent two identical semi-supine maximal exercise tests on a cycle ergometer. Systolic (S') and diastolic (E') tissue velocities at the septal and lateral walls, as well as early transmitral velocities (E), were assessed during exercise up to maximal effort. The data were compared between the two tests at 40%, 60%, 80% and 100% of maximal aerobic power. Despite upper-body movements and hyperventilation, good-quality echocardiographic images were obtained in each case. Regardless of exercise intensity, no differences were noticed between the two tests for any measurement. The variation coefficients for Doppler variables ranged from 3% to 9% over the transition from rest to maximal exercise. The random measurement error was, on average, 5.8 cm/s for E' and 4.4 cm/s for S'. Overall, the reproducibility of TDI was acceptable. Tissue Doppler imaging can be used to accurately evaluate LV diastolic and/or systolic function over this range of exercise intensity.
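
    Variation coefficients of the kind reported above can be computed from paired test/retest values as a within-pair coefficient of variation. A sketch with hypothetical tissue-velocity values, not the study's data:

```python
# Test-retest variation coefficient sketch: for each subject, the
# within-pair SD is |a - b| / sqrt(2); the CV is that SD divided by
# the pair mean. Values are illustrative, not the study's data.
import math
import statistics

def cv_percent(test1, test2):
    """Mean within-pair coefficient of variation, in percent."""
    cvs = []
    for a, b in zip(test1, test2):
        pair_mean = (a + b) / 2
        pair_sd = abs(a - b) / math.sqrt(2)
        cvs.append(100 * pair_sd / pair_mean)
    return statistics.mean(cvs)

e1 = [14.0, 15.5, 13.2, 16.0]  # hypothetical E' velocities, test 1 (cm/s)
e2 = [14.6, 15.0, 13.9, 15.2]  # hypothetical E' velocities, test 2 (cm/s)
cv = cv_percent(e1, e2)
```

    A CV in the single digits, as reported in the abstract, indicates that repeat measurements differ by only a few percent of their mean.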

  20. Towards Reproducibility in Computational Hydrology

    NASA Astrophysics Data System (ADS)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    The ability to reproduce published scientific findings is a foundational principle of scientific research. Independent observation helps to verify the legitimacy of individual findings; build upon sound observations so that we can evolve hypotheses (and models) of how catchments function; and move them from specific circumstances to more general theory. The rise of computational research has brought increased focus on the issue of reproducibility across the broader scientific literature. This is because publications based on computational research typically do not contain sufficient information to enable the results to be reproduced, and therefore verified. Given the rise of computational analysis in hydrology over the past 30 years, to what extent is reproducibility, or a lack thereof, a problem in hydrology? Whilst much hydrological code is accessible, the actual code and workflow that produced, and therefore documents the provenance of, published scientific findings is rarely available. We argue that in order to advance and make more robust the process of hypothesis testing and knowledge creation within the computational hydrological community, we need to build on existing open data initiatives and adopt common standards and infrastructures to: first, make code re-useable and easy to find through consistent use of metadata; second, publish well-documented workflows that combine re-useable code together with data to enable published scientific findings to be reproduced; finally, use unique persistent identifiers (e.g. DOIs) to reference re-useable and reproducible code, thereby clearly showing the provenance of published scientific findings. Whilst extra effort is required to make work reproducible, there are benefits to both the individual and the broader community in doing so, which will improve the credibility of the science in the face of the need for societies to adapt to changing hydrological environments.

  1. Precision Medicine

    PubMed Central

    Cholerton, Brenna; Larson, Eric B.; Quinn, Joseph F.; Zabetian, Cyrus P.; Mata, Ignacio F.; Keene, C. Dirk; Flanagan, Margaret; Crane, Paul K.; Grabowski, Thomas J.; Montine, Kathleen S.; Montine, Thomas J.

    2017-01-01

    Three key elements to precision medicine are stratification by risk, detection of pathophysiological processes as early as possible (even before clinical presentation), and alignment of mechanism of action of intervention(s) with an individual's molecular driver(s) of disease. Used for decades in the management of some rare diseases and now gaining broad currency in cancer care, a precision medicine approach is beginning to be adapted to cognitive impairment and dementia. This review focuses on the application of precision medicine to address the clinical and biological complexity of two common neurodegenerative causes of dementia: Alzheimer disease and Parkinson disease. PMID:26724389

  2. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to also include systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections, and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy of ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than that of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  3. Precision optical navigation guidance system

    NASA Astrophysics Data System (ADS)

    Starodubov, D.; McCormick, K.; Nolan, P.; Johnson, D.; Dellosa, M.; Volfson, L.; Fallahpour, A.; Willner, A.

    2016-05-01

    We present the new precision optical navigation guidance system approach that provides continuous, high quality range and bearing data to fixed wing aircraft during landing approach to an aircraft carrier. The system uses infrared optical communications to measure range between ship and aircraft with accuracy and precision better than 1 meter at ranges more than 7.5 km. The innovative receiver design measures bearing from aircraft to ship with accuracy and precision better than 0.5 mRad. The system provides real-time range and bearing updates to multiple aircraft at rates up to several kHz, and duplex data transmission between ship and aircraft.

  4. Rotary head type reproducing apparatus

    DOEpatents

    Takayama, Nobutoshi; Edakubo, Hiroo; Kozuki, Susumu; Takei, Masahiro; Nagasawa, Kenichi

    1986-01-01

    In an apparatus of the kind arranged to reproduce, with a plurality of rotary heads, an information signal from a record bearing medium having many recording tracks which are parallel to each other with the information signal recorded therein and with a plurality of different pilot signals of different frequencies also recorded one by one, one in each of the recording tracks, a plurality of different reference signals of different frequencies are simultaneously generated. A tracking error is detected by using the different reference signals together with the pilot signals which are included in signals reproduced from the plurality of rotary heads.

  5. Precise Orbit Determination for ALOS

    NASA Technical Reports Server (NTRS)

    Nakamura, Ryo; Nakamura, Shinichi; Kudo, Nobuo; Katagiri, Seiji

    2007-01-01

    The Advanced Land Observing Satellite (ALOS) has been developed to contribute to the fields of mapping, precise regional land-coverage observation, disaster monitoring, and resource surveying. Because the mounted sensors need high geometrical accuracy, precise orbit determination for ALOS is essential for satisfying the mission objectives; ALOS therefore carries a GPS receiver and a Laser Reflector (LR) for Satellite Laser Ranging (SLR). This paper deals with the precise orbit determination experiments for ALOS using the Global and High Accuracy Trajectory determination System (GUTS) and the evaluation of the orbit determination accuracy by SLR data. The results show that, even though the GPS receiver loses lock on GPS signals more frequently than expected, the GPS-based orbit is consistent with the SLR-based orbit. Considering the 1-sigma error, orbit determination accuracy of a few decimeters (peak-to-peak) was achieved.

  6. Reproducible Bioinformatics Research for Biologists

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  7. Reproducible research in computational science.

    PubMed

    Peng, Roger D

    2011-12-02

    Computational science has led to exciting new developments, but the nature of the work has exposed limitations in our ability to evaluate published findings. Reproducibility has the potential to serve as a minimum standard for judging scientific claims when full independent replication of a study is not possible.

  8. Reproducibility of UAV-based earth topography reconstructions based on Structure-from-Motion algorithms

    NASA Astrophysics Data System (ADS)

    Clapuyt, Francois; Vanacker, Veerle; Van Oost, Kristof

    2016-05-01

    The combination of UAV-based aerial pictures and the Structure-from-Motion (SfM) algorithm provides an efficient, low-cost, and rapid framework for remote sensing and monitoring of dynamic natural environments. This methodology is particularly suitable for repeated topographic surveys in remote or poorly accessible areas. However, temporal analysis of landform topography requires high accuracy of measurements and reproducibility of the methodology, as differencing of digital surface models leads to error propagation. In order to assess the repeatability of the SfM technique, we surveyed a study area characterized by gentle topography with a UAV platform equipped with a standard reflex camera, and varied the focal length of the camera and the location of georeferencing targets between flights. Comparison of different SfM-derived topography datasets shows that the precision of measurements is on the order of centimetres for identical replications, which highlights the excellent performance of the SfM workflow, all parameters being equal. The precision is one order of magnitude higher for 3D topographic reconstructions involving independent sets of ground control points, which results from the fact that the accuracy of the localisation of ground control points strongly propagates into the final results.
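
    The error propagation noted above when differencing two surface models follows the usual root-sum-of-squares rule for independent errors. A sketch with hypothetical per-survey precisions:

```python
# Error propagation sketch for differencing two surface models: the
# 1-sigma uncertainty of an elevation difference is the root sum of
# squares of the two surveys' errors. Values are hypothetical.
import math

def dod_uncertainty(sigma_a, sigma_b):
    """Propagated 1-sigma error of a DEM-of-difference cell (m)."""
    return math.sqrt(sigma_a**2 + sigma_b**2)

sigma_survey1 = 0.03  # hypothetical per-survey precision, m
sigma_survey2 = 0.04  # hypothetical per-survey precision, m
sigma_dod = dod_uncertainty(sigma_survey1, sigma_survey2)

# A cell's elevation change is significant at ~95% confidence
# only if it exceeds 1.96 times the propagated uncertainty.
lod_95 = 1.96 * sigma_dod
```

    This is why reproducibility matters for change detection: if independent surveys only agree to decimetres, centimetre-scale topographic change cannot be distinguished from survey noise.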

  9. Precision metrology.

    PubMed

    Jiang, X; Whitehouse, D J

    2012-08-28

    This article is a summary of the Satellite Meeting, which followed on from the Discussion Meeting at the Royal Society on 'Ultra-precision engineering: from physics to manufacture', held at the Kavli Royal Society International Centre, Chicheley Hall, Buckinghamshire, UK. The meeting was restricted to 18 invited experts in various aspects of precision metrology, drawn from academia in the UK and Sweden, government institutes in the UK and Germany, and the global aerospace industry. It examined and identified metrology problem areas that are, or may be, limiting future developments in precision engineering and, in particular, metrology. The Satellite Meeting was intended to produce a vision that will inspire academia and industry to address the solutions of those open-ended problems identified. The discussion covered three areas, namely the function of engineering parts, their measurement, and their manufacture, as well as their interactions.

  10. Regional cerebral blood flow utilizing the gamma camera and xenon inhalation: reproducibility and clinical applications

    SciTech Connect

    Fox, R.A.; Knuckey, N.W.; Fleay, R.F.; Stokes, B.A.; Van der Schaaf, A.; Surveyor, I.

    1985-11-01

    A modified collimator and standard gamma camera have been used to measure regional cerebral blood flow (CBF) following inhalation of radioactive xenon. The collimator and a simplified analysis technique enable excellent statistical accuracy with acceptable precision in the measurement of grey-matter blood flow. The validity of the analysis was supported by computer modelling and patient measurements. Sixty-one patients with subarachnoid haemorrhage, cerebrovascular disease or dementia were retested to determine the reproducibility of the method; the measured coefficient of variation was 6.5%. Of forty-six patients with a proven subarachnoid haemorrhage, 15 subsequently developed cerebral ischaemia. These showed a CBF of 42 ± 6 ml·min⁻¹·(100 g brain)⁻¹, compared with 49 ± 11 ml·min⁻¹·(100 g brain)⁻¹ for the remainder. There is evidence that decreasing blood flow and low initial flow correlate with the subsequent onset of cerebral ischaemia.
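
The reproducibility figure quoted above is a coefficient of variation over repeated measurements. A hedged sketch of one common way such a test-retest CV is computed (the abstract does not give the paper's exact formula, so this is illustrative only):

```python
import numpy as np

def within_subject_cv(test, retest):
    """Root-mean-square within-subject CV from paired measurements:
    within-pair SD (n = 2) divided by the pair mean, RMS-averaged
    across subjects."""
    test, retest = np.asarray(test, float), np.asarray(retest, float)
    sd = np.abs(test - retest) / np.sqrt(2)  # within-pair SD for two measurements
    mean = (test + retest) / 2
    return np.sqrt(np.mean((sd / mean) ** 2))
```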

  11. Reproducibility of airway wall thickness measurements

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael; Kuhnigk, Jan-Martin; Krass, Stefan; Owsijewitsch, Michael; de Hoop, Bartjan; Peitgen, Heinz-Otto

    2010-03-01

    Airway remodeling and accompanying changes in wall thickness are known to be a major symptom of chronic obstructive pulmonary disease (COPD), associated with reduced lung function in diseased individuals. Further investigation of this disease, as well as monitoring of disease progression and treatment effect, demands accurate and reproducible assessment of airway wall thickness in CT datasets. With wall thicknesses in the sub-millimeter range, this task remains challenging even with today's high-resolution CT datasets. To provide accurate measurements, partial volume effects must be taken into account. The Full-Width-at-Half-Maximum (FWHM) method has been shown to be inappropriate for small airways [1,2], and several improved algorithms for objective quantification of airway wall thickness have been proposed [1-8]. In this paper, we describe an algorithm based on a closed-form solution proposed by Weinheimer et al. [7]. We locally estimate the lung density parameter required for the closed-form solution to account for possible variations in parenchyma density between different lung regions, inspiration states and contrast agent concentrations. The general accuracy of the algorithm is evaluated using basic tubular software and hardware phantoms. Furthermore, we present results on the reproducibility of the algorithm on clinical CT scans with varying reconstruction kernels and repeated acquisitions, which is crucial for longitudinal observations.

  12. Reproducibility of neuroimaging analyses across operating systems.

    PubMed

    Glatard, Tristan; Lewis, Lindsay B; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed.
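
The Dice coefficient used above to compare segmentations, and the kind of single-precision accumulation drift the authors identify, can both be illustrated in a few lines (toy data, not the study's pipelines):

```python
import numpy as np

# Dice coefficient between two binary segmentations -- the overlap
# measure used to compare subcortical classifications across systems.
def dice(a, b):
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy illustration of precision-dependent drift: summing many small
# float32 values differs from the float64 sum of the same values.
vals = np.full(10**6, 0.1, dtype=np.float32)
drift = abs(float(vals.sum(dtype=np.float32)) - float(vals.astype(np.float64).sum()))
```

The drift is tiny per operation but, as the abstract notes, it accumulates through long pipelines, which is why the authors suggest more precise floating-point representations in critical sections.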

  13. Precision translator

    DOEpatents

    Reedy, Robert P.; Crawford, Daniel W.

    1984-01-01

    A precision translator for focusing a beam of light on the end of a glass fiber. It comprises two tuning-fork-like members rigidly connected to each other; each member has two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. The translator is made of simple parts and retains its adjustment even under rough handling.

  14. Precision translator

    DOEpatents

    Reedy, R.P.; Crawford, D.W.

    1982-03-09

    A precision translator for focusing a beam of light on the end of a glass fiber. It comprises two tuning-fork-like members rigidly connected to each other; each member has two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. The translator is made of simple parts and retains its adjustment even under rough handling.

  15. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Based on research in the area of precise ephemerides for GPS satellites, the following observations can be made on the status of, and future work needed on, orbit accuracy. Several aspects need to be addressed in discussing the determination of precise orbits, including force models, kinematic models, measurement models, and data reduction/estimation methods. Although each of these aspects was studied in research efforts at CSR, only points pertaining to force modeling are addressed here.

  16. Precise Measurement for Manufacturing

    NASA Technical Reports Server (NTRS)

    2003-01-01

    A metrology instrument known as PhaseCam supports a wide range of applications, from testing large optics to controlling factory production processes. This dynamic interferometer system enables precise measurement of three-dimensional surfaces in the manufacturing industry, delivering speed and high-resolution accuracy in even the most challenging environments. Compact and reliable, PhaseCam enables users to make interferometric measurements right on the factory floor. The system can be configured for many different applications, including mirror phasing, vacuum/cryogenic testing, motion/modal analysis, and flow visualization.

  17. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The emphasis of this grant was focused on precision ephemerides for the Global Positioning System (GPS) satellites for geodynamics applications. During the period of this grant, major activities were in the areas of thermal force modeling, numerical integration accuracy improvement for eclipsing satellites, analysis of GIG '91 campaign data, and the Southwest Pacific campaign data analysis.

  18. Making neurophysiological data analysis reproducible: why and how?

    PubMed

    Delescluse, Matthieu; Franconville, Romain; Joucla, Sébastien; Lieury, Tiffany; Pouzat, Christophe

    2012-01-01

    Reproducible data analysis is an approach aiming at complementing classical printed scientific articles with everything required to independently reproduce the results they present. "Everything" covers here: the data, the computer codes, and a precise description of how the code was applied to the data. A brief history of this approach is presented first, starting with what economists have called replication since the early eighties and ending with what is now called reproducible research in computational, data-analysis-oriented fields like statistics and signal processing. Since efficient tools are instrumental for routine implementation of these approaches, a description of some of the available ones is presented next. A toy example then demonstrates the use of two open-source software programs for reproducible data analysis: the "Sweave family" and the org-mode of emacs. The former is bound to R, while the latter can be used with R, Matlab, Python and many more "generalist" data-processing programs. Both solutions can be used with the Unix-like, Windows and Mac families of operating systems. It is argued that neuroscientists could communicate their results much more efficiently by adopting the reproducible research paradigm, from their lab books all the way to their articles, theses and books.

  19. Accuracy of the PAR corneal topography system with spatial misalignment.

    PubMed

    Belin, M W; Zloty, P

    1993-01-01

    The PAR Corneal Topography System is a computerized corneal imaging system that uses close-range raster photogrammetry to measure and produce a topographic map of the corneal surface. Raster photogrammetry is a standard method of extracting object information by projecting a known pattern onto an object and recording the distortion when it is viewed from an oblique angle. Unlike Placido-disc-based videokeratoscopes, the PAR system requires neither a smooth reflective surface nor precise spatial alignment for accurate imaging. We studied the accuracy of the system under purposeful misalignment (defocusing) of the test object and determined its ability to image freshly deepithelialized, keratectomized, and photoablated corneas. The PAR system was both accurate and reproducible in imaging calibrated spheres within a defined zone in space. Whole cadaver eyes were imaged before and immediately after removal of the epithelium, lamellar keratectomy, and laser photoablation. The system demonstrated the ability to image irregular, deepithelialized, and keratectomized corneas. The ability to maintain accuracy without precise alignment, and the facility to image freshly deepithelialized and keratectomized corneas, may make the system suitable for intraoperative refractive monitoring.

  20. GOCE Precise Science Orbits

    NASA Astrophysics Data System (ADS)

    Bock, Heike; Jäggi, Adrian; Meyer, Ulrich; Beutler, Gerhard; Heinze, Markus; Hugentobler, Urs

    GOCE (Gravity field and steady-state Ocean Circulation Explorer), the first ESA (European Space Agency) Earth Explorer Core Mission, is dedicated to gravity field recovery of unprecedented accuracy using data from the gradiometer, its primary science instrument. Data from the secondary instrument, the 12-channel dual-frequency GPS (Global Positioning System) receiver, are used for precise orbit determination of the satellite. These orbits are used to accurately geolocate the gradiometer observations and to provide complementary information on the long-wavelength part of the gravity field. A precise science orbit (PSO) product is provided by the GOCE High-Level Processing Facility (HPF) with a precision of about 2 cm and a one-week latency. The reduced-dynamic and kinematic orbit determination strategies for the PSO product are presented, together with results from about one year of data. The focus is on the improvement achieved by using empirically derived azimuth- and elevation-dependent variations of the phase center of the GOCE GPS antenna. The orbits are validated with satellite laser ranging (SLR) measurements.

  1. Reproducibility of Research Algorithms in GOES-R Operational Software

    NASA Astrophysics Data System (ADS)

    Kennelly, E.; Botos, C.; Snell, H. E.; Steinfelt, E.; Khanna, R.; Zaccheo, T.

    2012-12-01

    The research-to-operations transition for satellite observations is an area of active interest, as identified by the National Research Council Committee on NASA-NOAA Transition from Research to Operations. Their report recommends improved transitional processes for bridging technology from research to operations. Assuring that operational algorithm results match research baselines, termed reproducibility in this paper, is a critical step in the GOES-R transition process. This paper defines reproducibility methods and measurements for verifying that operationally implemented algorithms conform to research baselines, demonstrated with examples from GOES-R software development. The approach measures reproducibility for implemented algorithms that produce continuous data with a traditional goodness-of-fit measure (i.e., a correlation coefficient), while reproducibility for discrete categorical data is measured using a classification matrix. These reproducibility metrics have been incorporated into a set of Test Tools developed for GOES-R, and the software processes have been developed to include these metrics to validate both the scientific and numerical implementation of the GOES-R algorithms. In this work, we outline the test and validation processes and summarize current results for GOES-R Level 2+ algorithms.
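
The two metrics described, a goodness-of-fit measure for continuous products and a classification matrix for categorical ones, can be sketched as follows (function names are illustrative, not taken from the GOES-R Test Tools):

```python
import numpy as np

def continuous_reproducibility(research, operational):
    """Pearson correlation between research-baseline and operational output."""
    return np.corrcoef(research, operational)[0, 1]

def classification_matrix(research, operational, n_classes):
    """Confusion matrix for categorical products:
    rows = research class, columns = operational class."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for r, o in zip(research, operational):
        m[r, o] += 1
    return m
```

A perfectly reproduced continuous product gives a correlation of 1, and a perfectly reproduced categorical product gives a purely diagonal matrix; off-diagonal counts localize where the operational implementation diverges from the research baseline.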

  2. Reproducibility of the Structural Connectome Reconstruction across Diffusion Methods.

    PubMed

    Prčkovska, Vesna; Rodrigues, Paulo; Puigdellivol Sanchez, Ana; Ramos, Marc; Andorra, Magi; Martinez-Heras, Eloy; Falcon, Carles; Prats-Galino, Albert; Villoslada, Pablo

    2016-01-01

    Analysis of structural connectomes can lead to powerful insights about the brain's organization and damage. However, the accuracy and reproducibility of the structural connectome as constructed with different acquisition and reconstruction techniques are not well defined. In this work, we evaluated the reproducibility of structural connectome techniques by performing test-retest (same-day) and longitudinal (after one month) studies, as well as analyzing graph-based measures, on data acquired from 22 healthy volunteers (6 subjects were used for the longitudinal study). We compared connectivity matrices and tract reconstructions obtained with the acquisition schemes most typical of clinical application: diffusion tensor imaging (DTI), high angular resolution diffusion imaging (HARDI), and diffusion spectrum imaging (DSI). All techniques showed high reproducibility in the test-retest analysis (correlation > 0.9). However, HARDI was the only technique with low variability (2%) in the longitudinal assessment (1-month interval). The intraclass coefficient analysis showed the highest reproducibility for the DTI connectome, albeit with sparser connections than HARDI and DSI. Qualitative (neuroanatomical) assessment of selected tracts confirmed the quantitative results, showing that HARDI detected most of the analyzed fiber groups and fanning fibers. In conclusion, HARDI acquisition showed the most balanced trade-off between high reproducibility of the connectome, a higher rate of detection of paths and fanning fibers, and intermediate acquisition times (10-15 minutes), although at the cost of more aberrant fibers.

  3. Indirect orthodontic bonding - a modified technique for improved efficiency and precision

    PubMed Central

    Nojima, Lincoln Issamu; Araújo, Adriele Silveira; Alves, Matheus

    2015-01-01

    INTRODUCTION: The indirect bonding technique optimizes fixed appliance installation at the orthodontic office, ensuring precise bracket positioning, among other advantages. In the laboratory phase of this clinical procedure, the materials and methods employed in creating the transfer tray are decisive for accuracy. OBJECTIVE: This article describes a simple, efficient and reproducible indirect bonding technique that allows the procedure to be carried out successfully. Variables influencing orthodontic bonding are analyzed and discussed in order to aid professionals wishing to adopt the indirect bonding technique routinely in their clinical practice. PMID:26154464

  4. Reproducibility in density functional theory calculations of solids.

    PubMed

    Lejaeghere, Kurt; Bihlmayer, Gustav; Björkman, Torbjörn; Blaha, Peter; Blügel, Stefan; Blum, Volker; Caliste, Damien; Castelli, Ivano E; Clark, Stewart J; Dal Corso, Andrea; de Gironcoli, Stefano; Deutsch, Thierry; Dewhurst, John Kay; Di Marco, Igor; Draxl, Claudia; Dułak, Marcin; Eriksson, Olle; Flores-Livas, José A; Garrity, Kevin F; Genovese, Luigi; Giannozzi, Paolo; Giantomassi, Matteo; Goedecker, Stefan; Gonze, Xavier; Grånäs, Oscar; Gross, E K U; Gulans, Andris; Gygi, François; Hamann, D R; Hasnip, Phil J; Holzwarth, N A W; Iuşan, Diana; Jochym, Dominik B; Jollet, François; Jones, Daniel; Kresse, Georg; Koepernik, Klaus; Küçükbenli, Emine; Kvashnin, Yaroslav O; Locht, Inka L M; Lubeck, Sven; Marsman, Martijn; Marzari, Nicola; Nitzsche, Ulrike; Nordström, Lars; Ozaki, Taisuke; Paulatto, Lorenzo; Pickard, Chris J; Poelmans, Ward; Probert, Matt I J; Refson, Keith; Richter, Manuel; Rignanese, Gian-Marco; Saha, Santanu; Scheffler, Matthias; Schlipf, Martin; Schwarz, Karlheinz; Sharma, Sangeeta; Tavazza, Francesca; Thunström, Patrik; Tkatchenko, Alexandre; Torrent, Marc; Vanderbilt, David; van Setten, Michiel J; Van Speybroeck, Veronique; Wills, John M; Yates, Jonathan R; Zhang, Guo-Xu; Cottenier, Stefaan

    2016-03-25

    The widespread popularity of density functional theory has given rise to an extensive range of dedicated codes for predicting molecular and crystalline properties. However, each code implements the formalism in a different way, raising questions about the reproducibility of such predictions. We report the results of a community-wide effort that compared 15 solid-state codes, using 40 different potentials or basis set types, to assess the quality of the Perdew-Burke-Ernzerhof equations of state for 71 elemental crystals. We conclude that predictions from recent codes and pseudopotentials agree very well, with pairwise differences that are comparable to those between different high-precision experiments. Older methods, however, have less precise agreement. Our benchmark provides a framework for users and developers to document the precision of new applications and methodological improvements.

  5. Progress on glass ceramic ZERODUR enabling nanometer precision

    NASA Astrophysics Data System (ADS)

    Jedamzik, Ralf; Kunisch, Clemens; Nieder, Johannes; Weber, Peter; Westerhoff, Thomas

    2016-03-01

    The semiconductor industry is making continuous progress in shrinking feature sizes, developing technologies and processes to achieve features below 10 nm. The required overlay specification for successful production is in the range of one nanometer or even smaller. Consequently, materials designed into the metrology systems of exposure or inspection tools need to fulfill ever tighter specifications on the coefficient of thermal expansion (CTE). The glass ceramic ZERODUR® is a well-established material in critical components of microlithography wafer steppers and is offered with an extremely low coefficient of thermal expansion, the tightest tolerance available on the market. SCHOTT is continuously improving its manufacturing processes and its methods to measure and characterize the CTE behavior of ZERODUR®. This paper focuses on the "Advanced Dilatometer" for CTE determination, developed at SCHOTT in recent years and introduced into production in Q1 2015. The achievements in improving absolute CTE measurement accuracy and reproducibility are described in detail and compared to the CTE measurement accuracy reported by the Physikalisch-Technische Bundesanstalt (PTB), the National Metrology Institute of Germany. CTE homogeneity is of highest importance for achieving nanometer precision at larger scales. Additionally, the paper presents data on short-scale CTE homogeneity and its improvement over the last two years. The data presented in this paper explain the capability of ZERODUR® to enable the extreme precision required for future generations of lithography equipment and processes.

  6. Precision spectroscopy of Helium

    SciTech Connect

    Cancio, P.; Giusfredi, G.; Mazzotti, D.; De Natale, P.; De Mauro, C.; Krachmalnicoff, V.; Inguscio, M.

    2005-05-05

    Accurate quantum-electrodynamics (QED) tests of the simplest bound three-body atomic system are performed by precise laser spectroscopic measurements in atomic helium. In this paper, we present a review of measurements between triplet states at 1083 nm (2³S-2³P) and at 389 nm (2³S-3³P). In ⁴He, these data have been used to measure the fine structure of the triplet P levels and hence to determine the fine-structure constant by comparison with equally accurate theoretical calculations. Moreover, the absolute frequencies of the optical transitions have been used for Lamb-shift determinations of the levels involved, with unprecedented accuracy. Finally, the nuclear structure of the He isotopes, and in particular the nuclear charge radius, is determined using hyperfine-structure and isotope-shift measurements.

  7. Precision grid and hand motion for accurate needle insertion in brachytherapy

    SciTech Connect

    McGill, Carl S.; Schwartz, Jonathon A.; Moore, Jason Z.; McLaughlin, Patrick W.; Shih, Albert J.

    2011-08-15

    Purpose: In prostate brachytherapy, a grid is used to guide the needle tip toward a preplanned location within the tissue. During insertion the needle deflects en route, resulting in target misplacement. In this paper, 18-gauge needle insertion experiments into phantom were performed to test the effects of three parameters: the clearance between the grid hole and the needle, the thickness of the grid, and the needle insertion speed. A measurement apparatus consisting of two datum surfaces and a digital depth gauge was developed to quantify needle deflection. Methods: A gauge repeatability and reproducibility (GR&R) test was performed on the measurement apparatus, which proved capable of measuring a 2 mm tolerance from the target. Replicated experiments were performed on a 2³ factorial design (three parameters at two levels), and the analysis included averages and standard deviations along with an analysis of variance (ANOVA) to find significant single-factor and two-way interaction effects. Results: A grid with a tight clearance hole combined with slow needle speed increased the precision and accuracy of needle insertion. The tight grid was vital to enhancing precision and accuracy at both slow and fast insertion speeds; additionally, at slow speed the tight, thick grid further improved needle precision and accuracy. Conclusions: In summary, the tight grid is important regardless of speed. The grid design, which shows the capability to reduce needle deflection, can potentially be implemented in the brachytherapy procedure.
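
A 2³ factorial design estimates each factor's main effect by contrasting the mean response at its two levels. A toy sketch of that calculation (factor names follow the abstract; the data structure and any values are hypothetical):

```python
import itertools
import numpy as np

def main_effects(responses):
    """responses: dict mapping (clearance, thickness, speed) coded
    levels in {-1, +1} to the mean measured deflection.
    Returns the main effect of each factor: mean response at +1
    minus mean response at -1."""
    runs = list(itertools.product((-1, 1), repeat=3))
    y = np.array([responses[r] for r in runs], float)
    x = np.array(runs, float)
    return {name: float(np.mean(y[x[:, i] == 1]) - np.mean(y[x[:, i] == -1]))
            for i, name in enumerate(("clearance", "thickness", "speed"))}
```

In a full analysis these contrasts feed the ANOVA that tests which single and two-way interaction effects are significant.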

  8. Reliability and reproducibility of Kienböck's disease staging.

    PubMed

    Goeminne, S; Degreef, I; De Smet, L

    2010-09-01

    We evaluated the interobserver reliability and intraobserver reproducibility of the Lichtman et al. classification for Kienböck's disease by having four observers with different levels of experience stage 70 sets of wrist radiographs at different points in time. Paired comparisons of the observations identified agreement in 63% of cases and a mean weighted kappa coefficient of 0.64, confirming interobserver reliability. The stage of the involved lunate was reproduced in 78% of the observations, with a mean weighted kappa coefficient of 0.81, showing intraobserver reproducibility. This classification for Kienböck's disease has good reliability and reproducibility.
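
A weighted kappa coefficient like those reported above penalizes disagreements in proportion to their distance on the ordinal staging scale. A minimal linearly weighted implementation (a sketch, not the authors' exact computation):

```python
import numpy as np

def weighted_kappa(rater1, rater2, n_categories):
    """Linearly weighted Cohen's kappa for two ratings of the same cases,
    with ordinal categories coded 0..n_categories-1."""
    obs = np.zeros((n_categories, n_categories))
    for a, b in zip(rater1, rater2):
        obs[a, b] += 1
    obs /= obs.sum()
    # Chance-expected joint distribution from the marginals.
    expected = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    i, j = np.indices((n_categories, n_categories))
    w = np.abs(i - j) / (n_categories - 1)  # linear disagreement weights
    return 1.0 - (w * obs).sum() / (w * expected).sum()
```

Perfect agreement gives kappa = 1; agreement no better than chance gives 0, which is why values of 0.64 and 0.81 are read as good reliability and reproducibility.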

  9. Precision ozone vapor pressure measurements

    NASA Technical Reports Server (NTRS)

    Hanson, D.; Mauersberger, K.

    1985-01-01

    The vapor pressure above liquid ozone has been measured with high accuracy over the temperature range 85 to 95 K. At the boiling point of liquid argon (87.3 K), an ozone vapor pressure of 0.0403 Torr was obtained with an accuracy of ±0.7 percent. A least-squares fit of the data provided the Clausius-Clapeyron equation for liquid ozone; a latent heat of 82.7 cal/g was calculated. These high-precision vapor pressure data are expected to aid research on atmospheric ozone and many laboratory ozone studies, such as measurements of cross sections and reaction rates.
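
A Clausius-Clapeyron fit of this kind regresses ln P on 1/T; the slope B gives the latent heat via L = B·R. A sketch with synthetic data (the constants and synthetic values below are illustrative, not the paper's published fit):

```python
import numpy as np

R_GAS = 8.314  # gas constant, J mol^-1 K^-1
M_O3 = 48.0    # molar mass of ozone, g mol^-1

def fit_clausius_clapeyron(T, P):
    """Least-squares fit of ln P = A - B / T; returns (A, B), B in kelvin."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T, float),
                                  np.log(np.asarray(P, float)), 1)
    return intercept, -slope  # slope of ln P vs 1/T is -B

def latent_heat_cal_per_g(B):
    """Latent heat L = B * R, converted from J/mol to cal/g."""
    return B * R_GAS / 4.184 / M_O3
```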

  10. Utility, reliability and reproducibility of immunoassay multiplex kits.

    PubMed

    Tighe, Paddy; Negm, Ola; Todd, Ian; Fairclough, Lucy

    2013-05-15

    Multiplex technologies are becoming increasingly important in biomarker studies because they enable patterns of biomolecules to be examined, providing a more comprehensive depiction of disease than individual biomarkers. They are crucial in deciphering these patterns, but it is essential that they be validated for reliability, reproducibility and precision. Here we outline the theoretical basis of a variety of multiplex technologies: bead-based multiplex immunoassays (Cytometric Bead Arrays, Luminex™ and Bio-Plex Pro™), microtitre-plate-based arrays (Meso Scale Discovery (MSD) and Quansys Biosciences Q-Plex), slide-based arrays (FastQuant™) and reverse-phase protein arrays. Their utility, reliability and reproducibility are discussed.

  11. Matter power spectrum and the challenge of percent accuracy

    NASA Astrophysics Data System (ADS)

    Schneider, Aurel; Teyssier, Romain; Potter, Doug; Stadel, Joachim; Onions, Julian; Reed, Darren S.; Smith, Robert E.; Springel, Volker; Pearce, Frazer R.; Scoccimarro, Roman

    2016-04-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying the main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k ≤ 1 h Mpc⁻¹ and to within three percent at k ≤ 10 h Mpc⁻¹. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k ≤ 2 h Mpc⁻¹. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code, Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h⁻¹ Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of Mₚ = 10⁹ h⁻¹ M☉ is required to conservatively obtain one percent precision in the matter power spectrum. As a consequence, numerical simulations covering the large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.

  12. Quality, precision and accuracy of the maximum No. 40 anemometer

    SciTech Connect

    Obermeir, J.; Blittersdorf, D.

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes to the default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used on large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds, significantly larger than the estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere rather than in a laminar wind tunnel, indicating that atmospheric turbulence alters the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind-tunnel calibration testing for cup anemometers.

  13. Tomography & Geochemistry: Precision, Repeatability, Accuracy and Joint Interpretations

    NASA Astrophysics Data System (ADS)

    Foulger, G. R.; Panza, G. F.; Artemieva, I. M.; Bastow, I. D.; Cammarano, F.; Doglioni, C.; Evans, J. R.; Hamilton, W. B.; Julian, B. R.; Lustrino, M.; Thybo, H.; Yanovskaya, T. B.

    2015-12-01

    Seismic tomography can reveal the spatial seismic structure of the mantle, but has little ability to constrain composition, phase or temperature. In contrast, petrology and geochemistry can give insights into mantle composition, but have severely limited spatial control on magma sources. For these reasons, results from these disciplines are often interpreted jointly. Nevertheless, the limitations of each method are often underestimated, and the underlying assumptions de-emphasized. The limitations of seismic tomography include its restricted ability to image the three-dimensional structure of the mantle in detail or to determine the strengths of anomalies with certainty. Despite this, published seismic anomaly strengths are often unjustifiably translated directly into physical parameters. Tomography yields seismological parameters such as wave speed and attenuation, not geological or thermal parameters. Much of the mantle is poorly sampled by seismic waves, and resolution- and error-assessment methods do not express the true uncertainties. These and other problems have been highlighted in recent years by multiple tomography experiments performed by different research groups in areas of particular interest, e.g., Yellowstone; the repeatability of the results is often poorer than the calculated resolutions. The ability of geochemistry and petrology to identify magma sources and locations is typically overestimated; these methods have little ability to determine source depths. Models that assign geochemical signatures to specific layers in the mantle, including the transition zone, the lower mantle, and the core-mantle boundary, are based on speculative models that cannot be verified and for which viable, less-astonishing alternatives are available. Our knowledge of the size, distribution and location of protoliths, of the metasomatism of magma sources, of the nature of the partial-melting and melt-extraction processes, of the mixing of disparate melts, and of the re-assimilation of crust and mantle lithosphere by rising melt is poor. Interpretations of seismic tomography, of petrologic and geochemical observations, and of all three together are ambiguous, and this needs to be emphasized more in presenting interpretations so that the viability of the models can be assessed more reliably.

  14. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

    The study compared three measures of foliar injury: (i) mean percent leaf area injured of all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variations was compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 µL L⁻¹ SO₂ or 0.3 µL L⁻¹ ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with that of all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, the day-to-day variation accounted for <18% of the total. Reader bias in assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R² = 0.89-0.91) of the visual assessments against the grid assessments.
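    The per-reader bias adjustment described above (a simple linear regression of visual scores against unbiased grid assessments, which can then be inverted to correct a reader's scores) can be sketched as follows. All data values below are hypothetical illustrations, not numbers from the study:

```python
import numpy as np

# Hypothetical data: grid-based (unbiased) and one reader's visual injury scores (%)
grid = np.array([5.0, 12.0, 20.0, 35.0, 50.0, 65.0, 80.0])
visual = np.array([8.0, 16.0, 27.0, 41.0, 58.0, 74.0, 90.0])  # reader overestimates

# Fit visual = a + b * grid by ordinary least squares
b, a = np.polyfit(grid, visual, 1)

def correct(v):
    """Invert the fitted line to map a reader's visual score onto the grid scale."""
    return (v - a) / b

r2 = np.corrcoef(grid, visual)[0, 1] ** 2
print(f"slope={b:.3f}, intercept={a:.3f}, R^2={r2:.3f}")
print(f"visual 50% -> {correct(50.0):.1f}% on the grid scale")
```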

  15. Global positioning system measurements for crustal deformation: Precision and accuracy

    USGS Publications Warehouse

    Prescott, W.H.; Davis, J.L.; Svarc, J.L.

    1989-01-01

    Analysis of 27 repeated observations of Global Positioning System (GPS) position-difference vectors, up to 11 kilometers in length, indicates that the standard deviation of the measurements is 4 millimeters for the north component, 6 millimeters for the east component, and 10 to 20 millimeters for the vertical component. The uncertainty grows slowly with increasing vector length. At 225 kilometers, the standard deviation of the measurement is 6, 11, and 40 millimeters for the north, east, and up components, respectively. Measurements with GPS and Geodolite, an electromagnetic distance-measuring system, over distances of 10 to 40 kilometers agree within 0.2 part per million. Measurements with GPS and very long baseline interferometry of the 225-kilometer vector agree within 0.05 part per million.
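    A common way to summarize baseline-length-dependent GPS uncertainty is a constant term combined in quadrature with a length-proportional (parts-per-million) term. The sketch below is a back-of-envelope illustration with parameters chosen to roughly match the north-component numbers quoted above; it is not the error model fitted in the paper:

```python
import math

def sigma_mm(a_mm, b_ppm, length_km):
    """Baseline error model: constant part plus length-proportional (ppm) part,
    combined in quadrature. Note 1 ppm corresponds to 1 mm per km."""
    return math.sqrt(a_mm**2 + (b_ppm * length_km)**2)

# Illustrative parameters only: chosen to roughly reproduce the north-component
# values quoted in the abstract (4 mm at 11 km, 6 mm at 225 km).
a, b = 4.0, 0.02
print(f"11 km:  {sigma_mm(a, b, 11):.1f} mm")
print(f"225 km: {sigma_mm(a, b, 225):.1f} mm")
```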

  16. Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images

    PubMed Central

    Frey, Eric C.; Humm, John L.; Ljungberg, Michael

    2012-01-01

    The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429

  17. Precision and Accuracy of Intercontinental Distance Determinations Using Radio Interferometry.

    DTIC Science & Technology

    1983-07-01

    Variations of the dispersion of at least this amount occur in the Mark III system. We cannot place an upper bound on the variations of the dispersion... final two terms will be 0.002 psec and 0.020 psec when t23 = 2.0x10⁻⁶ sec/sec and v12 = 0.02 sec. The latter two values are upper bounds for Earth based... neglected in the derivations in Section 4.1. We will now analyze each of these terms and try to place upper bounds on their contributions to the

  18. Arrival Metering Precision Study

    NASA Technical Reports Server (NTRS)

    Prevot, Thomas; Mercer, Joey; Homola, Jeffrey; Hunt, Sarah; Gomez, Ashley; Bienert, Nancy; Omar, Faisal; Kraut, Joshua; Brasil, Connie; Wu, Minghong, G.

    2015-01-01

    This paper describes the background, method, and results of the Arrival Metering Precision Study (AMPS) conducted in the Airspace Operations Laboratory at NASA Ames Research Center in May 2014. The simulation study measured delivery accuracy, flight efficiency, controller workload, and acceptability of time-based metering operations to a meter fix at the terminal area boundary for different resolution levels of metering delay times displayed to the air traffic controllers and different levels of airspeed information made available to the Time-Based Flow Management (TBFM) system computing the delay. The results show that the resolution of the delay countdown timer (DCT) on the controller's display has a significant impact on delivery accuracy at the meter fix. The 10-second-rounded and 1-minute-rounded DCT resolutions resulted in more accurate delivery than the 1-minute-truncated resolution and were preferred by the controllers. Using the speeds the controllers entered into the fourth line of the data tag to update the delay computation in TBFM in high- and low-altitude sectors increased air traffic control efficiency and reduced fuel burn for arriving aircraft during time-based metering.
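    The three DCT resolutions compared in the study can be illustrated with a small sketch; the function name and the sample delay value are hypothetical, and Python's round() (banker's rounding) stands in for whatever rounding rule the display actually used:

```python
def dct_display(delay_s: float, mode: str) -> int:
    """Displayed delay (in seconds) for a given countdown-timer resolution."""
    if mode == "10s_rounded":
        return int(round(delay_s / 10.0) * 10)
    if mode == "1min_rounded":
        return int(round(delay_s / 60.0) * 60)
    if mode == "1min_truncated":
        return int(delay_s // 60) * 60
    raise ValueError(mode)

delay = 95.0  # an actual computed delay of 1 min 35 s
for mode in ("10s_rounded", "1min_rounded", "1min_truncated"):
    print(mode, dct_display(delay, mode), "s")
```

The truncated display understates this delay by 35 s, whereas the 10-s display is within 5 s of the true value, which is consistent with the delivery-accuracy differences reported above.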

  19. Mixed-Precision Spectral Deferred Correction: Preprint

    SciTech Connect

    Grout, Ray W. S.

    2015-09-02

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
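    A full SDC scheme is beyond a short sketch, but the underlying principle (cheap correction sweeps in reduced precision, residuals in full precision, a final answer at full accuracy) is the same one used in classic mixed-precision iterative refinement, illustrated here for a linear system. This is an analogy for the idea, not the SDC or S3D implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)  # a well-conditioned test system
b = rng.standard_normal(n)

A32 = A.astype(np.float32)  # the cheap, reduced-precision operator

# Initial solve entirely in float32
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

for _ in range(5):
    r = b - A @ x                                    # residual in full precision
    dx = np.linalg.solve(A32, r.astype(np.float32))  # correction in reduced precision
    x = x + dx.astype(np.float64)

err = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(f"relative residual after refinement: {err:.2e}")
```

Although every solve uses float32 arithmetic, the high-precision residual drives the iteration to double-precision accuracy, mirroring how low-precision SDC sweeps need not degrade the final solution.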

  20. Explorations in statistics: statistical facets of reproducibility.

    PubMed

    Curran-Everett, Douglas

    2016-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eleventh installment of Explorations in Statistics explores statistical facets of reproducibility. If we obtain an experimental result that is scientifically meaningful and statistically unusual, we would like to know that our result reflects a general biological phenomenon that another researcher could reproduce if (s)he repeated our experiment. But more often than not, we may learn this researcher cannot replicate our result. The National Institutes of Health and the Federation of American Societies for Experimental Biology have created training modules and outlined strategies to help improve the reproducibility of research. These particular approaches are necessary, but they are not sufficient. The principles of hypothesis testing and estimation are inherent to the notion of reproducibility in science. If we want to improve the reproducibility of our research, then we need to rethink how we apply fundamental concepts of statistics to our science.
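    One statistical facet of reproducibility mentioned above can be made concrete with a simulation: even when a true effect exists, the probability that an independent replication is again "significant" equals the power of the test, not 1 − α. A minimal sketch with assumed effect size and sample size (a z-test with known variance stands in for a t-test):

```python
import numpy as np

rng = np.random.default_rng(1)

def one_experiment(effect=0.5, n=20):
    """Two-group z-test with known unit variance; returns True if significant."""
    a = rng.normal(effect, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    z = (a.mean() - b.mean()) / np.sqrt(2.0 / n)
    return abs(z) > 1.96  # two-sided, alpha = 0.05

trials = 5000
first = np.array([one_experiment() for _ in range(trials)])
second = np.array([one_experiment() for _ in range(trials)])

power = first.mean()
replicated = second[first].mean()  # P(replication significant | original significant)
print(f"power ~ {power:.2f}; replication rate among positive results ~ {replicated:.2f}")
```

With these assumed parameters the replication rate is only around one in three, despite the effect being real, which is one reason replication failures alone do not prove the original result was spurious.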

  1. A consensus on protein structure accuracy in NMR?

    PubMed

    Billeter, Martin

    2015-02-03

    The precision of an NMR structure may be manipulated by calculation parameters such as calibration factors. Its accuracy is, however, a different issue. In this issue of Structure, Buchner and Güntert present "consensus structure bundles," where precision analysis allows estimation of accuracy.

  2. Precise Fabrication of Electromagnetic-Levitation Coils

    NASA Technical Reports Server (NTRS)

    Ethridge, E.; Curreri, P.; Theiss, J.; Abbaschian, G.

    1985-01-01

    Winding copper tubing on jig ensures reproducible performance. Sequence of steps insures consistent fabrication of levitation-and-melting coils. New method enables technician to produce eight coils per day, 95 percent of them acceptable. Method employs precise step-by-step procedure on specially designed wrapping and winding jig.

  3. The Economics of Reproducibility in Preclinical Research.

    PubMed

    Freedman, Leonard P; Cockburn, Iain M; Simcoe, Timothy S

    2015-06-01

    Low reproducibility rates within life science research undermine cumulative knowledge production and contribute to both delays and costs of therapeutic drug development. An analysis of past studies indicates that the cumulative (total) prevalence of irreproducible preclinical research exceeds 50%, resulting in approximately US$28,000,000,000 (US$28B)/year spent on preclinical research that is not reproducible-in the United States alone. We outline a framework for solutions and a plan for long-term improvements in reproducibility rates that will help to accelerate the discovery of life-saving therapies and cures.
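    The headline figure follows from back-of-envelope arithmetic; the total annual US preclinical spend used below is an assumed value for illustration, not a number taken from the paper:

```python
# Assumed total US preclinical research spend (US$/year) -- illustrative only
us_preclinical_spend = 56.4e9
irreproducible_rate = 0.50  # the >50% cumulative prevalence cited above

wasted = us_preclinical_spend * irreproducible_rate
print(f"~US${wasted / 1e9:.1f}B/year spent on irreproducible preclinical research")
```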

  4. Precision injection molding of freeform optics

    NASA Astrophysics Data System (ADS)

    Fang, Fengzhou; Zhang, Nan; Zhang, Xiaodong

    2016-08-01

    Precision injection molding is the most efficient mass production technology for manufacturing plastic optics. Applications of plastic optics in field of imaging, illumination, and concentration demonstrate a variety of complex surface forms, developing from conventional plano and spherical surfaces to aspheric and freeform surfaces. It requires high optical quality with high form accuracy and lower residual stresses, which challenges both optical tool inserts machining and precision injection molding process. The present paper reviews recent progress in mold tool machining and precision injection molding, with more emphasis on precision injection molding. The challenges and future development trend are also discussed.

  5. Precision powder feeder

    DOEpatents

    Schlienger, M. Eric; Schmale, David T.; Oliver, Michael S.

    2001-07-10

    A new class of precision powder feeders is disclosed. These feeders provide a precision flow of a wide range of powdered materials, while remaining robust against jamming or damage. These feeders can be precisely controlled by feedback mechanisms.

  6. Reproducible research in vadose zone sciences

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A significant portion of present-day soil and Earth science research is computational, involving complex data analysis pipelines, advanced mathematical and statistical models, and sophisticated computer codes. Opportunities for scientific progress are greatly diminished if reproducing and building o...

  7. Thou Shalt Be Reproducible! A Technology Perspective

    PubMed Central

    Mair, Patrick

    2016-01-01

    This article elaborates on reproducibility in psychology from a technological viewpoint. Modern open source computational environments that foster reproducibility throughout the whole research life cycle, and to which emerging psychology researchers should be sensitized, are shown and explained. First, data archiving platforms that make datasets publicly available are presented. Second, R is advocated as the data-analytic lingua franca in psychology for achieving reproducible statistical analysis. Third, dynamic report generation environments for writing reproducible manuscripts that integrate text, data analysis, and statistical outputs such as figures and tables in a single document are described. Supplementary materials are provided in order to get the reader started with these technologies. PMID:27471486

  8. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and their resulting influence on topographic elevation measurements. ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to have: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, and vertical precision 3 cm.

  9. Reproducibility of UAV-based photogrammetric surface models

    NASA Astrophysics Data System (ADS)

    Anders, Niels; Smith, Mike; Cammeraat, Erik; Keesstra, Saskia

    2016-04-01

    Soil erosion, rapid geomorphological change and vegetation degradation are major threats to the human and natural environment in many regions. Unmanned Aerial Vehicles (UAVs) and Structure-from-Motion (SfM) photogrammetry are invaluable tools for the collection of highly detailed aerial imagery and subsequent low cost production of 3D landscapes for an assessment of landscape change. Despite the widespread use of UAVs for image acquisition in monitoring applications, the reproducibility of UAV data products has not been explored in detail. This paper investigates this reproducibility by comparing the surface models and orthophotos derived from different UAV flights that vary in flight direction and altitude. The study area is located near Lorca, Murcia, SE Spain, which is a semi-arid medium-relief locale. The area is comprised of terraced agricultural fields that have been abandoned for about 40 years and have suffered subsequent damage through piping and gully erosion. In this work we focused upon variation in cell size, vertical and horizontal accuracy, and horizontal positioning of recognizable landscape features. The results suggest that flight altitude has a significant impact on reconstructed point density and related cell size, whilst flight direction affects the spatial distribution of vertical accuracy. The horizontal positioning of landscape features is relatively consistent between the different flights. We conclude that UAV data products are suitable for monitoring campaigns for land cover purposes or geomorphological mapping, but special care is required when used for monitoring changes in elevation.
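    Vertical agreement between two co-registered surface models from repeat flights is commonly summarized with a mean bias and an RMSE; a minimal sketch on synthetic rasters (all values hypothetical, and real comparisons would first need co-registration and masking of vegetation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical co-registered surface models (elevations in m) from repeat flights
dem_a = rng.normal(100.0, 5.0, (200, 200))
dem_b = dem_a + rng.normal(0.0, 0.05, (200, 200))  # ~5 cm of random vertical noise

diff = dem_b - dem_a
bias = diff.mean()                    # systematic vertical offset between flights
rmse = np.sqrt(np.mean(diff**2))      # overall vertical discrepancy
print(f"vertical bias {bias * 100:.2f} cm, RMSE {rmse * 100:.1f} cm")
```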

  10. CRKSPH - A Conservative Reproducing Kernel Smoothed Particle Hydrodynamics Scheme

    NASA Astrophysics Data System (ADS)

    Frontiere, Nicholas; Raskin, Cody D.; Owen, J. Michael

    2017-03-01

    We present a formulation of smoothed particle hydrodynamics (SPH) that utilizes a first-order consistent reproducing kernel, a smoothing function that exactly interpolates linear fields with particle tracers. Previous formulations using reproducing kernel (RK) interpolation have had difficulties maintaining conservation of momentum due to the fact that RK kernels are not, in general, spatially symmetric. Here, we utilize a reformulation of the fluid equations such that mass, linear momentum, and energy are all rigorously conserved without any assumption about kernel symmetries, while additionally maintaining approximate angular momentum conservation. Our approach starts from a rigorously consistent interpolation theory, where we derive the evolution equations to enforce the appropriate conservation properties, at the sacrifice of full consistency in the momentum equation. Additionally, by exploiting the increased accuracy of the RK method's gradient, we formulate a simple limiter for the artificial viscosity that reduces the excess diffusion normally incurred by the ordinary SPH artificial viscosity. Collectively, we call our suite of modifications to the traditional SPH scheme Conservative Reproducing Kernel SPH, or CRKSPH. CRKSPH retains many benefits of traditional SPH methods (such as preserving Galilean invariance and manifest conservation of mass, momentum, and energy) while improving on many of the shortcomings of SPH, particularly the overly aggressive artificial viscosity and zeroth-order inaccuracy. We compare CRKSPH to two different modern SPH formulations (pressure-based SPH and compatibly differenced SPH), demonstrating the advantages of our new formulation when modeling fluid mixing, strong shock, and adiabatic phenomena.
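    The first-order reproducing-kernel correction at the heart of such schemes can be illustrated in 1D: a symmetric base kernel is multiplied by a linear polynomial whose coefficients are solved from moment conditions, so that constant and linear fields are interpolated exactly even on irregular particle sets. This is a generic RK sketch (with a Gaussian base kernel chosen for simplicity), not the CRKSPH formulation itself:

```python
import numpy as np

rng = np.random.default_rng(3)
xp = np.sort(rng.uniform(0.0, 1.0, 40))  # irregularly spaced particle positions
fp = 2.0 * xp + 1.0                      # a linear field sampled at the particles

def rk_interpolate(x, xp, fp, h=0.1):
    """First-order-consistent reproducing-kernel interpolation in 1D."""
    w = np.exp(-(((x - xp) / h) ** 2))   # symmetric base kernel weights
    u = xp - x
    m0, m1, m2 = w.sum(), (u * w).sum(), (u * u * w).sum()
    a = 1.0 / (m0 - m1 * m1 / m2)        # enforce: sum of corrected weights = 1
    b = -a * m1 / m2                     # enforce: first moment of weights = 0
    return ((a + b * u) * w * fp).sum()  # corrected-kernel interpolant

x0 = 0.37
w0 = np.exp(-(((x0 - xp) / 0.1) ** 2))
shepard = (w0 * fp).sum() / w0.sum()     # plain normalized-kernel estimate
print(f"exact {2 * x0 + 1:.6f}  RK {rk_interpolate(x0, xp, fp):.6f}  "
      f"Shepard {shepard:.6f}")
```

The RK estimate matches the linear field to machine precision, while the uncorrected (Shepard) estimate generally does not, which is the zeroth-order inaccuracy of plain SPH interpolation mentioned above.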

  11. How reproducible are the measurements of leaf fluctuating asymmetry?

    PubMed Central

    2015-01-01

    Fluctuating asymmetry (FA) represents small, non-directional deviations from perfect symmetry in morphological characters. FA is generally assumed to increase in response to stress; therefore, FA is frequently used in ecological studies as an index of environmental or genetic stress experienced by an organism. The values of FA are usually small, and therefore the reliable detection of FA requires precise measurements. The reproducibility of fluctuating asymmetry (FA) was explored by comparing the results of measurements of scanned images of 100 leaves of downy birch (Betula pubescens) conducted by 31 volunteer scientists experienced in studying plant FA. The median values of FA varied significantly among the participants, from 0.000 to 0.074, and the coefficients of variation in FA for individual leaves ranged from 25% to 179%. The overall reproducibility of the results among the participants was rather low (0.074). Variation in instruments and methods used by the participants had little effect on the reported FA values, but the reproducibility of the measurements increased by 30% following exclusion of data provided by seven participants who had modified the suggested protocol for leaf measurements. The scientists working with plant FA are advised to pay utmost attention to adequate and detailed description of their data acquisition protocols in their forthcoming publications, because all characteristics of instruments and methods need to be controlled to increase the quality and reproducibility of the data. Whenever possible, the images of all measured objects and the results of primary measurements should be published as electronic appendices to scientific papers. PMID:26157612

  12. How reproducible are the measurements of leaf fluctuating asymmetry?

    PubMed

    Kozlov, Mikhail V

    2015-01-01

    Fluctuating asymmetry (FA) represents small, non-directional deviations from perfect symmetry in morphological characters. FA is generally assumed to increase in response to stress; therefore, FA is frequently used in ecological studies as an index of environmental or genetic stress experienced by an organism. The values of FA are usually small, and therefore the reliable detection of FA requires precise measurements. The reproducibility of fluctuating asymmetry (FA) was explored by comparing the results of measurements of scanned images of 100 leaves of downy birch (Betula pubescens) conducted by 31 volunteer scientists experienced in studying plant FA. The median values of FA varied significantly among the participants, from 0.000 to 0.074, and the coefficients of variation in FA for individual leaves ranged from 25% to 179%. The overall reproducibility of the results among the participants was rather low (0.074). Variation in instruments and methods used by the participants had little effect on the reported FA values, but the reproducibility of the measurements increased by 30% following exclusion of data provided by seven participants who had modified the suggested protocol for leaf measurements. The scientists working with plant FA are advised to pay utmost attention to adequate and detailed description of their data acquisition protocols in their forthcoming publications, because all characteristics of instruments and methods need to be controlled to increase the quality and reproducibility of the data. Whenever possible, the images of all measured objects and the results of primary measurements should be published as electronic appendices to scientific papers.
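    A common FA index, and the kind of between-reader variability the study quantifies, can be sketched as follows; the left/right measurements are hypothetical, and the normalization by trait size is one of several indices in use:

```python
import numpy as np

# Hypothetical left/right widths (mm) of one leaf as measured by five readers
left = np.array([21.3, 21.1, 21.6, 21.2, 21.4])
right = np.array([20.8, 20.9, 20.5, 21.0, 20.6])

# A common FA index: |L - R| normalized by trait size, (L + R) / 2
fa = np.abs(left - right) / ((left + right) / 2.0)

# Between-reader coefficient of variation for this leaf, in percent
cv = fa.std(ddof=1) / fa.mean() * 100.0
print(f"FA per reader: {np.round(fa, 4)}")
print(f"mean FA = {fa.mean():.4f}, between-reader CV = {cv:.0f}%")
```

Because FA values are small differences between two large measurements, modest measurement noise inflates the between-reader CV dramatically, which is consistent with the 25-179% per-leaf range reported above.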

  13. Assessing the reproducibility of discriminant function analyses

    PubMed Central

    Andrew, Rose L.; Albert, Arianne Y.K.; Renaut, Sebastien; Rennison, Diana J.; Bock, Dan G.

    2015-01-01

    Data are the foundation of empirical research, yet all too often the datasets underlying published papers are unavailable, incorrect, or poorly curated. This is a serious issue, because future researchers are then unable to validate published results or reuse data to explore new ideas and hypotheses. Even if data files are securely stored and accessible, they must also be accompanied by accurate labels and identifiers. To assess how often problems with metadata or data curation affect the reproducibility of published results, we attempted to reproduce Discriminant Function Analyses (DFAs) from the field of organismal biology. DFA is a commonly used statistical analysis that has changed little since its inception almost eight decades ago, and therefore provides an opportunity to test reproducibility among datasets of varying ages. Out of 100 papers we initially surveyed, fourteen were excluded because they did not present the common types of quantitative result from their DFA or gave insufficient details of their DFA. Of the remaining 86 datasets, there were 15 cases for which we were unable to confidently relate the dataset we received to the one used in the published analysis. The reasons included incomprehensible or absent variable labels, DFAs performed on an unspecified subset of the data, and incomplete datasets. We focused on reproducing three common summary statistics from DFAs: the percent variance explained, the percentage correctly assigned, and the largest discriminant function coefficient. The reproducibility of the first two was fairly high (20 of 26, and 44 of 60 datasets, respectively), whereas our success rate with the discriminant function coefficients was lower (15 of 26 datasets). When considering all three summary statistics, we were able to completely reproduce 46 (65%) of 71 datasets. While our results show that a majority of studies are reproducible, they highlight the fact that many studies still are not the
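    Two of the summary statistics considered above (percent correctly assigned and the largest discriminant function coefficient) can be computed with a minimal two-group Fisher discriminant; the data here are synthetic, and this sketch is not the authors' analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60  # specimens per group

# Hypothetical morphometric data: two groups, three measured traits
g0 = rng.normal([10.0, 5.0, 2.0], 1.0, (n, 3))
g1 = rng.normal([12.0, 5.5, 2.0], 1.0, (n, 3))
X = np.vstack([g0, g1])
y = np.repeat([0, 1], n)

# Fisher's linear discriminant: w = Sw^-1 (mu1 - mu0)
mu0, mu1 = g0.mean(axis=0), g1.mean(axis=0)
Sw = (np.cov(g0, rowvar=False) + np.cov(g1, rowvar=False)) * (n - 1)
w = np.linalg.solve(Sw, mu1 - mu0)

# Classify by projecting onto w, thresholding midway between projected group means
pred = (X @ w > (mu0 @ w + mu1 @ w) / 2.0).astype(int)
pct_correct = (pred == y).mean() * 100.0
print(f"percent correctly assigned: {pct_correct:.1f}%")
print(f"largest discriminant coefficient (abs): {np.abs(w).max():.3f}")
```

Reproducing such statistics from a published paper requires exactly the information the study found to be missing: which specimens were included and which variable is which.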

  14. High precision modeling for fundamental physics experiments

    NASA Astrophysics Data System (ADS)

    Rievers, Benny; Nesemann, Leo; Costea, Adrian; Andres, Michael; Stephan, Ernst P.; Laemmerzahl, Claus

    With growing experimental accuracies and high precision requirements for fundamental physics space missions, the need for accurate numerical modeling techniques is increasing. Motivated by the challenge of length stability in cavities and optical resonators, we propose the development of a high precision modeling tool for the simulation of thermomechanical effects up to a numerical precision of 10⁻²⁰. Exemplary calculations for simplified test cases demonstrate the general feasibility of high precision calculations and point out the high complexity of the task. A tool for high precision analysis of complex geometries will have to use new data types and advanced FE solver routines and implement new methods for the evaluation of numerical precision.
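    Why standard double precision (about 16 significant digits) cannot reach a 10⁻²⁰ target, and hence why new data types are needed, can be seen with Python's arbitrary-precision decimal arithmetic as a stand-in for the extended-precision types such a tool would use:

```python
from decimal import Decimal, getcontext

# In IEEE double precision, a perturbation of 1e-18 on 1.0 is lost entirely
x = (1.0 + 1e-18) - 1.0
print(x)  # -> 0.0

# With 40 significant digits of working precision, it is fully retained
getcontext().prec = 40
d = (Decimal(1) + Decimal("1e-18")) - Decimal(1)
print(d)  # -> 1E-18
```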

  15. Reproducibility and Comparability of Computational Models for Astrocyte Calcium Excitability

    PubMed Central

    Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena

    2017-01-01

    The scientific community across all disciplines faces the same challenges of ensuring accessibility, reproducibility, and efficient comparability of scientific results. Computational neuroscience is a rapidly developing field, where reproducibility and comparability of research results have gained increasing interest over the past years. As the number of computational models of brain functions is increasing, we chose to address reproducibility using four previously published computational models of astrocyte excitability as an example. Although not conventionally taken into account when modeling neuronal systems, astrocytes have been shown to take part in a variety of in vitro and in vivo phenomena including synaptic transmission. Two of the selected astrocyte models describe spontaneous calcium excitability, and the other two neurotransmitter-evoked calcium excitability. We specifically addressed how well the original simulation results can be reproduced with a reimplementation of the models. Additionally, we studied how well the selected models can be reused and whether they are comparable in other stimulation conditions and research settings. Unexpectedly, we found out that three of the model publications did not give all the necessary information required to reimplement the models. In addition, we were able to reproduce the original results of only one of the models completely based on the information given in the original publications and in the errata. We actually found errors in the equations provided by two of the model publications; after modifying the equations accordingly, the original results were reproduced more accurately. Even though the selected models were developed to describe the same biological event, namely astrocyte calcium excitability, the models behaved quite differently compared to one another. Our findings on a specific set of published astrocyte models stress the importance of proper validation of the models against experimental wet

  16. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements used to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders with identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without relying on combusting naturally occurring materials, thereby improving analytical accuracy.
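    Isotopic compositions are conventionally reported as delta values: the per mil deviation of a sample's isotope ratio from that of a standard. A minimal sketch (the VPDB reference ratio below is approximate, and the sample ratio is hypothetical):

```python
def delta_permil(r_sample: float, r_standard: float) -> float:
    """Isotope delta value in per mil relative to a reference standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Illustrative 13C/12C ratios; the VPDB standard ratio is approximately 0.011180
r_vpdb = 0.011180
r_sample = 0.011090  # hypothetical sample ratio
print(f"delta13C = {delta_permil(r_sample, r_vpdb):.2f} per mil")
```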

  17. The Challenge of Reproducibility and Accuracy in Nutrition Research: Resources and Pitfalls.

    PubMed

    Sorkin, Barbara C; Kuszak, Adam J; Williamson, John S; Hopp, D Craig; Betz, Joseph M

    2016-03-01

    Inconsistent and contradictory results from nutrition studies conducted by different investigators continue to emerge, in part because of the inherent variability of natural products, as well as the unknown and therefore uncontrolled variables in study populations and experimental designs. Given these challenges inherent in nutrition research, it is critical for the progress of the field that researchers strive to minimize variability within studies and enhance comparability between studies by optimizing the characterization, control, and reporting of products, reagents, and model systems used, as well as the rigor and reporting of experimental designs, protocols, and data analysis. Here we describe some recent developments relevant to research on plant-derived products used in nutrition research, highlight some resources for optimizing the characterization and reporting of research using these products, and describe some of the pitfalls that may be avoided by adherence to these recommendations.

  18. The Challenge of Reproducibility and Accuracy in Nutrition Research: Resources and Pitfalls1234

    PubMed Central

    Kuszak, Adam J; Williamson, John S; Hopp, D Craig; Betz, Joseph M

    2016-01-01

    Inconsistent and contradictory results from nutrition studies conducted by different investigators continue to emerge, in part because of the inherent variability of natural products, as well as the unknown and therefore uncontrolled variables in study populations and experimental designs. Given these challenges inherent in nutrition research, it is critical for the progress of the field that researchers strive to minimize variability within studies and enhance comparability between studies by optimizing the characterization, control, and reporting of products, reagents, and model systems used, as well as the rigor and reporting of experimental designs, protocols, and data analysis. Here we describe some recent developments relevant to research on plant-derived products used in nutrition research, highlight some resources for optimizing the characterization and reporting of research using these products, and describe some of the pitfalls that may be avoided by adherence to these recommendations. PMID:26980822

  19. High-precision measurement of magnetic penetration depth in superconducting films

    NASA Astrophysics Data System (ADS)

    He, X.; Gozar, A.; Sundling, R.; Božović, I.

    2016-11-01

    The magnetic penetration depth (λ) in thin superconducting films is usually measured by the mutual inductance technique. The accuracy of this method has been limited by uncertainties in the geometry of the solenoids and in the film position and thickness, by parasitic coupling between the coils, etc. Here, we present several improvements in the apparatus and the method. To ensure the precise thickness of the superconducting layer, we engineer the films at atomic level using atomic-layer-by-layer molecular beam epitaxy. In this way, we also eliminate secondary-phase precipitates, grain boundaries, and pinholes that are common with other deposition methods and that artificially increase the field transmission and thus the apparent λ. For better reproducibility, the thermal stability of our closed-cycle cryocooler used to control the temperature of the mutual inductance measurement has been significantly improved by inserting a custom-built thermal conductivity damper. Next, to minimize the uncertainties in the geometry, we fused a pair of small yet precisely wound coils into a single sapphire block machined to a high precision. The sample is spring-loaded to exactly the same position with respect to the solenoids. Altogether, we can measure the absolute value of λ with the accuracy better than ±1%.

  20. High-precision measurement of magnetic penetration depth in superconducting films

    SciTech Connect

    He, X.; Gozar, A.; Sundling, R.; Božović, I.

    2016-11-01

    We report that the magnetic penetration depth (λ) in thin superconducting films is usually measured by the mutual inductance technique. The accuracy of this method has been limited by uncertainties in the geometry of the solenoids and in the film position and thickness, by parasitic coupling between the coils, etc. Here, we present several improvements in the apparatus and the method. To ensure the precise thickness of the superconducting layer, we engineer the films at atomic level using atomic-layer-by-layer molecular beam epitaxy. In this way, we also eliminate secondary-phase precipitates, grain boundaries, and pinholes that are common with other deposition methods and that artificially increase the field transmission and thus the apparent λ. For better reproducibility, the thermal stability of our closed-cycle cryocooler used to control the temperature of the mutual inductance measurement has been significantly improved by inserting a custom-built thermal conductivity damper. Next, to minimize the uncertainties in the geometry, we fused a pair of small yet precisely wound coils into a single sapphire block machined to a high precision. Lastly, the sample is spring-loaded to exactly the same position with respect to the solenoids. Altogether, we can measure the absolute value of λ with the accuracy better than ±1%.

  1. High-precision measurement of magnetic penetration depth in superconducting films

    DOE PAGES

    He, X.; Gozar, A.; Sundling, R.; ...

    2016-11-01

    The magnetic penetration depth (λ) in thin superconducting films is usually measured by the mutual inductance technique. The accuracy of this method has been limited by uncertainties in the geometry of the solenoids and in the film position and thickness, by parasitic coupling between the coils, etc. Here, we present several improvements in the apparatus and the method. To ensure the precise thickness of the superconducting layer, we engineer the films at the atomic level using atomic-layer-by-layer molecular beam epitaxy. In this way, we also eliminate secondary-phase precipitates, grain boundaries, and pinholes that are common with other deposition methods and that artificially increase the field transmission and thus the apparent λ. For better reproducibility, the thermal stability of our closed-cycle cryocooler used to control the temperature of the mutual inductance measurement has been significantly improved by inserting a custom-built thermal conductivity damper. Next, to minimize the uncertainties in the geometry, we fused a pair of small yet precisely wound coils into a single sapphire block machined to high precision. Lastly, the sample is spring-loaded to exactly the same position with respect to the solenoids. Altogether, we can measure the absolute value of λ with an accuracy better than ±1%.

  2. Relevance relations for the concept of reproducibility

    PubMed Central

    Atmanspacher, H.; Bezzola Lambert, L.; Folkers, G.; Schubiger, P. A.

    2014-01-01

    The concept of reproducibility is widely considered a cornerstone of scientific methodology. However, recent problems with the reproducibility of empirical results in large-scale systems and in biomedical research have cast doubts on its universal and rigid applicability beyond the so-called basic sciences. Reproducibility is a particularly difficult issue in interdisciplinary work where the results to be reproduced typically refer to different levels of description of the system considered. In such cases, it is mandatory to distinguish between more and less relevant features, attributes or observables of the system, depending on the level at which they are described. For this reason, we propose a scheme for a general ‘relation of relevance’ between the level of complexity at which a system is considered and the granularity of its description. This relation implies relevance criteria for particular selected aspects of a system and its description, which can be operationally implemented by an interlevel relation called ‘contextual emergence’. It yields a formally sound and empirically applicable procedure to translate between descriptive levels and thus construct level-specific criteria for reproducibility in an overall consistent fashion. Relevance relations merged with contextual emergence challenge the old idea of one fundamental ontology from which everything else derives. At the same time, our proposal is specific enough to resist the backlash into a relativist patchwork of unconnected model fragments. PMID:24554574

  3. Reproducibility of graph metrics in FMRI networks.

    PubMed

    Telesford, Qawi K; Morgan, Ashley R; Hayasaka, Satoru; Simpson, Sean L; Barret, William; Kraft, Robert A; Mozolic, Jennifer L; Laurienti, Paul J

    2010-01-01

    The reliability of graph metrics calculated in network analysis is essential to the interpretation of complex network organization. These graph metrics are used to deduce the small-world properties in networks. In this study, we investigated the test-retest reliability of graph metrics from functional magnetic resonance imaging data collected for two runs in 45 healthy older adults. Graph metrics were calculated on data for both runs and compared using intraclass correlation coefficient (ICC) statistics and Bland-Altman (BA) plots. ICC scores describe the level of absolute agreement between two measurements and provide a measure of reproducibility. For mean graph metrics, ICC scores were high for clustering coefficient (ICC = 0.86), global efficiency (ICC = 0.83), path length (ICC = 0.79), and local efficiency (ICC = 0.75); the ICC score for degree was found to be low (ICC = 0.29). ICC scores were also used to generate reproducibility maps in brain space to test voxel-wise reproducibility for unsmoothed and smoothed data. Reproducibility was uniform across the brain for global efficiency and path length, but was only high in network hubs for clustering coefficient, local efficiency, and degree. BA plots were used to test the measurement repeatability of all graph metrics. All graph metrics fell within the limits for repeatability. Together, these results suggest that, with the exception of degree, mean graph metrics are reproducible and suitable for clinical studies. Further exploration is warranted to better understand reproducibility across the brain on a voxel-wise basis.
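
    An ICC of the kind used above can be computed directly from a subjects-by-runs matrix. Below is a minimal sketch of the two-way random-effects, absolute-agreement, single-measure form (ICC(2,1), a common choice for test-retest data; the abstract does not specify which variant was used), with invented numbers for five subjects:

```python
import numpy as np

def icc_2_1(Y):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).

    Y: (n_subjects, k_measurements) array, e.g. one graph metric for two runs.
    """
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    subj = Y.mean(axis=1)            # per-subject means
    meas = Y.mean(axis=0)            # per-run means
    msr = k * np.sum((subj - grand) ** 2) / (n - 1)   # between-subject mean square
    msc = n * np.sum((meas - grand) ** 2) / (k - 1)   # between-run mean square
    sse = np.sum((Y - subj[:, None] - meas[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two runs of a hypothetical metric (e.g. clustering coefficient), 5 subjects:
run1 = np.array([0.30, 0.35, 0.42, 0.50, 0.55])
run2 = np.array([0.31, 0.34, 0.43, 0.49, 0.56])
print(round(icc_2_1(np.column_stack([run1, run2])), 3))
```

    With nearly identical runs, as here, the ICC comes out close to 1; uncorrelated runs would drive it toward 0.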

  4. Precise predictions for slepton pair production

    SciTech Connect

    Ayres Freitas; Andreas von Manteuffel

    2002-11-07

    At a future linear collider, the masses and couplings of scalar leptons can be measured with high accuracy, thus requiring precise theoretical predictions for the relevant processes. In this work, after a discussion of the expected experimental precision, the complete one-loop corrections to smuon and selectron pair production in the MSSM are presented and the effect of different contributions to the result is analyzed.

  5. High-precision arithmetic in mathematical physics

    DOE PAGES

    Bailey, David H.; Borwein, Jonathan M.

    2015-05-12

    For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation, in the context of mathematical physics, and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
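
    The precision ceiling of IEEE arithmetic is easy to demonstrate: an addend smaller than half the 64-bit machine epsilon vanishes from a sum entirely, while software arbitrary-precision arithmetic (here Python's standard decimal module, standing in for the high-precision packages such an article surveys) retains it:

```python
from decimal import Decimal, getcontext

# In IEEE 64-bit arithmetic, 1e-17 is below half the machine epsilon (~1.1e-16),
# so it is rounded away when added to 1.0:
x = (1.0 + 1e-17) - 1.0
print(x)  # 0.0

# 50-digit decimal arithmetic keeps the tiny addend:
getcontext().prec = 50
d = (Decimal(1) + Decimal("1e-17")) - Decimal(1)
print(d)  # 1E-17
```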

  6. The use of imprecise processing to improve accuracy in weather & climate prediction

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T. N.

    2014-08-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and

  7. The use of imprecise processing to improve accuracy in weather and climate prediction

    SciTech Connect

    Düben, Peter D.; McNamara, Hugh; Palmer, T.N.

    2014-08-15

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and
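
    Low-precision arithmetic of the kind studied above can be emulated in software by zeroing low-order mantissa bits of the tendencies before they update the state. The following is a hypothetical sketch using a forward-Euler step of the Lorenz '96 model; the papers' actual emulators, fault models, and time-stepping schemes differ:

```python
import numpy as np

def truncate_precision(x, bits):
    """Emulate reduced floating-point precision by zeroing low mantissa bits
    of IEEE 754 doubles (keeping `bits` of the 52-bit significand)."""
    v = np.atleast_1d(np.asarray(x, dtype=np.float64)).copy()
    u = v.view(np.uint64)
    u &= ~np.uint64((1 << (52 - bits)) - 1)
    return v

def lorenz96_step(x, F=8.0, dt=0.05, bits=52):
    """One forward-Euler step of Lorenz '96, with the tendency truncated to
    `bits` mantissa bits before it updates the state."""
    dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    return x + dt * truncate_precision(dxdt, bits)

x = 8.0 + 0.01 * np.sin(np.arange(40))   # 40-variable state near the attractor
x_full = lorenz96_step(x, bits=52)       # full double precision
x_low = lorenz96_step(x, bits=10)        # ~10 mantissa bits, "half-like"
print(np.max(np.abs(x_full - x_low)))    # small but nonzero truncation error
```

    Repeating such steps and comparing trajectories is one way to probe, as in the abstract, whether reduced precision on selected scales substantially changes the large-scale behaviour.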

  8. Reproducibility of clinical events adjudications in a trial of venous thromboembolism prevention.

    PubMed

    Girard, P; Penaloza, A; Parent, F; Gable, B; Sanchez, O; Durieux, P; Hausfater, P; Dambrine, S; Meyer, G; Roy, P-M

    2017-04-01

    Essentials: The reproducibility of Clinical Events Committee (CEC) adjudications is almost unexplored. A random selection of events from a venous thromboembolism trial was blindly re-adjudicated. 'Unexplained sudden deaths' (possible fatal embolism) explained most discordant adjudications. A precise definition for CEC adjudication of this type of event is needed and proposed.
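
    The abstract does not state which agreement statistic was used; Cohen's kappa is a common chance-corrected measure for comparing two adjudications of the same events. A sketch with entirely made-up cause-of-death labels:

```python
def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two categorical ratings of the
    same items (here: two adjudications of the same clinical events)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    cats = set(labels_a) | set(labels_b)
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_exp = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

# Original vs. blinded re-adjudication of 10 deaths (PE = fatal pulmonary
# embolism, SD = unexplained sudden death, OT = other cause) -- invented data:
first  = ["PE", "SD", "OT", "OT", "PE", "SD", "OT", "PE", "OT", "OT"]
second = ["PE", "PE", "OT", "OT", "PE", "SD", "OT", "SD", "OT", "OT"]
print(round(cohens_kappa(first, second), 2))  # → 0.68
```

    The two discordant pairs here are PE/SD swaps, mirroring the abstract's finding that unexplained sudden deaths drive most disagreement.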

  9. Making Early Modern Medicine: Reproducing Swedish Bitters.

    PubMed

    Ahnfelt, Nils-Otto; Fors, Hjalmar

    2016-05-01

    Historians of science and medicine have rarely applied themselves to reproducing the experiments and practices of medicine and pharmacy. This paper delineates our efforts to reproduce "Swedish Bitters," an early modern composite medicine in wide European use from the 1730s to the present. In its original formulation, it was made from seven medicinal simples: aloe, rhubarb, saffron, myrrh, gentian, zedoary and agarikon. These were mixed in alcohol together with some theriac, a composite medicine of classical origin. The paper delineates the compositional history of Swedish Bitters and the medical rationale underlying its composition. It also describes how we go about reproducing the medicine in a laboratory using early modern pharmaceutical methods, and analysing it using contemporary methods of pharmaceutical chemistry. Our aim is twofold: first, to show how reproducing medicines may provide a path towards a deeper understanding of the role of sensual and practical knowledge in the wider context of early modern medical culture; and second, how it may yield interesting results from the point of view of contemporary pharmaceutical science.

  10. Natural Disasters: Earth Science Readings. Reproducibles.

    ERIC Educational Resources Information Center

    Lobb, Nancy

    Natural Disasters is a reproducible teacher book that explains what scientists believe to be the causes of a variety of natural disasters and suggests steps that teachers and students can take to be better prepared in the event of a natural disaster. It contains both student and teacher sections. Teacher sections include vocabulary, an answer key,…

  11. Europe Today: An Atlas of Reproducible Pages.

    ERIC Educational Resources Information Center

    World Eagle, Inc., Wellesley, MA.

    Illustrative black and white maps, tables, and graphs designed for clear reproducibility depict Europe's size, population, resources, commodities, trade, cities, schooling, jobs, energy, industry, demographic statistics, food, and agriculture. Also included are 33 United States Department of State individual country maps. This volume is intended…

  12. Precise Countersinking Tool

    NASA Technical Reports Server (NTRS)

    Jenkins, Eric S.; Smith, William N.

    1992-01-01

    Tool countersinks holes precisely with only portable drill; does not require costly machine tool. Replaceable pilot stub aligns axis of tool with centerline of hole. Ensures precise cut even with imprecise drill. Designed for relatively low cutting speeds.

  13. ITK: enabling reproducible research and open science

    PubMed Central

    McCormick, Matthew; Liu, Xiaoxiao; Jomier, Julien; Marion, Charles; Ibanez, Luis

    2014-01-01

    Reproducibility verification is essential to the practice of the scientific method. Researchers report their findings, which are strengthened as other independent groups in the scientific community share similar outcomes. In the many scientific fields where software has become a fundamental tool for capturing and analyzing data, this requirement of reproducibility implies that reliable and comprehensive software platforms and tools should be made available to the scientific community. The tools will empower them and the public to verify, through practice, the reproducibility of observations that are reported in the scientific literature. Medical image analysis is one of the fields in which the use of computational resources, both software and hardware, are an essential platform for performing experimental work. In this arena, the introduction of the Insight Toolkit (ITK) in 1999 has transformed the field and facilitates its progress by accelerating the rate at which algorithmic implementations are developed, tested, disseminated and improved. By building on the efficiency and quality of open source methodologies, ITK has provided the medical image community with an effective platform on which to build a daily workflow that incorporates the true scientific practices of reproducibility verification. This article describes the multiple tools, methodologies, and practices that the ITK community has adopted, refined, and followed during the past decade, in order to become one of the research communities with the most modern reproducibility verification infrastructure. For example, 207 contributors have created over 2400 unit tests that provide over 84% code line test coverage. The Insight Journal, an open publication journal associated with the toolkit, has seen over 360,000 publication downloads. The median normalized closeness centrality, a measure of knowledge flow, resulting from the distributed peer code review system was high, 0.46. PMID:24600387

  14. Regional Reproducibility of BOLD Calibration Parameter M, OEF and Resting-State CMRO2 Measurements with QUO2 MRI

    PubMed Central

    Lajoie, Isabelle; Tancredi, Felipe B.; Hoge, Richard D.

    2016-01-01

    The current generation of calibrated MRI methods goes beyond simple localization of task-related responses to allow the mapping of resting-state cerebral metabolic rate of oxygen (CMRO2) in micromolar units and estimation of oxygen extraction fraction (OEF). Prior to the adoption of such techniques in neuroscience research applications, knowledge about the precision and accuracy of absolute estimates of CMRO2 and OEF is crucial and remains unexplored to this day. In this study, we addressed the question of methodological precision by assessing the regional inter-subject variance and intra-subject reproducibility of the BOLD calibration parameter M, OEF, O2 delivery and absolute CMRO2 estimates derived from a state-of-the-art calibrated BOLD technique, the QUantitative O2 (QUO2) approach. We acquired simultaneous measurements of CBF and R2* at rest and during periods of hypercapnia (HC) and hyperoxia (HO) on two separate scan sessions within 24 hours using a clinical 3 T MRI scanner. Maps of M, OEF, oxygen delivery and CMRO2 were estimated from the measured end-tidal O2, CBF0, CBFHC/HO and R2*HC/HO. Variability was assessed by computing the between-subject coefficients of variation (bwCV) and within-subject CV (wsCV) in seven ROIs. Across all tests, GM-averaged values of CBF0, M, OEF, O2 delivery and CMRO2 were: 49.5 ± 6.4 mL/100 g/min, 4.69 ± 0.91%, 0.37 ± 0.06, 377 ± 51 μmol/100 g/min and 143 ± 34 μmol/100 g/min respectively. The variability of parameter estimates was found to be the lowest when averaged throughout all GM, with general trends toward higher CVs when averaged over smaller regions. Among the MRI measurements, the most reproducible across scans was R2*0 (wsCVGM = 0.33%) along with CBF0 (wsCVGM = 3.88%) and R2*HC (wsCVGM = 6.7%). CBFHC and R2*HO were found to have a higher intra-subject variability (wsCVGM = 22.4% and wsCVGM = 16% respectively), which is likely due to propagation of random measurement errors, especially for CBFHC due to the low…
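
    Within- and between-subject coefficients of variation of this kind can be estimated from two sessions per subject. A sketch with hypothetical CMRO2 values; the formulas are standard two-measurement estimators, not necessarily the exact ones used in the study:

```python
import numpy as np

def ws_bw_cv(session1, session2):
    """Within-subject and between-subject coefficients of variation (%) for a
    parameter measured once in each of two scan sessions per subject."""
    s1, s2 = np.asarray(session1, float), np.asarray(session2, float)
    subj_mean = (s1 + s2) / 2
    # wsCV: RMS of the per-subject repeat variance, relative to each mean.
    ws_var = ((s1 - s2) ** 2) / 2          # two-measurement within-subject variance
    ws_cv = 100 * np.sqrt(np.mean(ws_var / subj_mean ** 2))
    # bwCV: spread of the subject means relative to the group mean.
    bw_cv = 100 * np.std(subj_mean, ddof=1) / np.mean(subj_mean)
    return ws_cv, bw_cv

cmro2_s1 = [140, 150, 133, 160, 128]   # invented μmol/100 g/min, session 1
cmro2_s2 = [138, 155, 130, 158, 131]   # session 2, 24 h later
ws, bw = ws_bw_cv(cmro2_s1, cmro2_s2)
print(f"wsCV = {ws:.1f}%, bwCV = {bw:.1f}%")
```

    As in the abstract, a measurement can have low within-subject scatter (good reproducibility) even when the between-subject spread is large.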

  15. Precision agricultural systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture is a new farming practice that has been developing since the late 1980s. It has been variously referred to as precision farming, prescription farming, and site-specific crop management, to name but a few. There are numerous definitions for precision agriculture, but the central concept...

  16. Percolating silicon nanowire networks with highly reproducible electrical properties.

    PubMed

    Serre, Pauline; Mongillo, Massimo; Periwal, Priyanka; Baron, Thierry; Ternon, Céline

    2015-01-09

    Here, we report the morphological and electrical properties of self-assembled silicon nanowires networks, also called Si nanonets. At the macroscopic scale, the nanonets involve several millions of nanowires. So, the observed properties should result from large scale statistical averaging, minimizing thus the discrepancies that occur from one nanowire to another. Using a standard filtration procedure, the so-obtained Si nanonets are highly reproducible in terms of their morphology, with a Si nanowire density precisely controlled during the nanonet elaboration. In contrast to individual Si nanowires, the electrical properties of Si nanonets are highly consistent, as demonstrated here by the similar electrical properties obtained in hundreds of Si nanonet-based devices. The evolution of the Si nanonet conductance with Si nanowire density demonstrates that Si nanonets behave like standard percolating media despite the presence of numerous nanowire-nanowire intersecting junctions into the nanonets and the native oxide shell surrounding the Si nanowires. Moreover, when silicon oxidation is prevented or controlled, the electrical properties of Si nanonets are stable over many months. As a consequence, Si nanowire-based nanonets constitute a promising flexible material with stable and reproducible electrical properties at the macroscopic scale while being composed of nanoscale components, which confirms the Si nanonet potential for a wide range of applications including flexible electronic, sensing and photovoltaic applications.

  17. Ranking and averaging independent component analysis by reproducibility (RAICAR).

    PubMed

    Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping

    2008-06-01

    Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data.
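
    The alignment step of such a reproducibility ranking can be sketched as a greedy matching of components from two ICA realizations by absolute spatial correlation (the sign and order of ICA components are arbitrary, hence the absolute value). This is only the matching stage, under simplified assumptions, not the full RAICAR pipeline:

```python
import numpy as np

def match_components(A, B):
    """Greedily pair rows (components) of two ICA realizations by the
    absolute Pearson correlation of their spatial maps."""
    A = A - A.mean(axis=1, keepdims=True)
    B = B - B.mean(axis=1, keepdims=True)
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    corr = np.abs(A @ B.T)                 # |r| between every pair of maps
    pairs = []
    for _ in range(min(corr.shape)):       # best unmatched pair first
        i, j = np.unravel_index(np.argmax(corr), corr.shape)
        pairs.append((int(i), int(j), float(corr[i, j])))
        corr[i, :] = -1.0                  # retire the matched row and column
        corr[:, j] = -1.0
    return pairs

rng = np.random.default_rng(0)
maps = rng.standard_normal((3, 500))                   # realization 1
# Realization 2: the same maps, permuted and sign-flipped, plus noise:
maps2 = -maps[[2, 0, 1]] + 0.05 * rng.standard_normal((3, 500))
for i, j, r in match_components(maps, maps2):
    print(i, j, round(r, 3))
```

    In RAICAR proper, the between-realization correlations of each aligned component then serve as its reproducibility score for ranking and selective averaging.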

  18. Precision CW laser automatic tracking system investigated

    NASA Technical Reports Server (NTRS)

    Lang, K. T.; Lucy, R. F.; Mcgann, E. J.; Peters, C. J.

    1966-01-01

    Precision laser tracker capable of tracking a low acceleration target to an accuracy of about 20 microradians rms is being constructed and tested. This laser tracker has the advantage of discriminating against other optical sources and the capability of simultaneously measuring range.

  19. Automatic precision measurement of spectrograms.

    PubMed

    Palmer, B A; Sansonetti, C J; Andrew, K L

    1978-08-01

    A fully automatic comparator has been designed and implemented to determine precision wavelengths from high-resolution spectrograms. The accuracy attained is superior to that of an experienced operator using a semiautomatic comparator with a photoelectric setting device. The system consists of a comparator, slightly modified for simultaneous data acquisition from two parallel scans of the spectrogram, interfaced to a minicomputer. The software which controls the system embodies three innovations of special interest. (1) Data acquired from two parallel scans are compared and used to separate unknown from standard lines, to eliminate spurious lines, to identify blends of unknown with standard lines, to improve the accuracy of the measured positions, and to flag lines which require special examination. (2) Two classes of lines are automatically recognized and appropriate line finding methods are applied to each. This provides precision measurement for both simple and complex line profiles. (3) Wavelength determination using a least-squares fitted grating equation is supported in addition to polynomial interpolation. This is most useful in spectral regions with sparsely distributed standards. The principles and implementation of these techniques are fully described.
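
    The polynomial-interpolation mode of wavelength determination can be illustrated in a few lines: fit a dispersion polynomial to the measured positions of standard lines, then evaluate it at an unknown line's position. All positions and wavelengths below are invented for illustration:

```python
import numpy as np

# Hypothetical standard lines: comparator positions (mm) and their known
# reference wavelengths (Å):
pos_std = np.array([12.10, 34.75, 58.20, 81.05, 103.60])
wl_std = np.array([3100.42, 3155.88, 3212.31, 3266.07, 3318.95])

# Fit a low-order dispersion polynomial, position -> wavelength:
coeffs = np.polyfit(pos_std, wl_std, deg=2)

# Interpolate the wavelength of an unknown line measured at 47.30 mm:
wl_unknown = np.polyval(coeffs, 47.30)
print(f"{wl_unknown:.2f} Å")

# Residuals at the standards indicate the quality of the dispersion fit:
resid = wl_std - np.polyval(coeffs, pos_std)
print(np.max(np.abs(resid)))
```

    As the abstract notes, a physically motivated least-squares grating equation is preferable where standards are sparse; the polynomial fit above relies on standards bracketing the unknown line.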

  20. A Framework for Reproducible Latent Fingerprint Enhancements.

    PubMed

    Carasso, Alfred S

    2014-01-01

    Photoshop processing of latent fingerprints is the preferred methodology among law enforcement forensic experts, but that approach is not fully reproducible and may lead to questionable enhancements. Alternative, independent, fully reproducible enhancements, using IDL Histogram Equalization and IDL Adaptive Histogram Equalization, can produce better-defined ridge structures, along with considerable background information. Applying a systematic slow motion smoothing procedure to such IDL enhancements, based on the rapid FFT solution of a Lévy stable fractional diffusion equation, can attenuate background detail while preserving ridge information. The resulting smoothed latent print enhancements are comparable to, but distinct from, forensic Photoshop images suitable for input into automated fingerprint identification systems (AFIS). In addition, this progressive smoothing procedure can be reexamined by displaying the suite of progressively smoother IDL images. That suite can be stored, providing an audit trail that allows monitoring for possible loss of useful information, in transit to the user-selected optimal image. Such independent and fully reproducible enhancements provide a valuable frame of reference that may be helpful in informing, complementing, and possibly validating the forensic Photoshop methodology.
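
    The smoothing step described, damping Fourier modes according to a Lévy stable (fractional diffusion) kernel, amounts to multiplying the FFT by exp(-t·|ω|^α). A sketch on a 1-D signal for brevity (the paper works on 2-D images, and the parameter values here are illustrative only):

```python
import numpy as np

def levy_smooth(signal, t, alpha=1.5):
    """Smooth a 1-D signal by solving fractional diffusion for 'time' t via
    FFT: each Fourier mode is damped by exp(-t * |omega|**alpha).
    alpha = 2 recovers ordinary (Gaussian) diffusion."""
    omega = 2 * np.pi * np.fft.fftfreq(len(signal))
    damped = np.fft.fft(signal) * np.exp(-t * np.abs(omega) ** alpha)
    return np.real(np.fft.ifft(damped))

# A ridge-like oscillation plus high-frequency "background" noise:
x = np.linspace(0, 1, 512)
rng = np.random.default_rng(1)
noisy = np.sin(2 * np.pi * 5 * x) + 0.3 * rng.standard_normal(512)

# A suite of progressively smoother versions (the audit trail in the text):
suite = [levy_smooth(noisy, t) for t in (0.0, 2.0, 8.0)]
print([round(float(np.std(s)), 3) for s in suite])
```

    Storing the whole suite, as the abstract suggests, lets a user watch fine detail fade with increasing t and stop before ridge information is lost.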

  1. Tools and techniques for computational reproducibility.

    PubMed

    Piccolo, Stephen R; Frampton, Michael B

    2016-07-11

    When reporting research findings, scientists document the steps they followed so that others can verify and build upon the research. When those steps have been described in sufficient detail that others can retrace the steps and obtain similar results, the research is said to be reproducible. Computers play a vital role in many research disciplines and present both opportunities and challenges for reproducibility. Computers can be programmed to execute analysis tasks, and those programs can be repeated and shared with others. The deterministic nature of most computer programs means that the same analysis tasks, applied to the same data, will often produce the same outputs. However, in practice, computational findings often cannot be reproduced because of complexities in how software is packaged, installed, and executed, and because of limitations associated with how scientists document analysis steps. Many tools and techniques are available to help overcome these challenges; here we describe seven such strategies. With a broad scientific audience in mind, we describe the strengths and limitations of each approach, as well as the circumstances under which each might be applied. No single strategy is sufficient for every scenario; thus we emphasize that it is often useful to combine approaches.

  2. Accuracy and Injection Force of the Gla-300 Injection Device Compared With Other Commercialized Disposable Insulin Pens

    PubMed Central

    Klonoff, David; Nayberg, Irina; Thonius, Marissa; See, Florian; Abdel-Tawab, Mona; Erbstein, Frank; Haak, Thomas

    2015-01-01

    Background: To deliver insulin glargine 300 U/mL (Gla-300), the widely used SoloSTAR® pen has been modified to allow for accurate and precise delivery of required insulin units in one-third of the volume compared with insulin glargine 100 U/mL, while improving usability. Here we compare the accuracy and injection force of 3 disposable insulin pens: Gla-300 SoloSTAR®, FlexPen®, and KwikPen™. Methods: For the accuracy assessment, 60 of each of the 3 tested devices were used for the delivery of 3 different doses (1 U, half-maximal dose, and maximal dose), which were measured gravimetrically. For the injection force assessment, 20 pens of each of the 3 types were tested twice at half-maximal and once at maximal dose, at an injection speed of 6 U/s. Results: All tested pens met the International Organization for Standardization (ISO) requirements for dosing accuracy, with Gla-300 SoloSTAR showing the lowest between-dose variation (greatest reproducibility) at all dose levels. Mean injection force was significantly lower for Gla-300 SoloSTAR than for the other 2 pens at both half maximal and maximal doses (P < .0271). Conclusion: All tested pens were accurate according to ISO criteria, and the Gla-300 SoloSTAR pen displayed the greatest reproducibility and lowest injection force of any of the 3 tested devices. PMID:26311720
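
    Gravimetric dose-accuracy data of this kind reduce to a mean delivered dose, a between-dose coefficient of variation, and a check against tolerance limits. A sketch with invented measurements; the ±0.5 μL tolerance below is purely illustrative, not the actual ISO limit (ISO 11608-1 tolerances depend on the dose and device class):

```python
import numpy as np

# Hypothetical gravimetric check of a 1 U dose of a 300 U/mL insulin:
# nominal volume 1/0.300 = 3.33 uL (one-third of the U100 volume).
nominal_ul = 3.33
doses_ul = np.array([3.30, 3.36, 3.31, 3.35, 3.33, 3.34])  # measured volumes

mean = doses_ul.mean()
cv_pct = 100 * doses_ul.std(ddof=1) / mean        # between-dose variation
within_spec = np.all(np.abs(doses_ul - nominal_ul) <= 0.5)  # illustrative bound
print(f"mean = {mean:.3f} uL, CV = {cv_pct:.2f}%, within spec: {within_spec}")
```

    The between-dose CV computed this way is the reproducibility figure on which the abstract compares the three pens.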

  3. Precision performance lamp technology

    NASA Astrophysics Data System (ADS)

    Bell, Dean A.; Kiesa, James E.; Dean, Raymond A.

    1997-09-01

    A principal function of a lamp is to produce light output with designated spectra, intensity, and/or geometric radiation patterns. The function of a precision performance lamp is to go beyond these parameters to precisely repeatable performance. Not all lamps are equal. There are a variety of incandescent lamps, from the vacuum incandescent indicator lamp to the precision lamp of a blood analyzer. In the past, the definition of a precision lamp was described in terms of wattage, light center length (LCL), filament position, and/or spot alignment. This paper presents a new view of precision lamps through the discussion of a new segment of lamp design, which we term precision performance lamps. The definition of precision performance lamps includes (must include) the factors of a precision lamp. But what makes a precision lamp a precision performance lamp is the manner in which the design factors of amperage, mscp (mean spherical candlepower), efficacy (lumens/watt), and life are considered: not individually, but collectively. There is a statistical bias in a precision performance lamp for each of these factors, taken individually and as a whole. When properly considered, the results can be dramatic for the system design engineer, the system production manager, and the system end-user. It can be shown that for the lamp user, the use of precision performance lamps can translate to: (1) ease of system design, (2) simplification of electronics, (3) superior signal-to-noise ratios, (4) higher manufacturing yields, (5) lower system costs, (6) better product performance. The factors mentioned above are described along with their interdependent relationships. It is statistically shown how the benefits listed above are achievable. Examples are provided to illustrate how proper attention to precision performance lamp characteristics aids in system product design and manufacturing, to build and market more market-acceptable products…

  4. Precision optical metrology without lasers

    NASA Astrophysics Data System (ADS)

    Bergmann, Ralf B.; Burke, Jan; Falldorf, Claas

    2015-07-01

    Optical metrology is a key technique when it comes to precise and fast measurement with a resolution down to the micrometer or even nanometer regime. The choice of a particular optical metrology technique and the quality of results depend on sample parameters such as size, geometry and surface roughness as well as user requirements such as resolution, measurement time and robustness. Interferometry-based techniques are well known for their low measurement uncertainty in the nm range, but usually require careful isolation against vibration and a laser source that often needs shielding for reasons of eye-safety. In this paper, we concentrate on high precision optical metrology without lasers by using the gradient based measurement technique of deflectometry and the finite difference based technique of shear interferometry. Careful calibration of deflectometry systems allows one to investigate virtually all kinds of reflecting surfaces including aspheres or free-form surfaces with measurement uncertainties below the μm level. Computational Shear Interferometry (CoSI) allows us to combine interferometric accuracy and the possibility to use cheap and eye-safe low-brilliance light sources such as e.g. fiber coupled LEDs or even liquid crystal displays. We use CoSI e.g. for quantitative phase contrast imaging in microscopy. We highlight the advantages of both methods, discuss their transfer functions and present results on the precision of both techniques.

  5. Stability and accuracy of the sweep rate measurements for LLNL optical streak cameras

    SciTech Connect

    Montgomery, D.S.

    1989-08-04

    Precise pulse shaping is vital for present and future high-power lasers that will attempt to achieve low-entropy laser-fusion implosions. Multichannel, streak-camera-based systems are used to make such measurements. Such systems must be accurately calibrated in order to correct for time-base and flat-field variations. We use an on-line calibration system in order to measure the sweep rate, and in our recent work we have evaluated the accuracy of this measurement technique. By analyzing a large number of calibrations, and the effect of noise on our measurement technique, we have concluded that the sweep rate for our streak camera systems is reproducible to at least ±1.2% and that our measurement technique contributes an additional ±0.5% uncertainty in the measurement. 18 refs., 3 figs., 1 tab.
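
    The two quoted figures combine naturally: if the ±1.2% sweep-rate reproducibility and the ±0.5% measurement-technique contribution are independent, the total uncertainty is their root sum of squares. A minimal sketch (the independence assumption is ours, not the abstract's):

```python
import math

def combine_quadrature(*uncertainties_pct):
    """Combine independent fractional uncertainties in quadrature
    (root sum of squares)."""
    return math.sqrt(sum(u * u for u in uncertainties_pct))

# Values quoted in the abstract: +/-1.2% sweep-rate reproducibility
# and +/-0.5% from the measurement technique itself.
total = combine_quadrature(1.2, 0.5)
print(f"combined sweep-rate uncertainty: +/-{total:.2f}%")  # +/-1.30%
```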

  6. [Precision and personalized medicine].

    PubMed

    Sipka, Sándor

    2016-10-01

    The author describes the concept of "personalized medicine" and the newly introduced "precision medicine". "Precision medicine" applies the terms of "phenotype", "endotype" and "biomarker" in order to characterize more precisely the various diseases. Using "biomarkers" the homogeneous type of a disease (a "phenotype") can be divided into subgroups called "endotypes" requiring different forms of treatment and financing. The good results of "precision medicine" have become especially apparent in relation with allergic and autoimmune diseases. The application of this new way of thinking is going to be necessary in Hungary, too, in the near future for participants, controllers and financing boards of healthcare. Orv. Hetil., 2016, 157(44), 1739-1741.

  7. Precision positioning device

    DOEpatents

    McInroy, John E.

    2005-01-18

    A precision positioning device is provided. The precision positioning device comprises a precision measuring/vibration isolation mechanism. A first plate is provided, with the precision measuring mechanism secured to the first plate. A second plate is secured to the first plate. A third plate is secured to the second plate, with the first plate being positioned between the second plate and the third plate. A fourth plate is secured to the third plate, with the second plate being positioned between the third plate and the fourth plate. An adjusting mechanism adjusts the positions of the first plate, the second plate, the third plate, and the fourth plate relative to each other.

  8. Precision aerial application for site-specific rice crop management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture includes different technologies that allow agricultural professionals to use information management tools to optimize agricultural production. The new technologies allow aerial applicators to improve application accuracy and efficiency, which saves time and money for...

  9. Reproducibility and Validity of a Handheld Spirometer

    PubMed Central

    Barr, R Graham; Stemple, Kimberly J.; Mesia-Vela, Sonia; Basner, Robert C.; Derk, Susan; Henneberger, Paul; Milton, Donald K; Taveras, Brenda

    2013-01-01

    Background Handheld spirometers have several advantages over desktop spirometers, but worries persist regarding their reproducibility and validity. We undertook an independent examination of an ultrasonic flow-sensing handheld spirometer. Methods Laboratory methods included reproducibility and validity testing using a waveform generator with standard American Thoracic Society (ATS) waveforms, in-line testing, calibration adaptor testing, and compression of the mouthpiece. Clinical testing involved repeated testing of 24 spirometry-naive volunteers and comparison to a volume-sensing dry rolling seal spirometer. Results The EasyOne Diagnostic spirometer exceeded standard thresholds of acceptability for ATS waveforms. In-line testing yielded valid results, with relative differences (mean ± SD) between the EasyOne and the reference spirometer of 0.03±0.23 L for the forced vital capacity (FVC) and −0.06±0.09 L for the forced expiratory volume in one second (FEV1). The calibration adaptor showed no appreciable problems, but extreme compression of the mouthpiece reduced measures. In clinical testing, coefficients of variation and limits of agreement were, respectively: 3.3% and 0.24 L for the FVC; 2.6% and 0.18 L for the FEV1; and 1.9% and 0.05 for the FEV1/FVC ratio. The EasyOne yielded lower values than the reference spirometry (FVC: −0.12 L; FEV1: −0.17 L; FEV1/FVC ratio: −0.02). Limits of agreement were within criteria for the FVC but not for the FEV1, possibly due to a training effect. Conclusion The EasyOne spirometer yielded reproducible results that were generally valid compared to laboratory-based spirometry. The use of this handheld spirometer in clinical, occupational and research settings seems justified. PMID:18364054
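
    The reported statistics follow standard definitions: the coefficient of variation is the standard deviation divided by the mean, and the Bland-Altman 95% limits of agreement are the mean paired difference ± 1.96 times its standard deviation. A sketch with hypothetical FEV1 readings (the numbers below are illustrative, not the study's data):

```python
import statistics

def coefficient_of_variation(values):
    """Coefficient of variation as a percentage: SD / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

def limits_of_agreement(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired readings."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired FEV1 values (L): handheld vs. reference spirometer
handheld = [3.10, 2.85, 4.02, 3.55, 2.60]
reference = [3.25, 2.95, 4.15, 3.70, 2.78]
bias, (low, high) = limits_of_agreement(handheld, reference)
print(f"bias = {bias:.2f} L, 95% LoA = ({low:.2f}, {high:.2f}) L")

# Hypothetical triplicate FEV1 readings for one subject
repeats = [3.10, 3.18, 3.05]
print(f"within-subject CV = {coefficient_of_variation(repeats):.1f}%")
```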

  10. Queer nuclear families? Reproducing and transgressing heteronormativity.

    PubMed

    Folgerø, Tor

    2008-01-01

    During the past decade the public debate on gay and lesbian adoptive rights has been extensive in the Norwegian media. The debate illustrates how women and men planning to raise children in homosexual family constellations challenge prevailing cultural norms and existing concepts of kinship and family. The article discusses how lesbian mothers and gay fathers understand and redefine their own family practices. An essential point in this article is the fundamental ambiguity in these families' accounts of themselves-how they simultaneously transgress and reproduce heteronormative assumptions about childhood, fatherhood, motherhood, family and kinship.

  11. Towards reproducible, scalable lateral molecular electronic devices

    SciTech Connect

    Durkan, Colm Zhang, Qian

    2014-08-25

    An approach to reproducibly fabricate molecular electronic devices is presented. Lateral nanometer-scale gaps with high yield are formed in Au/Pd nanowires by a combination of electromigration and Joule-heating-induced thermomechanical stress. The resulting nanogap devices are used to measure the electrical properties of small numbers of two different molecular species with different end-groups, namely 1,4-butane dithiol and 1,5-diamino-2-methylpentane. Fluctuations in the current reveal that in the case of the dithiol molecule devices, individual molecules conduct intermittently, with the fluctuations becoming more pronounced at larger biases.

  12. Open and reproducible global land use classification

    NASA Astrophysics Data System (ADS)

    Nüst, Daniel; Václavík, Tomáš; Pross, Benjamin

    2015-04-01

    Researchers led by the Helmholtz Centre for Environmental research (UFZ) developed a new world map of land use systems based on over 30 diverse indicators (http://geoportal.glues.geo.tu-dresden.de/stories/landsystemarchetypes.html) of land use intensity, climate and environmental and socioeconomic factors. They identified twelve land system archetypes (LSA) using a data-driven classification algorithm (self-organizing maps) to assess global impacts of land use on the environment, and found unexpected similarities across global regions. We present how the algorithm behind this analysis can be published as an executable web process using 52°North WPS4R (https://wiki.52north.org/bin/view/Geostatistics/WPS4R) within the GLUES project (http://modul-a.nachhaltiges-landmanagement.de/en/scientific-coordination-glues/). WPS4R is an open source collaboration platform for researchers, analysts and software developers to publish R scripts (http://www.r-project.org/) as a geo-enabled OGC Web Processing Service (WPS) process. The interoperable interface to call the geoprocess allows both reproducibility of the analysis and integration of user data without knowledge about web services or classification algorithms. The open platform allows everybody to replicate the analysis in their own environments. The LSA WPS process has several input parameters, which can be changed via a simple web interface. The input parameters are used to configure both the WPS environment and the LSA algorithm itself. The encapsulation as a web process allows integration of non-public datasets, while at the same time the publication requires a well-defined documentation of the analysis. We demonstrate this platform specifically to domain scientists and show how reproducibility and open source publication of analyses can be enhanced. We also discuss future extensions of the reproducible land use classification, such as the possibility for users to enter their own areas of interest to the system and

  13. Converting Human Proteins into Precision Polymer Therapeutics.

    PubMed

    Boldt, Felix; Liu, Weina; Wu, Yuzhou; Weil, Tanja

    2016-01-01

    Cells as the smallest unit of life rely on precise macromolecules and programmable supramolecular interactions to accomplish their various vital functions. Translating such strategies for precisely controlling architectures and interactions into the synthetic world represents an exciting endeavor. Polymers with distinct structures, sequences and architectures are still challenging to achieve. However, in particular for biomedical applications, reproducible synthesis, narrow dispersities, tunable functionalities and, additionally, biocompatibility of the polymeric materials are crucial. Polymers derived from protein precursors provide many advantages of proteins, such as precise monomer sequences and contour lengths, biodegradability and multiple functionalities, which can be synergistically combined with the valuable features of synthetic polymers, e.g. stability, tunable solubility and molecular weights. The resulting polymeric biohybrid materials offer many applications ranging from drug delivery to biosensing and therapeutic hydrogels. This minireview summarizes the most recent advances in this field.

  14. System and method for high precision isotope ratio destructive analysis

    DOEpatents

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  15. Precision antenna reflector structures

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.

    1985-01-01

    The assembly of the Large Precise Reflector Infrared Telescope is detailed. Also given are the specifications for the Aft Cargo Carrier and the Large Precision Reflector structure. Packaging concepts and options, stowage depth and support truss geometry are also considered. An example of a construction scenario is given.

  16. Precision Optics Curriculum.

    ERIC Educational Resources Information Center

    Reid, Robert L.; And Others

    This guide outlines the competency-based, two-year precision optics curriculum that the American Precision Optics Manufacturers Association has proposed to fill the void that it suggests will soon exist as many of the master opticians currently employed retire. The model, which closely resembles the old European apprenticeship model, calls for 300…

  17. Adaptive inverse control of neural spatiotemporal spike patterns with a reproducing kernel Hilbert space (RKHS) framework.

    PubMed

    Li, Lin; Park, Il Memming; Brockmeier, Austin; Chen, Badong; Seth, Sohan; Francis, Joseph T; Sanchez, Justin C; Príncipe, José C

    2013-07-01

    The precise control of spiking in a population of neurons via applied electrical stimulation is a challenge due to the sparseness of spiking responses and neural system plasticity. We pose neural stimulation as a system control problem where the system input is a multidimensional time-varying signal representing the stimulation, and the output is a set of spike trains; the goal is to drive the output such that the elicited population spiking activity is as close as possible to some desired activity, where closeness is defined by a cost function. If the neural system can be described by a time-invariant (homogeneous) model, then offline procedures can be used to derive the control procedure; however, for arbitrary neural systems this is not tractable. Furthermore, standard control methodologies are not suited to directly operate on spike trains that represent both the target and elicited system response. In this paper, we propose a multiple-input multiple-output (MIMO) adaptive inverse control scheme that operates on spike trains in a reproducing kernel Hilbert space (RKHS). The control scheme uses an inverse controller to approximate the inverse of the neural circuit. The proposed control system takes advantage of the precise timing of the neural events by using a Schoenberg kernel defined directly in the space of spike trains. The Schoenberg kernel maps the spike train to an RKHS and allows a linear algorithm to control the nonlinear neural system without the danger of converging to local minima. During operation, the adaptation of the controller minimizes a difference defined in the spike train RKHS between the system and the target response and keeps the inverse controller close to the inverse of the current neural circuit, which enables adapting to neural perturbations. The results on a realistic synthetic neural circuit show that the inverse controller based on the Schoenberg kernel outperforms the decoding accuracy of other models based on the conventional rate

  18. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue are related directly to the dramatic escalation in the developmen...

  19. Landsat wildland mapping accuracy

    USGS Publications Warehouse

    Todd, William J.; Gehring, Dale G.; Haman, J. F.

    1980-01-01

    A Landsat-aided classification of ten wildland resource classes was developed for the Shivwits Plateau region of the Lake Mead National Recreation Area. Single stage cluster sampling (without replacement) was used to verify the accuracy of each class.

  20. Methods to increase reproducibility in differential gene expression via meta-analysis

    PubMed Central

    Sweeney, Timothy E.; Haynes, Winston A.; Vallania, Francesco; Ioannidis, John P.; Khatri, Purvesh

    2017-01-01

    Findings from clinical and biological studies are often not reproducible when tested in independent cohorts. Due to the testing of a large number of hypotheses and relatively small sample sizes, results from whole-genome expression studies in particular are often not reproducible. Compared to single-study analysis, gene expression meta-analysis can improve reproducibility by integrating data from multiple studies. However, there are multiple choices in designing and carrying out a meta-analysis. Yet, clear guidelines on best practices are scarce. Here, we hypothesized that studying subsets of very large meta-analyses would allow for systematic identification of best practices to improve reproducibility. We therefore constructed three very large gene expression meta-analyses from clinical samples, and then examined meta-analyses of subsets of the datasets (all combinations of datasets with up to N/2 samples and K/2 datasets) compared to a ‘silver standard’ of differentially expressed genes found in the entire cohort. We tested three random-effects meta-analysis models using this procedure. We showed relatively greater reproducibility with more-stringent effect size thresholds combined with relaxed significance thresholds; relatively lower reproducibility when imposing extraneous constraints on residual heterogeneity; and an underestimation of the actual false positive rate by Benjamini–Hochberg correction. In addition, multivariate regression showed that the accuracy of a meta-analysis increased significantly with more included datasets, even when controlling for sample size. PMID:27634930
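
    The random-effects pooling at the heart of such a meta-analysis can be illustrated with the DerSimonian-Laird estimator, one common random-effects model; the abstract tests three such models but does not name them, so this choice, and the per-gene effect sizes below, are illustrative only. A minimal sketch:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects
    model. Returns (pooled effect, standard error, tau^2 heterogeneity)."""
    w = [1.0 / v for v in variances]                              # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))   # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                 # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]                  # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical per-study log fold-changes for one gene and their variances
effects = [0.8, 0.5, 1.1, 0.4]
variances = [0.04, 0.09, 0.05, 0.12]
pooled, se, tau2 = dersimonian_laird(effects, variances)
print(f"pooled effect = {pooled:.2f} +/- {se:.2f} (tau^2 = {tau2:.3f})")
```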

  1. PSYCHOLOGY. Estimating the reproducibility of psychological science.

    PubMed

    2015-08-28

    Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.

  2. Reproducibility and reusability of scientific software

    NASA Astrophysics Data System (ADS)

    Shamir, Lior

    2017-01-01

    Information science and technology has become an integral part of astronomy research, and due to the consistent growth in the size and impact of astronomical databases, that trend is bound to continue. While software is a vital part of information systems and data analysis processes, in many cases the importance of software and the standards for reporting on the use of source code have not yet been elevated in the scientific communication process to the same level as other parts of the research. The purpose of the discussion is to examine the role of software in the scientific communication process in the light of transparency, reproducibility, and reusability of the research, as well as to discuss software in astronomy in comparison to other disciplines.

  3. Is Isolated Nocturnal Hypertension A Reproducible Phenotype?

    PubMed Central

    Goldsmith, Jeff; Muntner, Paul; Diaz, Keith M.; Reynolds, Kristi; Schwartz, Joseph E.; Shimbo, Daichi

    2016-01-01

    BACKGROUND Isolated nocturnal hypertension (INH), defined as nocturnal without daytime hypertension on ambulatory blood pressure (BP) monitoring (ABPM), has been observed to be associated with an increased risk of cardiovascular disease (CVD) events and mortality. The aim of this study was to determine the short-term reproducibility of INH. METHODS The Improving the Detection of Hypertension Study enrolled a community-based sample of adults (N = 282) in upper Manhattan without CVD, renal failure, or treated hypertension. Each participant completed two 24-hour ABPM recordings (ABPM1: first recording and ABPM2: second recording) with a mean ± SD time interval of 33±17 days between recordings. Daytime hypertension was defined as mean awake systolic/diastolic BP ≥ 135/85mm Hg; nocturnal hypertension as mean sleep systolic/diastolic BP ≥ 120/70mm Hg; INH as nocturnal without daytime hypertension; isolated daytime hypertension (IDH) as daytime without nocturnal hypertension; day and night hypertension (DNH) as daytime and nocturnal hypertension, and any ambulatory hypertension as having daytime and/or nocturnal hypertension. RESULTS On ABPM1, 26 (9.2%), 21 (7.4%), and 50 (17.7%) participants had INH, IDH, and DNH, respectively. On ABPM2, 24 (8.5%), 19 (6.7%), and 54 (19.1%) had INH, IDH, and DNH, respectively. The kappa statistics were 0.21 (95% confidence interval (CI) 0.04–0.38), 0.25 (95% CI 0.06–0.44), and 0.65 (95% CI 0.53–0.77) for INH, IDH, and DNH respectively; and 0.72 (95% CI 0.63–0.81) for having any ambulatory hypertension. CONCLUSIONS Our results suggest that INH and IDH are poorly reproducible phenotypes, and that ABPM should be primarily used to identify individuals with daytime hypertension and/or nocturnal hypertension. PMID:25904648
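
    The kappa statistics quoted above measure chance-corrected agreement between the two ABPM recordings. A minimal sketch of Cohen's kappa for paired binary classifications (the example classifications below are hypothetical, not the study's data):

```python
def cohens_kappa(x, y):
    """Cohen's kappa for two paired categorical classifications:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(x)
    cats = set(x) | set(y)
    p_obs = sum(a == b for a, b in zip(x, y)) / n
    p_exp = sum((x.count(c) / n) * (y.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical INH classifications (1 = present) on two ABPM recordings
abpm1 = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
abpm2 = [1, 0, 0, 0, 0, 1, 0, 1, 0, 0]
print(f"kappa = {cohens_kappa(abpm1, abpm2):.2f}")  # 0.52
```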

  4. Is Grannum grading of the placenta reproducible?

    NASA Astrophysics Data System (ADS)

    Moran, Mary; Ryan, John; Brennan, Patrick C.; Higgins, Mary; McAuliffe, Fionnuala M.

    2009-02-01

    Current ultrasound assessment of placental calcification relies on Grannum grading. The aim of this study was to assess if this method is reproducible by measuring inter- and intra-observer variation in grading placental images, under strictly controlled viewing conditions. Thirty placental images were acquired and digitally saved. Five experienced sonographers independently graded the images on two separate occasions. In order to eliminate any technological factors which could affect data reliability and consistency all observers reviewed images at the same time. To optimise viewing conditions ambient lighting was maintained between 25-40 lux, with monitors calibrated to the GSDF standard to ensure consistent brightness and contrast. Kappa (κ) analysis of the grades assigned was used to measure inter- and intra-observer reliability. Intra-observer agreement had a moderate mean κ-value of 0.55, with individual comparisons ranging from 0.30 to 0.86. Two images saved from the same patient, during the same scan, were each graded as I, II and III by the same observer. A mean κ-value of 0.30 (range from 0.13 to 0.55) indicated fair inter-observer agreement over the two occasions and only one image was graded consistently the same by all five observers. The study findings confirmed the lack of reproducibility associated with Grannum grading of the placenta despite optimal viewing conditions and highlight the need for new methods of assessing placental health in order to improve neonatal outcomes. Alternative methods for quantifying placental calcification such as a software based technique and 3D ultrasound assessment need to be explored.

  5. Datathons and Software to Promote Reproducible Research

    PubMed Central

    2016-01-01

    Background Datathons facilitate collaboration between clinicians, statisticians, and data scientists in order to answer important clinical questions. Previous datathons have resulted in numerous publications of interest to the critical care community and serve as a viable model for interdisciplinary collaboration. Objective We report on an open-source software called Chatto that was created by members of our group, in the context of the second international Critical Care Datathon, held in September 2015. Methods Datathon participants formed teams to discuss potential research questions and the methods required to address them. They were provided with the Chatto suite of tools to facilitate their teamwork. Each multidisciplinary team spent the next 2 days with clinicians working alongside data scientists to write code, extract and analyze data, and reformulate their queries in real time as needed. All projects were then presented on the last day of the datathon to a panel of judges that consisted of clinicians and scientists. Results Use of Chatto was particularly effective in the datathon setting, enabling teams to reduce the time spent configuring their research environments to just a few minutes—a process that would normally take hours to days. Chatto continued to serve as a useful research tool after the conclusion of the datathon. Conclusions This suite of tools fulfills two purposes: (1) facilitation of interdisciplinary teamwork through archiving and version control of datasets, analytical code, and team discussions, and (2) advancement of research reproducibility by functioning postpublication as an online environment in which independent investigators can rerun or modify analyses with relative ease. With the introduction of Chatto, we hope to solve a variety of challenges presented by collaborative data mining projects while improving research reproducibility. PMID:27558834

  6. System for precise position registration

    DOEpatents

    Sundelin, Ronald M.; Wang, Tong

    2005-11-22

    An apparatus enabling accurate retention of a precise position, such as for reacquisition of a microscopic spot or feature having a size of 0.1 mm or less, on broad-area surfaces after non-in situ processing. The apparatus includes a sample and sample holder. The sample holder includes a base and three support posts. Two of the support posts interact with a cylindrical hole and a U-groove in the sample to establish the location of one point on the sample and a line through the sample. Simultaneous contact of the third support post with the surface of the sample defines a plane through the sample. All points of the sample are therefore uniquely defined by the sample and sample holder. The position registration system of the current invention provides accuracy, as measured in x, y repeatability, of at least 140 μm.

  7. A 3-D Multilateration: A Precision Geodetic Measurement System

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Fliegel, H. F.; Jaffe, R. M.; Muller, P. M.; Ong, K. M.; Vonroos, O. H.

    1972-01-01

    A system was designed with the capability of determining 1-cm accuracy station positions in three dimensions using pulsed laser earth satellite tracking stations coupled with strictly geometric data reduction. With this high accuracy, several crucial geodetic applications become possible, including earthquake hazards assessment, precision surveying, plate tectonics, and orbital determination.

  8. Development of a facility for high-precision irradiation of cells with carbon ions

    SciTech Connect

    Goethem, Marc-Jan van; Niemantsverdriet, Maarten; Brandenburg, Sytze; Langendijk, Johannes A.; Coppes, Robert P.; Luijk, Peter van

    2011-01-15

    the irradiation of cell samples with the specified accuracy. Measurements of the transverse and longitudinal dose distribution showed that the dose variation over the sample volume was ±0.8% and ±0.7% in the lateral and longitudinal directions, respectively. The track-averaged LET of 132±10 keV/μm and dose-averaged LET of 189±15 keV/μm at the position of the sample were obtained from a GEANT4 simulation, which was validated experimentally. Three separately measured cell-survival curves yielded nearly identical results. Conclusions: With the new facility, high-precision carbon-ion irradiations of biological samples can be performed with highly reproducible results.
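
    The two LET figures quoted above differ by definition: track-averaged LET weights the LET spectrum by particle fluence, while dose-averaged LET weights each LET value by the dose it deposits (proportional to fluence times LET), so it is always at least as large. A sketch with a hypothetical discretized spectrum:

```python
def track_averaged_let(fluence_fractions, lets):
    """Track-averaged LET: fluence-weighted mean of the LET spectrum."""
    return sum(f * l for f, l in zip(fluence_fractions, lets))

def dose_averaged_let(fluence_fractions, lets):
    """Dose-averaged LET: weights each LET value by the dose it
    deposits (proportional to f * L)."""
    num = sum(f * l * l for f, l in zip(fluence_fractions, lets))
    den = sum(f * l for f, l in zip(fluence_fractions, lets))
    return num / den

# Hypothetical discretized LET spectrum (fractions sum to 1, LET in keV/um)
f = [0.2, 0.5, 0.3]
let = [80.0, 130.0, 180.0]
print(f"track-averaged: {track_averaged_let(f, let):.1f} keV/um")
print(f"dose-averaged:  {dose_averaged_let(f, let):.1f} keV/um")
```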

  9. Accuracy of analyses of microelectronics nanostructures in atom probe tomography

    NASA Astrophysics Data System (ADS)

    Vurpillot, F.; Rolland, N.; Estivill, R.; Duguay, S.; Blavette, D.

    2016-07-01

    The routine use of atom probe tomography (APT) as a nano-analysis microscope in the semiconductor industry requires the precise evaluation of the metrological parameters of this instrument (spatial accuracy, spatial precision, composition accuracy or composition precision). The spatial accuracy of this microscope is evaluated in this paper in the analysis of planar structures such as high-k metal gate stacks. It is shown both experimentally and theoretically that the in-depth accuracy of reconstructed APT images is perturbed when analyzing this structure, which is composed of an oxide layer of high electrical permittivity (high-k dielectric constant) that separates the metal gate and the semiconductor channel of a field-effect transistor. Large differences in the evaporation field between these layers (resulting from large differences in material properties) are the main sources of image distortions. An analytic model is used to interpret inaccuracy in the depth reconstruction of these devices in APT.

  10. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    A precision liquid level sensor utilizes a balanced bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line unbalance the bridge and create a voltage which is directly measurable across the bridge.
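
    The measurement principle can be sketched with a generic four-arm (Wheatstone-type) bridge relation; this illustrates balanced-bridge behavior in general, not the patent's specific circuit, and the impedance values below are hypothetical:

```python
def bridge_output(v_in, z1, z2, z3, z4):
    """Output of a four-arm bridge; zero when z1/z2 == z3/z4 (balanced)."""
    return v_in * (z2 / (z1 + z2) - z4 / (z3 + z4))

# Balanced: all arms equal, so no output voltage
print(bridge_output(5.0, 100.0, 100.0, 100.0, 100.0))  # 0.0
# A liquid-level change shifts one arm's impedance: measurable imbalance
print(round(bridge_output(5.0, 100.0, 100.0, 100.0, 110.0), 3))  # -0.119
```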

  11. Precision Measurement in Biology

    NASA Astrophysics Data System (ADS)

    Quake, Stephen

    Is biology a quantitative science like physics? I will discuss the role of precision measurement in both physics and biology, and argue that in fact both fields can be tied together by the use and consequences of precision measurement. The elementary quanta of biology are twofold: the macromolecule and the cell. Cells are the fundamental unit of life, and macromolecules are the fundamental elements of the cell. I will describe how precision measurements have been used to explore the basic properties of these quanta, and more generally how the quest for higher precision almost inevitably leads to the development of new technologies, which in turn catalyze further scientific discovery. In the 21st century, there are no remaining experimental barriers to biology becoming a truly quantitative and mathematical science.

  12. Test of CCD Precision Limits for Differential Photometry

    NASA Technical Reports Server (NTRS)

    Borucki, W. J.; Dunham, E. W.; Wei, M. Z.; Robinson, L. B.; Ford, C. H.; Granados, A. F.

    1995-01-01

    Results of tests to demonstrate the very high differential-photometric stability of CCD light sensors are presented. The measurements reported here demonstrate that in a controlled laboratory environment, a front-illuminated CCD can provide differential-photometric measurements with reproducible precision approaching one part in 10(exp 5). Practical limitations to the precision of differential-photometric measurements with CCDs and implications for spaceborne applications are discussed.

  13. Test of CCD Precision Limits for Differential Photometry

    NASA Technical Reports Server (NTRS)

    Robinson, L. B.; Wei, M. Z.; Borucki, W. J.; Dunham, E. W.; Ford, C. H.; Granados, A. F.

    1995-01-01

    Results of tests to demonstrate the very high differential-photometric stability of CCD light sensors are presented. The measurements reported here demonstrate that in a controlled laboratory environment, a front-illuminated CCD can provide differential-photometric measurements with reproducible precision approaching one part in 10(exp 5). Practical limitations to the precision of differential-photometric measurements with CCDs and implications for spaceborne applications are discussed.
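
    A reproducible precision of one part in 10(exp 5) implies, at the photon shot-noise limit, on the order of 10(exp 10) detected photoelectrons per measurement, since fractional precision scales as 1/sqrt(N). A minimal sketch (the assumption that photon shot noise dominates is ours):

```python
def photons_for_precision(fractional_precision):
    """Detected photoelectrons needed for a given shot-noise-limited
    fractional precision, from sigma/N = sqrt(N)/N = 1/sqrt(N)."""
    return (1.0 / fractional_precision) ** 2

# One part in 10^5 requires on the order of 10^10 detected photoelectrons
print(f"{photons_for_precision(1e-5):.0e}")
```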

  14. Precision Environmental Radiation Monitoring System

    SciTech Connect

    Vladimir Popov, Pavel Degtiarenko

    2010-07-01

    A new precision low-level environmental radiation monitoring system has been developed and tested at Jefferson Lab. This system provides environmental radiation measurements with accuracy and stability of the order of 1 nGy/h in an hour, roughly corresponding to approximately 1% of the natural cosmic background at sea level. An advanced electronic front-end has been designed and produced for use with the industry-standard High Pressure Ionization Chamber detector hardware. A new highly sensitive readout electronic circuit was designed to measure charge from the virtually suspended ionization chamber ion-collecting electrode. A new signal processing technique and dedicated data acquisition were tested together with the new readout. The designed system enabled data collection in a remote Linux-operated computer workstation, which was connected to the detectors using a standard telephone cable line. The data acquisition system algorithm is built around a continuously running 24-bit resolution, 192 kHz sampling analog-to-digital converter. The major features of the design include: extremely low leakage current in the input circuit, true charge-integrating mode operation, and relatively fast response to intermediate radiation change. These features allow operation of the device as an environmental radiation monitor at the perimeters of radiation-generating installations in densely populated areas, as well as in other monitoring and security applications requiring high precision and long-term stability. Initial system evaluation results are presented.

  15. Precision displacement reference system

    DOEpatents

    Bieg, Lothar F.; Dubois, Robert R.; Strother, Jerry D.

    2000-02-22

    A precision displacement reference system is described, which enables real-time accountability of the displacement feedback systems applied to precision machine tools, positioning mechanisms, motion devices, and related operations. As independent measurements of tool location are taken by a displacement feedback system, a rotating reference disk compares feedback counts with the performed motion. These measurements are compared to characterize and analyze real-time mechanical and control performance during operation.

  16. Precision medicine in cardiology.

    PubMed

    Antman, Elliott M; Loscalzo, Joseph

    2016-10-01

    The cardiovascular research and clinical communities are ideally positioned to address the epidemic of noncommunicable causes of death, as well as advance our understanding of human health and disease, through the development and implementation of precision medicine. New tools will be needed for describing the cardiovascular health status of individuals and populations, including 'omic' data, exposome and social determinants of health, the microbiome, behaviours and motivations, patient-generated data, and the array of data in electronic medical records. Cardiovascular specialists can build on their experience and use precision medicine to facilitate discovery science and improve the efficiency of clinical research, with the goal of providing more precise information to improve the health of individuals and populations. Overcoming the barriers to implementing precision medicine will require addressing a range of technical and sociopolitical issues. Health care under precision medicine will become a more integrated, dynamic system, in which patients are no longer a passive entity on whom measurements are made, but instead are central stakeholders who contribute data and participate actively in shared decision-making. Many traditionally defined diseases have common mechanisms; therefore, elimination of a siloed approach to medicine will ultimately pave the path to the creation of a universal precision medicine environment.

  17. Modeling the spectrum of the 2ν2 and ν4 states of ammonia to experimental accuracy

    NASA Astrophysics Data System (ADS)

    Pearson, John C.; Yu, Shanshan; Pirali, Olivier

    2016-09-01

    The vibrational spectrum of ammonia has received an enormous amount of attention due to its potential prevalence in hot exo-planet atmospheres and persistent challenges in assigning and modeling highly excited and often highly perturbed states. Effective Hamiltonian models face challenges due to strong coupling between the large amplitude inversion and the other small amplitude vibrations. To date, only the ground and ν2 positions could be modeled to experimental accuracy using effective Hamiltonians. Several previous attempts to analyze the 2ν2 and ν4 energy levels failed to model both the microwave and infrared transitions to experimental accuracy. In this work, we performed extensive experimental measurements and data analysis for the 2ν2 and ν4 inversion-rotation and vibrational transitions. We measured 159 new transition frequencies with microwave precision and assigned 1680 new ones from existing Fourier transform spectra recorded in Synchrotron SOLEIL. The newly assigned data significantly expand the range of assigned quantum numbers; combined with all the previously published high-resolution data, the 2ν2 and ν4 states are reproduced to experimental accuracy using a global model described here. Achieving experimental accuracy required inclusion of a number of terms in the effective Hamiltonian that were neglected in previous work. These terms have also been neglected in the analysis of states higher than 2ν2 and ν4 suggesting that the inversion-rotation-vibration spectrum of ammonia may be far more tractable to effective Hamiltonians than previously believed.

  18. Numerical accuracy assessment

    NASA Astrophysics Data System (ADS)

    Boerstoel, J. W.

    1988-12-01

    A framework is provided for numerical accuracy assessment. The purpose of numerical flow simulations is formulated. This formulation concerns the classes of aeronautical configurations (boundaries), the desired flow physics (flow equations and their properties), the classes of flow conditions on flow boundaries (boundary conditions), and the initial flow conditions. Next, accuracy and economical performance requirements are defined; the final numerical flow simulation results of interest should have a guaranteed accuracy, and be produced for an acceptable FLOP-price. Within this context, the validation of numerical processes with respect to the well-known topics of consistency, stability, and convergence under mesh refinement must be done by numerical experimentation, because theory gives only partial answers. This requires careful design of test cases for numerical experimentation. Finally, the results of a few recent evaluation exercises, in which numerical experiments were run with a large number of codes on a few test cases, are summarized.
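    As a small illustration of the mesh-refinement convergence checks described above (a sketch, not taken from the paper; the error values are hypothetical), the observed order of convergence of a scheme can be estimated from errors measured on two successively refined grids:

```python
import math

def observed_order(err_coarse: float, err_fine: float, refinement: float = 2.0) -> float:
    """Estimate the observed order of convergence from errors on two grids
    whose spacing differs by the given refinement ratio."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

# Hypothetical errors from a second-order scheme on grids of spacing h, h/2, h/4:
errors = [1.0e-2, 2.5e-3, 6.25e-4]
orders = [observed_order(errors[i], errors[i + 1]) for i in range(len(errors) - 1)]
print([round(p, 3) for p in orders])  # → [2.0, 2.0]
```

    An observed order that falls short of the scheme's theoretical order is one of the numerical-experimentation signals the text refers to.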

  19. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning with 24 h of data is preferred for high-precision results; however, real-life situations do not always allow 24 h of data to be collected. Thus repeated GPS surveys with 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and both the positioning quality and the estimated velocities are degraded. Researchers therefore pay attention to the accuracy of GPS positions and velocities derived from short observation sessions. Recently, some research groups have turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now, mostly regional studies have been reported. In this study, we adopt a global approach and study various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of NASA/JPL's GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Previous accuracy studies used 10-30 consecutive days of continuous data; here, data from each month of the year, over two years in succession, are used in the analysis. Our major conclusion is that a reformulation of GPS positioning accuracy is necessary when seasonal effects are taken into account: the typical one-term accuracy formulation is expanded into a two-term one.

  20. Reproducibility of the sella turcica landmark in three dimensions using a sella turcica-specific reference system

    PubMed Central

    Jacobs, Reinhilde; Odri, Guillaume A.; Vasconcelos, Karla de Faria; Willems, Guy; Olszewski, Raphaël

    2015-01-01

    Purpose This study was performed to assess the reproducibility of identifying the sella turcica landmark in a three-dimensional (3D) model by using a new sella-specific landmark reference system. Materials and Methods Thirty-two cone-beam computed tomographic scans (3D Accuitomo® 170, J. Morita, Kyoto, Japan) were retrospectively collected. The 3D data were exported into the Digital Imaging and Communications in Medicine standard and then imported into the Maxilim® software (Medicim NV, Sint-Niklaas, Belgium) to create 3D surface models. Five observers identified four osseous landmarks in order to create the reference frame and then identified two sella landmarks. The x, y, and z coordinates of each landmark were exported. The observations were repeated after four weeks. Statistical analysis was performed using the multiple paired t-test with Bonferroni correction (intraobserver precision: p<0.005, interobserver precision: p<0.0011). Results The intraobserver mean precision of all landmarks was <1 mm. Significant differences were found when comparing the intraobserver precision of each observer (p<0.005). For the sella landmarks, the intraobserver mean precision ranged from 0.43±0.34 mm to 0.51±0.46 mm. The intraobserver reproducibility was generally good. The overall interobserver mean precision was <1 mm. Significant differences between each pair of observers for all anatomical landmarks were found (p<0.0011). The interobserver reproducibility of sella landmarks was good, with >50% precision in locating the landmark within 1 mm. Conclusion A newly developed reference system offers high precision and reproducibility for sella turcica identification in a 3D model without being based on two-dimensional images derived from 3D data. PMID:25793179
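    To illustrate how precision figures like those above are typically obtained (a sketch, not the authors' analysis; the coordinates are hypothetical), the precision for one landmark is the Euclidean distance between repeated identifications of the same point:

```python
import math

def landmark_precision(p1, p2):
    """Euclidean distance (mm) between two identifications of the same 3D landmark."""
    return math.dist(p1, p2)

# Hypothetical sella landmark coordinates (mm) from two sessions by one observer:
session1 = (12.4, -3.1, 45.0)
session2 = (12.1, -2.9, 45.4)
print(round(landmark_precision(session1, session2), 2))  # → 0.54
```

    Averaging such distances over all landmarks and repetitions gives the mean precision values (e.g. 0.43±0.34 mm) reported in the abstract.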

  1. REPRODUCIBLE AND SHAREABLE QUANTIFICATIONS OF PATHOGENICITY

    PubMed Central

    Manrai, Arjun K; Wang, Brice L; Patel, Chirag J; Kohane, Isaac S

    2016-01-01

    There are now hundreds of thousands of pathogenicity assertions that relate genetic variation to disease, but most of this clinically utilized variation has no accepted quantitative disease risk estimate. Recent disease-specific studies have used control sequence data to reclassify large amounts of prior pathogenic variation, but there is a critical need to scale up both the pace and feasibility of such pathogenicity reassessments across human disease. In this manuscript we develop a shareable computational framework to quantify pathogenicity assertions. We release a reproducible “digital notebook” that integrates executable code, text annotations, and mathematical expressions in a freely accessible statistical environment. We extend previous disease-specific pathogenicity assessments to over 6,000 diseases and 160,000 assertions in the ClinVar database. Investigators can use this platform to prioritize variants for reassessment and tailor genetic model parameters (such as prevalence and heterogeneity) to expose the uncertainty underlying pathogenicity-based risk assessments. Finally, we release a website that links users to pathogenic variation for a queried disease, supporting literature, and implied disease risk calculations subject to user-defined and disease-specific genetic risk models in order to facilitate variant reassessments. PMID:26776189

  2. Reproducibility and reliability of fetal cardiac time intervals using magnetocardiography.

    PubMed

    van Leeuwen, P; Lange, S; Klein, A; Geue, D; Zhang, Y; Krause, H J; Grönemeyer, D

    2004-04-01

    We investigated several factors which may affect the accuracy of fetal cardiac time intervals (CTI) determined in magnetocardiographic (MCG) recordings: observer differences, the number of available recording sites and the type of sensor used in acquisition. In 253 fetal MCG recordings, acquired using different biomagnetometer devices between the 15th and 42nd weeks of gestation, P-wave, QRS complex and T-wave onsets and ends were identified in signal averaged data sets independently by different observers. Using a defined procedure for setting signal events, interobserver reliability was high. Increasing the number of registration sites led to more accurate identification of the events. The differences in wave morphology between magnetometer and gradiometer configurations led to deviations in timing whereas the differences between low and high temperature devices seemed to be primarily due to noise. Signal-to-noise ratio played an important overall role in the accurate determination of CTI and changes in signal amplitude associated with fetal maturation may largely explain the effects of gestational age on reproducibility. As fetal CTI may be of value in the identification of pathologies such as intrauterine growth retardation or fetal cardiac hypertrophy, their reliable estimation will be enhanced by strategies which take these factors into account.
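    The role of signal-to-noise ratio noted above can be illustrated with a toy simulation (not from the study; all values are assumptions): averaging n repeated recordings reduces the residual noise roughly as 1/sqrt(n), which is why signal-averaged data sets give more accurate event timing:

```python
import random
import statistics

random.seed(0)
NOISE_SD = 0.5  # assumed per-recording noise standard deviation

def averaged_noise_sd(n_averages: int, trials: int = 2000) -> float:
    """Residual noise level after averaging n recordings (falls as ~1/sqrt(n))."""
    means = [statistics.fmean(random.gauss(0.0, NOISE_SD) for _ in range(n_averages))
             for _ in range(trials)]
    return statistics.stdev(means)

for n in (1, 4, 16):
    print(n, round(averaged_noise_sd(n), 2))
```

    Each fourfold increase in the number of averaged recordings roughly halves the residual noise.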

  3. Reproducibility of measurements of trace gas concentrations in expired air.

    PubMed

    Strocchi, A; Ellis, C; Levitt, M D

    1991-07-01

    Measurement of the pulmonary excretion of trace gases has been used as a simple means of assessing metabolic reactions. End alveolar trace gas concentration, rather than excretory rate, is usually measured. However, the reproducibility of this measurement has received little attention. In 17 healthy subjects, duplicate collections of alveolar air were obtained within 1 minute of each other using a commercially available alveolar air sampler. The concentrations of hydrogen, methane, carbon monoxide, and carbon dioxide were measured. When the subject received no instruction on how to expire into the device, a difference of 28% +/- 19% (1SD) was found between duplicate determinations of hydrogen. Instructing the subjects to avoid hyperventilation or to inspire maximally and exhale immediately resulted in only minor reduction in variability. However, a maximal inspiration held for 15 seconds before exhalation reduced the difference to a mean of 9.6% +/- 8.0%, less than half that observed with the other expiratory techniques. Percentage difference of methane measurements with the four different expiratory techniques yielded results comparable to those obtained for hydrogen. In contrast, percentage differences for carbon monoxide measurements were similar for all expiratory techniques. When normalized to a PCO2 of 5%, the variability of hydrogen measurements with the breath-holding technique was reduced to 6.8% +/- 4.7%, a value significantly lower than that obtained with the other expiratory methods. This study suggests that attention to the expiratory technique could improve the accuracy of tests using breath hydrogen measurements.
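    A minimal sketch of the two computations described above (the readings are hypothetical, not the study's data): the percentage difference between duplicate measurements, and the normalization of a breath gas concentration to a reference PCO2 of 5%:

```python
def percent_difference(a: float, b: float) -> float:
    """Percentage difference between duplicate measurements, relative to their mean."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

def normalize_to_pco2(gas_ppm: float, pco2_percent: float, reference: float = 5.0) -> float:
    """Scale a breath gas concentration to a reference alveolar PCO2 of 5%."""
    return gas_ppm * reference / pco2_percent

# Hypothetical duplicate H2 readings (ppm) with their measured PCO2 values (%):
h2 = (20.0, 26.0)
pco2 = (4.0, 5.2)
raw = percent_difference(*h2)
norm = percent_difference(normalize_to_pco2(h2[0], pco2[0]),
                          normalize_to_pco2(h2[1], pco2[1]))
print(round(raw, 1), round(norm, 1))  # → 26.1 0.0
```

    In this constructed example the duplicates differ only because of differing ventilation, so CO2 normalization removes the discrepancy entirely; in practice it reduced the variability from 9.6% to 6.8%.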

  4. The Paradox of Abstraction: Precision Versus Concreteness.

    PubMed

    Iliev, Rumen; Axelrod, Robert

    2016-11-22

    We introduce a novel measure of abstractness based on the amount of information of a concept, computed from its position in a semantic taxonomy. We refer to this measure as precision. We propose two alternative ways to measure precision: one based on the path length from a concept to the root of the taxonomic tree, and another based on the number of direct and indirect descendants. Since more information implies a greater processing load, we hypothesize that nouns higher in precision will have a processing disadvantage in a lexical decision task. We contrast precision with concreteness, a common measure of abstractness based on the proportion of sensory-based information associated with a concept. Since concreteness facilitates cognitive processing, we predict that while both concreteness and precision are measures of abstractness, they will have opposite effects on performance. In two studies we found empirical support for our hypothesis. Precision and concreteness had opposite effects on latency and accuracy in a lexical decision task, and these opposite effects were observable while controlling for word length, word frequency, affective content, and semantic diversity. Our results support the view that the organization of concepts includes amodal semantic structures that are independent of sensory information. They also suggest that we should distinguish between sensory-based and amount-of-information-based abstractness.
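    The two proposed measures can be sketched on a toy taxonomy (illustrative only; the taxonomy, data structure, and function names are assumptions, not the authors' implementation):

```python
# Toy taxonomy stored as a child -> parent map; the root has parent None.
PARENT = {
    "entity": None,
    "animal": "entity", "dog": "animal", "poodle": "dog",
    "plant": "entity", "tree": "plant",
}

def depth(concept: str) -> int:
    """Path length from a concept to the root of the taxonomy (first measure)."""
    d = 0
    while PARENT[concept] is not None:
        concept = PARENT[concept]
        d += 1
    return d

def descendants(concept: str) -> int:
    """Number of direct and indirect descendants of a concept (second measure)."""
    children = [c for c, p in PARENT.items() if p == concept]
    return len(children) + sum(descendants(c) for c in children)

print(depth("poodle"), descendants("animal"))  # → 3 2
```

    Under either measure, "poodle" carries more information (is higher in precision) than "animal", which in turn is higher than the root "entity".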

  5. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√N_sim rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√N_sim limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
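    A toy NumPy illustration of the underlying problem (not the authors' sparse estimator; the matrix and sample sizes are assumptions): inverting the sample covariance yields a precision estimate whose error shrinks only as roughly 1/sqrt(N_sim), which is the baseline a sparsity-exploiting method improves on:

```python
import numpy as np

rng = np.random.default_rng(0)

# A sparse (tridiagonal) true precision matrix and its implied covariance:
true_prec = np.array([[2.0, -1.0, 0.0],
                      [-1.0, 2.0, -1.0],
                      [0.0, -1.0, 2.0]])
cov = np.linalg.inv(true_prec)

def precision_error(n_sim: int) -> float:
    """Frobenius error of the sample precision matrix built from n_sim draws."""
    samples = rng.multivariate_normal(np.zeros(3), cov, size=n_sim)
    sample_cov = np.cov(samples, rowvar=False)
    return float(np.linalg.norm(np.linalg.inv(sample_cov) - true_prec))

for n in (100, 400, 1600):
    print(n, round(precision_error(n), 3))
```

    The error roughly halves with each fourfold increase in the number of simulations, consistent with the 1/√N_sim rate discussed above.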

  6. Haptic Guidance Needs to Be Intuitive Not Just Informative to Improve Human Motor Accuracy

    PubMed Central

    Mugge, Winfred; Kuling, Irene A.; Brenner, Eli; Smeets, Jeroen B. J.

    2016-01-01

    Humans make both random and systematic errors when reproducing learned movements. Intuitive haptic guidance that assists one to make the movements reduces such errors. Our study examined whether any additional haptic information about the location of the target reduces errors in a position reproduction task, or whether the haptic guidance needs to be assistive to do so. Holding a haptic device, subjects made reaches to visible targets without time constraints. They did so in a no-guidance condition, and in guidance conditions in which the direction of the force with respect to the target differed, but the force scaled with the distance to the target in the same way. We examined whether guidance forces directed towards the target would reduce subjects’ errors in reproducing a prior position to the same extent as do forces rotated by 90 degrees or 180 degrees, as it might because the forces provide the same information in all three cases. Without vision of the arm, both the accuracy and precision were significantly better with guidance directed towards the target than in all other conditions. The errors with rotated guidance did not differ from those without guidance. Not surprisingly, the movements tended to be faster when guidance forces directed the reaches to the target. This study shows that haptic guidance significantly improved motor performance when using it was intuitive, while non-intuitively presented information did not lead to any improvements and seemed to be ignored even in our simple paradigm with static targets and no time constraints. PMID:26982481

  7. Haptic Guidance Needs to Be Intuitive Not Just Informative to Improve Human Motor Accuracy.

    PubMed

    Mugge, Winfred; Kuling, Irene A; Brenner, Eli; Smeets, Jeroen B J

    2016-01-01

    Humans make both random and systematic errors when reproducing learned movements. Intuitive haptic guidance that assists one to make the movements reduces such errors. Our study examined whether any additional haptic information about the location of the target reduces errors in a position reproduction task, or whether the haptic guidance needs to be assistive to do so. Holding a haptic device, subjects made reaches to visible targets without time constraints. They did so in a no-guidance condition, and in guidance conditions in which the direction of the force with respect to the target differed, but the force scaled with the distance to the target in the same way. We examined whether guidance forces directed towards the target would reduce subjects' errors in reproducing a prior position to the same extent as do forces rotated by 90 degrees or 180 degrees, as it might because the forces provide the same information in all three cases. Without vision of the arm, both the accuracy and precision were significantly better with guidance directed towards the target than in all other conditions. The errors with rotated guidance did not differ from those without guidance. Not surprisingly, the movements tended to be faster when guidance forces directed the reaches to the target. This study shows that haptic guidance significantly improved motor performance when using it was intuitive, while non-intuitively presented information did not lead to any improvements and seemed to be ignored even in our simple paradigm with static targets and no time constraints.

  8. Parameter Accuracy in Meta-Analyses of Factor Structures

    ERIC Educational Resources Information Center

    Gnambs, Timo; Staufenbiel, Thomas

    2016-01-01

    Two new methods for the meta-analysis of factor loadings are introduced and evaluated by Monte Carlo simulations. The direct method pools each factor loading individually, whereas the indirect method synthesizes correlation matrices reproduced from factor loadings. The results of the two simulations demonstrated that the accuracy of…

  9. Precision-controlled elution of a 82Sr/82Rb generator for cardiac perfusion imaging with positron emission tomography

    NASA Astrophysics Data System (ADS)

    Klein, R.; Adler, A.; Beanlands, R. S.; de Kemp, R. A.

    2007-02-01

    A rubidium-82 (82Rb) elution system is described for use with positron emission tomography. Due to the short half-life of 82Rb (76 s), the system physics must be modelled precisely to account for transport delay and the associated activity decay and dispersion. Saline flow is switched between a 82Sr/82Rb generator and a bypass line to achieve a constant-activity elution of 82Rb. Pulse width modulation (PWM) of a solenoid valve is compared to simple threshold control as a means to simulate a proportional valve. A predictive-corrective control (PCC) algorithm is developed which produces a constant-activity elution within the constraints of long feedback delay and short elution time. The system model parameters are adjusted through a self-tuning algorithm to minimize error versus the requested time-activity profile. The system is self-calibrating with 2.5% repeatability, independent of generator activity and elution flow rate. Accurate 30 s constant-activity elutions of 10-70% of the total generator activity are achieved using both control methods. The combined PWM-PCC method provides significant improvement in precision and accuracy of the requested elution profiles. The 82Rb elution system produces accurate and reproducible constant-activity elution profiles of 82Rb activity, independent of parent 82Sr activity in the generator. More reproducible elution profiles may improve the quality of clinical and research PET perfusion studies using 82Rb.
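    Because the 76 s half-life quoted above makes transport delay significant, the associated activity loss is a simple exponential decay; a minimal sketch (the delay value is hypothetical):

```python
import math

RB82_HALF_LIFE_S = 76.0  # half-life of 82Rb, as given in the abstract

def surviving_fraction(delay_s: float) -> float:
    """Fraction of 82Rb activity remaining after a transport delay of delay_s seconds."""
    return math.exp(-math.log(2.0) * delay_s / RB82_HALF_LIFE_S)

# Activity remaining after a hypothetical 10 s line delay between generator and outlet:
print(round(surviving_fraction(10.0), 3))  # → 0.913
```

    A control system aiming for a constant-activity elution must compensate for this decay (plus dispersion) over the feedback delay, which is why the abstract's predictive-corrective approach is needed.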

  10. Precision Muonium Spectroscopy

    NASA Astrophysics Data System (ADS)

    Jungmann, Klaus P.

    2016-09-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure makes it well suited to precision experiments that test fundamental physics theories and determine accurate values of fundamental constants. In particular, ground-state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s-2s transition in this hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements, fundamental physical interactions, in particular quantum electrodynamics, can also be tested at the highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium-antimuonium conversion to test charged lepton number conservation, and searches for possible antigravity of muons and dark matter.

  11. How Physics Got Precise

    SciTech Connect

    Kleppner, Daniel

    2005-01-19

    Although the ancients knew the length of the year to about ten parts per million, it was not until the end of the 19th century that precision measurements came to play a defining role in physics. Eventually such measurements made it possible to replace human-made artifacts for the standards of length and time with natural standards. For a new generation of atomic clocks, time keeping could be so precise that the effects of the local gravitational potentials on the clock rates would be important. This would force us to re-introduce an artifact into the definition of the second - the location of the primary clock. I will describe some of the events in the history of precision measurements that have led us to this pleasing conundrum, and some of the unexpected uses of atomic clocks today.

  12. Precision gap particle separator

    DOEpatents

    Benett, William J.; Miles, Robin; Jones, II., Leslie M.; Stockton, Cheryl

    2004-06-08

    A system for separating particles entrained in a fluid includes a base with a first channel and a second channel. A precision gap connects the first channel and the second channel. The precision gap is of a size that allows small particles to pass from the first channel into the second channel and prevents large particles from passing from the first channel into the second channel. A cover is positioned over the base, the first channel, the precision gap, and the second channel. An input port directs the fluid containing the entrained particles into the first channel. An output port directs the large particles out of the first channel. A port connected to the second channel directs the small particles out of the second channel.

  13. Precision manometer gauge

    DOEpatents

    McPherson, M.J.; Bellman, R.A.

    1982-09-27

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  14. Precision manometer gauge

    DOEpatents

    McPherson, Malcolm J.; Bellman, Robert A.

    1984-01-01

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  15. Precision Heating Process

    NASA Technical Reports Server (NTRS)

    1992-01-01

    A heat sealing process was developed by SEBRA based on technology that originated in work with NASA's Jet Propulsion Laboratory. The project involved connecting and transferring blood and fluids between sterile plastic containers while maintaining a closed system. SEBRA markets the PIRF Process to manufacturers of medical catheters. It is a precisely controlled method of heating thermoplastic materials in a mold to form or weld catheters and other products. The process offers advantages in fast, precise welding or shape forming of catheters as well as applications in a variety of other industries.

  16. A primer on precision medicine informatics.

    PubMed

    Sboner, Andrea; Elemento, Olivier

    2016-01-01

    In this review, we describe key components of a computational infrastructure for a precision medicine program that is based on clinical-grade genomic sequencing. Specific aspects covered in this review include software components and hardware infrastructure, reporting, integration into Electronic Health Records for routine clinical use and regulatory aspects. We emphasize informatics components related to reproducibility and reliability in genomic testing, regulatory compliance, traceability and documentation of processes, integration into clinical workflows, privacy requirements, prioritization and interpretation of results to report based on clinical needs, rapidly evolving knowledge base of genomic alterations and clinical treatments and return of results in a timely and predictable fashion. We also seek to differentiate between the use of precision medicine in germline and cancer.

  17. Accurate, reproducible measurement of blood pressure.

    PubMed Central

    Campbell, N R; Chockalingam, A; Fodor, J G; McKay, D W

    1990-01-01

    The diagnosis of mild hypertension and the treatment of hypertension require accurate measurement of blood pressure. Blood pressure readings are altered by various factors that influence the patient, the techniques used and the accuracy of the sphygmomanometer. The variability of readings can be reduced if informed patients prepare in advance by emptying their bladder and bowel, by avoiding over-the-counter vasoactive drugs the day of measurement and by avoiding exposure to cold, caffeine consumption, smoking and physical exertion within half an hour before measurement. The use of standardized techniques to measure blood pressure will help to avoid large systematic errors. Poor technique can account for differences in readings of more than 15 mm Hg and ultimately misdiagnosis. Most of the recommended procedures are simple and, when routinely incorporated into clinical practice, require little additional time. The equipment must be appropriate and in good condition. Physicians should have a suitable selection of cuff sizes readily available; the use of the correct cuff size is essential to minimize systematic errors in blood pressure measurement. Semiannual calibration of aneroid sphygmomanometers and annual inspection of mercury sphygmomanometers and blood pressure cuffs are recommended. We review the methods recommended for measuring blood pressure and discuss the factors known to produce large differences in blood pressure readings. PMID:2192791

  18. Semiautomated, Reproducible Batch Processing of Soy

    NASA Technical Reports Server (NTRS)

    Thoerne, Mary; Byford, Ivan W.; Chastain, Jack W.; Swango, Beverly E.

    2005-01-01

    A computer-controlled apparatus processes batches of soybeans into one or more of a variety of food products, under conditions that can be chosen by the user and reproduced from batch to batch. Examples of products include soy milk, tofu, okara (an insoluble protein and fiber byproduct of soy milk), and whey. Most processing steps take place without intervention by the user. This apparatus was developed for use in research on processing of soy. It is also a prototype of other soy-processing apparatuses for research, industrial, and home use. Prior soy-processing equipment includes household devices that automatically produce soy milk but do not automatically produce tofu. The designs of prior soy-processing equipment require users to manually transfer intermediate solid soy products and to press them manually and, hence, under conditions that are not consistent from batch to batch. Prior designs do not afford choices of processing conditions: Users cannot use previously developed soy-processing equipment to investigate the effects of variations of techniques used to produce soy milk (e.g., cold grinding, hot grinding, and pre-cook blanching) and of such process parameters as cooking times and temperatures, grinding times, soaking times and temperatures, rinsing conditions, and sizes of particles generated by grinding. In contrast, the present apparatus is amenable to such investigations. The apparatus (see figure) includes a processing tank and a jacketed holding or coagulation tank. The processing tank can be capped by either of two different heads and can contain either of two different insertable mesh baskets. The first head includes a grinding blade and heating elements. The second head includes an automated press piston. One mesh basket, designated the okara basket, has oblong holes with a size equivalent to about 40 mesh [40 openings per inch (16 openings per centimeter)]. The second mesh basket, designated the tofu basket, has holes of 70 mesh [70 openings

  19. Research Reproducibility in Geosciences: Current Landscape, Practices and Perspectives

    NASA Astrophysics Data System (ADS)

    Yan, An

    2016-04-01

Reproducibility of research can gauge the validity of its findings. Yet we currently lack an understanding of how much of a problem research reproducibility is in geosciences. We conducted an online survey of faculty and graduate students in geosciences, and received 136 responses from research institutions and universities in the Americas, Asia, Europe and other parts of the world. This survey examined (1) the current state of research reproducibility in geosciences, by asking about researchers' experiences with unsuccessful replication work and the obstacles that led to their replication failures; (2) current reproducibility practices in the community, by asking what efforts researchers made to reproduce others' work and to make their own work reproducible, and what underlying factors contribute to irreproducibility; and (3) perspectives on reproducibility, by collecting researchers' thoughts and opinions on this issue. The survey results indicated that nearly 80% of respondents who had ever tried to reproduce a published study had failed at least once. Only one third of the respondents received helpful feedback when they contacted the authors of a published study for data, code, or other information. The primary factors behind unsuccessful replication attempts were insufficient detail in the instructions given in the published literature, and inaccessibility of the data, code and tools needed in the study. Our findings suggest a remarkable lack of research reproducibility in geosciences. Changing the incentive mechanisms in academia, as well as developing policies and tools that facilitate open data and code sharing, are promising ways for the geosciences community to alleviate this reproducibility problem.

  20. Teaching with Precision.

    ERIC Educational Resources Information Center

    Raybould, Ted; Solity, Jonathan

    1982-01-01

    Use of precision teaching principles with learning problem students involves five steps: specifying performance, recording daily behavior, charting daily behavior, recording the teaching approach, and analyzing data. The approach has been successfully implemented through consultation of school psychologists in Walsall, England. (CL)

  1. Precision bolometer bridge

    NASA Technical Reports Server (NTRS)

    White, D. R.

    1968-01-01

A prototype precision bolometer calibration bridge is a manually balanced device for indicating dc bias and balance with either dc or ac power. An external galvanometer is used with the bridge for null indication, and the circuitry monitors voltage and current simultaneously, without adapters, in testing 100- and 200-ohm thin-film bolometers.

  2. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    1985-01-29

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge. 2 figs.

  3. Precision liquid level sensor

    DOEpatents

    Field, Michael E.; Sullivan, William H.

    1985-01-01

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  4. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

Results from operational orbit determination (OD) produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended missions are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement at the 100- to 250-meter level in definitive accuracy.

  5. Towards Reproducible Descriptions of Neuronal Network Models

    PubMed Central

    Nordlie, Eilen; Gewaltig, Marc-Oliver; Plesser, Hans Ekkehard

    2009-01-01

    Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing—and thinking about—complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain. PMID:19662159

  6. Investigating the ultimate accuracy of Doppler-broadening thermometry by means of a global fitting procedure

    NASA Astrophysics Data System (ADS)

    Amodio, Pasquale; De Vizia, Maria Domenica; Moretti, Luigi; Gianfrani, Livio

    2015-09-01

Doppler-limited, high-precision molecular spectroscopy in the linear regime of interaction may refine our knowledge of the Boltzmann constant. To this end, the global uncertainty in the retrieval of the Doppler width should be reduced to 1 part in 10^6, which is a rather challenging target. So far, Doppler-broadening thermometry has been limited mostly by the uncertainty associated with the line-shape model adopted for the nonlinear least-squares fits of experimental spectra. In this paper, we investigate this issue in depth by using a very realistic and sophisticated model, known as the partially correlated speed-dependent Keilson-Storer profile, to reproduce near-infrared water spectra. A global approach has been developed to fit a large number of numerically simulated spectra, testing a variety of simplified line-shape models. It turns out that the most appropriate model is the speed-dependent hard-collision profile. We demonstrate that the Doppler width can be determined with a relative precision and accuracy of 0.42 and 0.75 parts per million, respectively.
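
    The relation underlying this thermometry can be sketched numerically. A minimal illustration (the line frequency below is an assumed round value near the 1.39 µm water band, not taken from the record): the Doppler FWHM follows Δν_D = (ν₀/c)·sqrt(8 ln2 · k_B T / m), and inverting it is how Doppler-broadening thermometry ties a measured width to temperature (or, with T fixed, to the Boltzmann constant).

    ```python
    import math

    # CODATA constants
    K_B = 1.380649e-23       # Boltzmann constant, J/K
    C = 299792458.0          # speed of light, m/s
    AMU = 1.66053906660e-27  # atomic mass unit, kg

    def doppler_fwhm(nu0_hz, temp_k, mass_kg):
        """Doppler-broadened FWHM: (nu0/c) * sqrt(8 ln2 kB T / m)."""
        return (nu0_hz / C) * math.sqrt(8 * math.log(2) * K_B * temp_k / mass_kg)

    def temperature_from_fwhm(nu0_hz, fwhm_hz, mass_kg):
        """Invert the relation above to recover T from a measured width."""
        return mass_kg * C**2 * fwhm_hz**2 / (8 * math.log(2) * K_B * nu0_hz**2)

    # Assumed H2O line near 1.39 um (nu0 ~ 2.16e14 Hz), room temperature
    nu0, m_h2o = 2.16e14, 18.0106 * AMU
    width = doppler_fwhm(nu0, 296.0, m_h2o)            # a few hundred MHz
    t_back = temperature_from_fwhm(nu0, width, m_h2o)  # round-trips to 296 K
    ```

    Since T ∝ Δν_D², a 1 ppm uncertainty on the width propagates to roughly 2 ppm on T, which is why the line-shape model dominates the error budget.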

  7. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but also the full orbital parameters determined, allowing study of system stability in multiple-planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.
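
    Why sub-microarcsecond precision is needed for Earth-mass planets can be checked with the standard astrometric-signature formula α = (M_p/M_*)·(a/d), with a in AU and d in pc giving α in arcseconds (a back-of-envelope sketch; the example numbers are illustrative, not from the record):

    ```python
    def astrometric_signature_uas(planet_mass_earth, star_mass_sun, a_au, dist_pc):
        """Amplitude of the stellar wobble alpha = (Mp/M*) * (a/d),
        with a in AU and d in pc, returned in microarcseconds."""
        M_EARTH_IN_SUN = 3.003e-6  # Earth mass in solar masses
        alpha_arcsec = (planet_mass_earth * M_EARTH_IN_SUN / star_mass_sun) * a_au / dist_pc
        return alpha_arcsec * 1e6

    # An Earth analogue at 1 AU around a solar-mass star 10 pc away:
    # ~0.3 uas, below SIM Lite's 1 uas single-measurement level, hence
    # the need to average many measurements for the nearest stars.
    alpha_earth = astrometric_signature_uas(1.0, 1.0, 1.0, 10.0)
    ```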

  8. [Accuracy of HDL cholesterol measurements].

    PubMed

    Niedmann, P D; Luthe, H; Wieland, H; Schaper, G; Seidel, D

    1983-02-01

The widespread use of different methods for the determination of HDL cholesterol (in Europe: sodium phosphotungstic acid/MgCl2 in connection with enzymatic procedures; in the USA: heparin/MnCl2 followed by the Liebermann-Burchard method) alongside common reference values makes it necessary to evaluate not only the accuracy, specificity, and precision of the precipitation step but also those of the subsequent cholesterol determination. A high ratio of serum to concentrated precipitation reagent (10:1 V/V) leads to the formation of variable amounts of delta-3,5-cholestadiene. This substance is not recognized by cholesterol oxidase but leads to a 1.6-fold overestimation by the Liebermann-Burchard method. Therefore, errors in HDL-cholesterol determination should be considered, and differences of up to 30% may occur between HDL-cholesterol values determined by the different techniques (heparin/MnCl2 - Liebermann-Burchard and NaPW/MgCl2 - CHOD-PAP).

  9. Measurement accuracy and Cerenkov removal for high performance, high spatial resolution scintillation dosimetry

    SciTech Connect

    Archambault, Louis; Beddar, A. Sam; Gingras, Luc

    2006-01-15

With highly conformal radiation therapy techniques such as intensity-modulated radiation therapy, radiosurgery, and tomotherapy becoming more common in clinical practice, the use of these narrow beams requires a higher level of precision in quality assurance and dosimetry. Plastic scintillators with their water equivalence, energy independence, and dose rate linearity have been shown to possess excellent qualities that suit the most complex and demanding radiation therapy treatment plans. The primary disadvantage of plastic scintillators is the presence of Cerenkov radiation generated in the light guide, which results in an undesired stem effect. Several techniques have been proposed to minimize this effect. In this study, we compared three such techniques--background subtraction, simple filtering, and chromatic removal--in terms of reproducibility and dose accuracy as gauges of their ability to remove the Cerenkov stem effect from the dose signal. The dosimeter used in this study comprised a 6-mm³ plastic scintillating fiber probe, an optical fiber, and a color charge-coupled device camera. The whole system was shown to be linear and the total light collected by the camera was reproducible to within 0.31% for 5-s integration time. Background subtraction and chromatic removal were both found to be suitable for precise dose evaluation, with average absolute dose discrepancies of 0.52% and 0.67%, respectively, from ion chamber values. Background subtraction required two optical fibers, but chromatic removal used only one, thereby preventing possible measurement artifacts when a strong dose gradient was perpendicular to the optical fiber. Our findings showed that a plastic scintillation dosimeter could be made free of the effect of Cerenkov radiation.
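
    One common formulation of chromatic removal treats the two color channels as a linear mixture of the scintillation and Cerenkov light, so that two calibration conditions suffice to separate them. A minimal sketch (the calibration matrix below is invented for illustration, not measured data from the study):

    ```python
    import numpy as np

    # Hypothetical calibration: rows = (blue, green) channel response to one
    # unit of (scintillation, cerenkov) light.
    A = np.array([[1.00, 0.45],
                  [0.30, 0.80]])

    def unmix(blue, green):
        """Recover (scintillation, cerenkov) components from channel readings
        by solving the 2x2 linear system A @ [s, c] = [blue, green]."""
        scint, ceren = np.linalg.solve(A, np.array([blue, green]))
        return scint, ceren

    # A reading contaminated by a known Cerenkov contribution round-trips exactly
    true_scint, true_ceren = 10.0, 2.0
    blue, green = A @ np.array([true_scint, true_ceren])
    s, c = unmix(blue, green)
    ```

    This is why a single fiber suffices: the stem signal is solved for rather than measured separately on a second, scintillator-free fiber.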

  10. The effect of a range of disinfectants on the dimensional accuracy of some impression materials.

    PubMed

    Jagger, D C; Al Jabra, O; Harrison, A; Vowles, R W; McNally, L

    2004-12-01

In this study, the dimensional accuracy of two model materials (dental stone and plaster of Paris), reproduced from three commonly used impression materials (alginate, polyether and addition-cured silicone) retained by their adhesives in acrylic resin trays and exposed to four disinfectant solutions, was evaluated. Ninety casts were used to investigate the effect of the four disinfectants on the dimensional accuracy of alginate, polyether and addition-cured silicone impression material. For each impression material 30 impressions were taken; half were poured in dental stone and half in plaster of Paris. The disinfectants used were Dimenol, Perform-ID, MD-520, and Haz-tabs. Measurements were carried out using a High Precision Reflex Microscope. Of the alginate impressions, only those disinfected by 5-minute immersion in Haz-tabs solution and in full-strength MD-520 were not adversely affected by the disinfection treatment. All polyether impressions subjected to immersion disinfection exhibited a clinically acceptable expansion. Disinfected addition-cured silicone impressions produced very accurate stone casts. Those disinfected by spraying with full-strength Dimenol produced casts that were very similar to the controls, while those treated by immersion disinfection exhibited negligible, clinically acceptable expansion. The results of the study demonstrated that the various disinfection treatments had different effects on the impression materials. It is important that an appropriate disinfectant is used for each type of impression material.

  11. Aiming for benchmark accuracy with the many-body expansion.

    PubMed

    Richard, Ryan M; Lao, Ka Un; Herbert, John M

    2014-09-16

Conspectus: The past 15 years have witnessed an explosion of activity in the field of fragment-based quantum chemistry, whereby ab initio electronic structure calculations are performed on very large systems by decomposing them into a large number of relatively small subsystem calculations and then reassembling the subsystem data in order to approximate supersystem properties. Most of these methods are based, at some level, on the so-called many-body (or "n-body") expansion, which ultimately requires calculations on monomers, dimers, ..., n-mers of fragments. To the extent that a low-order n-body expansion can reproduce supersystem properties, such methods replace an intractable supersystem calculation with a large number of easily distributable subsystem calculations. This holds great promise for performing, for example, "gold standard" CCSD(T) calculations on large molecules, clusters, and condensed-phase systems. The literature is awash in a litany of fragment-based methods, each with its own working equations and terminology, which presents a formidable language barrier to the uninitiated reader. We have sought to unify these methods under a common formalism, by means of a generalized many-body expansion that provides a universal energy formula encompassing not only traditional n-body cluster expansions but also methods designed for macromolecules, in which the supersystem is decomposed into overlapping fragments. This formalism allows various fragment-based methods to be systematically classified, primarily according to how the fragments are constructed and how higher-order n-body interactions are approximated. This classification furthermore suggests systematic ways to improve the accuracy. Whereas n-body approaches have been thoroughly tested at low levels of theory in small noncovalent clusters, we have begun to explore the efficacy of these methods for large systems, with the goal of reproducing benchmark-quality calculations, ideally meaning complete
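
    The expansion can be made concrete with a toy model (illustrative only: the fragment "energies" below come from an invented function, not any electronic-structure method). Each n-body increment is the subsystem energy minus all lower-order increments it contains; truncating the sum trades a little accuracy for far fewer and far smaller subsystem calculations.

    ```python
    from itertools import combinations

    def E(frags):
        """Toy 'supersystem energy': monomer terms, pairwise couplings,
        and a weak 3-body term (so a 2-body truncation is close but inexact)."""
        e = sum(frags)
        for i, j in combinations(frags, 2):
            e += -0.1 * i * j
        for i, j, k in combinations(frags, 3):
            e += 0.001 * i * j * k
        return e

    def mbe(frags, order):
        """Truncated many-body expansion:
        E ~ sum_i E_i + sum_{i<j} dE_ij + sum_{i<j<k} dE_ijk + ..."""
        eps, total = {}, 0.0
        for n in range(1, order + 1):
            for S in combinations(range(len(frags)), n):
                # n-body increment = subsystem energy minus all lower increments
                sub = sum(eps[T] for m in range(1, n) for T in combinations(S, m))
                eps[S] = E([frags[i] for i in S]) - sub
                total += eps[S]
        return total

    frags = [1.0, 2.0, 3.0, 4.0]
    exact = E(frags)
    e2 = mbe(frags, 2)  # misses only the weak 3-body couplings
    e3 = mbe(frags, 3)  # exact here, since E has no 4-body term
    ```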

  12. High-precision positioning of radar scatterers

    NASA Astrophysics Data System (ADS)

    Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.

    2016-05-01

    Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is in the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ˜ 66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in azimuth and range dimensions, and elongated in cross-range dimension with a precision in the order of meters (the ratio of the ellipsoid axis lengths is 1/3/213, respectively). The CR absolute 3D position, along with the associated error ellipsoid, is found to be accurate and agree with the ground truth position at a 99 % confidence level. For other non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers to real-world objects.
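
    The error-ellipsoid construction described above follows from an eigendecomposition of the position variance-covariance matrix: the ellipsoid semi-axes are sqrt(χ²·λ_i) along the eigenvectors. A minimal sketch (the covariance values are invented to mimic the cigar shape reported, and 11.345 is the 99% chi-square quantile for 3 degrees of freedom):

    ```python
    import numpy as np

    # Hypothetical 3D covariance (m^2) in (azimuth, range, cross-range):
    # centimeter-level in azimuth/range, meter-level in cross-range.
    cov = np.diag([0.03**2, 0.02**2, 4.0**2])

    def error_ellipsoid(cov, chi2=11.345):
        """Semi-axis lengths and orientations of the confidence ellipsoid
        x^T cov^{-1} x = chi2."""
        vals, vecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
        return np.sqrt(chi2 * vals), vecs

    axes, directions = error_ellipsoid(cov)
    # The elongated cross-range axis dominates: the cigar shape that makes
    # intersecting the ellipsoid with 3D building models informative.
    ```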

  13. High-precision laser machining of ceramics

    NASA Astrophysics Data System (ADS)

    Toenshoff, Hans K.; von Alvensleben, Ferdinand; Graumann, Christoph; Willmann, Guido

    1998-09-01

    The increasing demand for highly developed ceramic materials for various applications calls for innovative machining technologies yielding high accuracy and efficiency. Associated problems with conventional, i.e. mechanical methods, are unacceptable tool wear as well as force induced damages on ceramic components. Furthermore, the established grinding techniques often meet their limits if accurate complex 2D or 3D structures are required. In contrast to insufficient mechanical processes, UV-laser precision machining of ceramics offers not only a valuable technological alternative but a considerable economical aspect as well. In particular, excimer lasers provide a multitude of advantages for applications in high precision and micro technology. Within the UV wavelength range and pulses emitted in the nano-second region, minimal thermal effects on ceramics and polymers are observed. Thus, the ablation geometry can be controlled precisely in the lateral and vertical directions. In this paper, the excimer laser machining technology developed at the Laser Zentrum Hannover is explained. Representing current and future industrial applications, examinations concerning the precision cutting of alumina (Al2O3), and HF-composite materials, the ablation of ferrite ceramics for precision inductors and the structuring of SiC sealing and bearing rings are presented.

  14. Reproducing American Sign Language sentences: cognitive scaffolding in working memory

    PubMed Central

    Supalla, Ted; Hauser, Peter C.; Bavelier, Daphne

    2014-01-01

The American Sign Language Sentence Reproduction Test (ASL-SRT) requires the precise reproduction of a series of ASL sentences increasing in complexity and length. Error analyses of such tasks provide insight into working memory and scaffolding processes. Data were collected from three groups expected to differ in fluency: deaf children, deaf adults and hearing adults, all users of ASL. Quantitative (correct/incorrect recall) and qualitative error analyses were performed. Percent correct on the reproduction task supports its sensitivity to fluency, as test performance clearly differed across the three groups studied. A linguistic analysis of errors further documented differing strategies and bias across groups. Subjects' recall projected the affordances and constraints of deep linguistic representations to differing degrees, with subjects resorting to alternate processing strategies when they failed to recall the sentence correctly. A qualitative error analysis allows us to capture generalizations about the relationship between error pattern and the cognitive scaffolding, which governs the sentence reproduction process. Highly fluent signers and less-fluent signers share common chokepoints on particular words in sentences. However, they diverge in heuristic strategy. Fluent signers, when they make an error, tend to preserve semantic details while altering morpho-syntactic domains. They produce syntactically correct sentences with equivalent meaning to the to-be-reproduced one, but these are not verbatim reproductions of the original sentence. In contrast, less-fluent signers tend to use a more linear strategy, preserving lexical status and word ordering while omitting local inflections, and occasionally resorting to visuo-motoric imitation. Thus, whereas fluent signers readily use top-down scaffolding in their working memory, less fluent signers fail to do so. Implications for current models of working memory across spoken and signed modalities are considered.

  15. A passion for precision

    ScienceCinema

    None

    2016-07-12

For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science. Organiser(s): L. Alvarez-Gaume / PH-TH. Note: Tea & coffee will be served at 16:00.

  16. A passion for precision

    SciTech Connect

    2010-05-19

For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science. Organiser(s): L. Alvarez-Gaume / PH-TH. Note: Tea & coffee will be served at 16:00.

  17. The Precision Field Lysimeter Concept

    NASA Astrophysics Data System (ADS)

    Fank, J.

    2009-04-01

The understanding and interpretation of leaching processes have improved significantly during the past decades. Unlike laboratory experiments, which are mostly performed under very controlled conditions (e.g. homogeneous, uniform packing of pre-treated test material, saturated steady-state flow conditions, and controlled uniform hydraulic conditions), lysimeter experiments generally simulate actual field conditions. Lysimeters may be classified according to different criteria such as type of soil block used (monolithic or reconstructed), drainage (drainage by gravity or vacuum, or a water table may be maintained), or weighing or non-weighing lysimeters. In 2004, experimental investigations were set up to assess the impact of different farming systems on the groundwater quality of the shallow floodplain aquifer of the river Mur in Wagna (Styria, Austria). The sediment is characterized by a thin layer (30 - 100 cm) of sandy Dystric Cambisol and underlying gravel and sand. Three precisely weighing equilibrium tension block lysimeters have been installed in agricultural test fields to compare water flow and solute transport under (i) organic farming, (ii) conventional low-input farming and (iii) extensification by mulching grass. Specific monitoring equipment is used to reduce the well-known shortcomings of lysimeter investigations: the lysimeter core is excavated as an undisturbed monolithic block (circular, 1 m2 surface area, 2 m depth) to prevent destruction of the natural soil structure and pore system. Tracing experiments have been carried out to investigate the occurrence of artificial preferential flow and transport along the walls of the lysimeters. The results show that such effects can be neglected. Precise load cells are used to continuously determine the weight loss of the lysimeter due to evaporation and transpiration and to measure different forms of precipitation. The accuracy of the weighing apparatus is 0.05 kg, or 0.05 mm water equivalent.
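
    The quoted mass resolution maps to a water-equivalent depth through depth = Δm / (ρ·A). A one-line check, assuming the 1 m² surface area stated above:

    ```python
    def mass_to_water_mm(delta_mass_kg, area_m2=1.0, rho_kg_m3=1000.0):
        """Convert a lysimeter mass change to millimetres of water equivalent:
        depth [mm] = mass / (density * area) * 1000."""
        return delta_mass_kg / (rho_kg_m3 * area_m2) * 1000.0

    # The stated 0.05 kg resolution on a 1 m^2 monolith is 0.05 mm of water
    resolution_mm = mass_to_water_mm(0.05)
    ```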

  18. Precise strong lensing mass profile of the CLASH galaxy cluster MACS 2129

    NASA Astrophysics Data System (ADS)

    Monna, A.; Seitz, S.; Balestra, I.; Rosati, P.; Grillo, C.; Halkola, A.; Suyu, S. H.; Coe, D.; Caminha, G. B.; Frye, B.; Koekemoer, A.; Mercurio, A.; Nonino, M.; Postman, M.; Zitrin, A.

    2017-01-01

We present a detailed strong-lensing mass reconstruction of the core of the galaxy cluster MACSJ 2129.4-0741 (z_cl = 0.589), obtained by combining high-resolution HST photometry from the CLASH survey with new spectroscopic observations from the CLASH-VLT survey. A bright background red passive galaxy at z_sp = 1.36, sextuply lensed in the cluster core, has four radial lensed images located over the three central cluster members. A further 19 background lensed galaxies are spectroscopically confirmed by our VLT survey, including 3 additional multiple systems. A total of 31 multiple images are used in the lensing analysis. This allows us to trace with high precision the total mass profile of the cluster in its very inner region (R < 100 kpc). Our final lensing mass model reproduces the multiple-image systems identified in the cluster core with a high accuracy of 0.4″. This translates into a high-precision mass reconstruction of MACS 2129, which is constrained at the 2% level. The cluster has Einstein parameter Θ_E = (29 ± 4)″ and a projected total mass of M_tot(<Θ_E) = (1.35 ± 0.03) × 10^14 M_⊙ within that radius. Together with the cluster mass profile, we also provide the complete spectroscopic dataset for the cluster members and lensed images measured with VLT/VIMOS within the CLASH-VLT survey.

  19. Principles and techniques for designing precision machines

    SciTech Connect

    Hale, Layton Carter

    1999-02-01

This thesis is written to advance the reader's knowledge of precision-engineering principles and their application to designing machines that achieve both sufficient precision and minimum cost. It provides the concepts and tools necessary for the engineer to create new precision machine designs. Four case studies demonstrate the principles and showcase approaches and solutions to specific problems that generally have wider applications. These come from projects at the Lawrence Livermore National Laboratory in which the author participated: the Large Optics Diamond Turning Machine, Accuracy Enhancement of High-Productivity Machine Tools, the National Ignition Facility, and Extreme Ultraviolet Lithography. Although broad in scope, the topics go into sufficient depth to be useful to practicing precision engineers and often fulfill more academic ambitions. The thesis begins with a chapter that presents significant principles and fundamental knowledge from the Precision Engineering literature. Following this is a chapter that presents engineering design techniques that are general and not specific to precision machines. All subsequent chapters cover specific aspects of precision machine design. The first of these is Structural Design, guidelines and analysis techniques for achieving independently stiff machine structures. The next chapter addresses dynamic stiffness by presenting several techniques for Deterministic Damping, damping designs that can be analyzed and optimized with predictive results. Several chapters present a main thrust of the thesis, Exact-Constraint Design. A main contribution is a generalized modeling approach developed through the course of creating several unique designs. The final chapter is the primary case study of the thesis, the Conceptual Design of a Horizontal Machining Center.

  20. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  1. Regional analysis of volumes and reproducibilities of automatic and manual hippocampal segmentations

    PubMed Central

    Vrenken, Hugo; Bijma, Fetsje; Barkhof, Frederik; van Herk, Marcel; de Munck, Jan C.

    2017-01-01

Purpose Precise and reproducible hippocampus outlining is important to quantify hippocampal atrophy caused by neurodegenerative diseases and to spare the hippocampus in whole-brain radiation therapy when performing prophylactic cranial irradiation or treating brain metastases. This study aimed to quantify systematic differences between methods by comparing regional volume and outline reproducibility of manual, FSL-FIRST and FreeSurfer hippocampus segmentations. Materials and methods This study used a dataset from ADNI (Alzheimer's Disease Neuroimaging Initiative), including 20 healthy controls, 40 patients with mild cognitive impairment (MCI), and 20 patients with Alzheimer's disease (AD). For each subject, back-to-back (BTB) T1-weighted 3D MPRAGE images were acquired at time-point baseline (BL) and 12 months later (M12). Hippocampus segmentations from all methods were converted into triangulated meshes, regional volumes were extracted, and regional Jaccard indices were computed between the hippocampus meshes of paired BTB scans to evaluate reproducibility. Regional volumes and Jaccard indices were modelled as a function of group (G), method (M), hemisphere (H), time-point (T), region (R) and interactions. Results For the volume data, the model selection procedure yielded the significant main effects G, M, H, T and R and interaction effects G-R and M-R. The same model was found for the BTB scans. For all methods, volumes decreased with the severity of disease. Significant fixed effects for the regional Jaccard index data were M, R and the interaction M-R. For all methods the middle region was most reproducible, independent of diagnostic group. FSL-FIRST was most and FreeSurfer least reproducible. Discussion/Conclusion A novel method to perform detailed analysis of subtle differences in hippocampus segmentation is proposed. The method showed that hippocampal segmentation reproducibility was best for FSL-FIRST and worst for FreeSurfer. We also found systematic
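
    The Jaccard index used here is a set-overlap measure: the intersection of two segmentations divided by their union, with 1.0 meaning perfect agreement between paired scans. A minimal sketch on voxelized segmentations (toy volumes for illustration, not ADNI data):

    ```python
    def jaccard(a, b):
        """Jaccard index |A & B| / |A | B| between two voxel sets."""
        a, b = set(a), set(b)
        union = a | b
        return len(a & b) / len(union) if union else 1.0

    # Two hypothetical segmentations of the same structure on back-to-back
    # scans, differing by a one-voxel shift along z.
    seg_bl  = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
    seg_btb = {(x, y, z) for x in range(4) for y in range(4) for z in range(1, 5)}
    overlap = jaccard(seg_bl, seg_btb)  # 48 shared voxels over a union of 80
    ```

    Computing this per region, as the study does, localizes where along the hippocampus the segmentation methods disagree rather than summarizing them with a single global score.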

  2. Precision disablement aiming system

    SciTech Connect

    Monda, Mark J.; Hobart, Clinton G.; Gladwell, Thomas Scott

    2016-02-16

    A disrupter to a target may be precisely aimed by positioning a radiation source to direct radiation towards the target, and a detector is positioned to detect radiation that passes through the target. An aiming device is positioned between the radiation source and the target, wherein a mechanical feature of the aiming device is superimposed on the target in a captured radiographic image. The location of the aiming device in the radiographic image is used to aim a disrupter towards the target.

  3. Ultra-Precision Optics

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Under a Joint Sponsored Research Agreement with Goddard Space Flight Center, SEMATECH, Inc., the Silicon Valley Group, Inc. and Tinsley Laboratories, known as SVG-Tinsley, developed an Ultra-Precision Optics Manufacturing System for space and microlithographic applications. Continuing improvements in optics manufacture will be able to meet unique NASA requirements and the production needs of the lithography industry for many years to come.

  4. Precision laser aiming system

    DOEpatents

    Ahrens, Brandon R.; Todd, Steven N.

    2009-04-28

    A precision laser aiming system comprises a disrupter tool, a reflector, and a laser fixture. The disrupter tool, the reflector and the laser fixture are configurable for iterative alignment and aiming toward an explosive device threat. The invention enables a disrupter to be quickly and accurately set up, aligned, and aimed in order to render safe or to disrupt a target from a standoff position.

  5. Precision orbit determination of altimetric satellites

    NASA Technical Reports Server (NTRS)

    Shum, C. K.; Ries, John C.; Tapley, Byron D.

    1994-01-01

    The ability to determine accurate global sea level variations is important to both detection and understanding of changes in climate patterns. Sea level variability occurs over a wide spectrum of temporal and spatial scales, and precise global measurements have only recently become possible with the advent of spaceborne satellite radar altimetry missions. One of the inherent requirements for accurate determination of absolute sea surface topography is that the altimetric satellite orbits be computed with sub-decimeter accuracy within a well defined terrestrial reference frame. SLR tracking plays a significant role in precision orbit determination of altimetric satellites. Recent examples are the use of SLR as the primary tracking system for TOPEX/Poseidon and for ERS-1 precision orbit determination. The current radial orbit accuracy for TOPEX/Poseidon is estimated to be around 3-4 cm, with geographically correlated orbit errors around 2 cm. The significance of the SLR tracking system is its ability to allow altimetric satellites to obtain absolute sea level measurements and thereby provide a link to other altimetry measurement systems for long-term sea level studies. SLR tracking allows the production of precise orbits which are well centered in an accurate terrestrial reference frame. With proper calibration of the radar altimeter, these precise orbits, along with the altimeter measurements, provide long-term absolute sea level measurements. The U.S. Navy's Geosat mission is equipped with only Doppler beacons and lacks laser retroreflectors. Geosat orbits, even those computed using the available full 40-station Tranet tracking network, exhibit significant north-south shifts with respect to the IERS terrestrial reference frame. The resulting Geosat sea surface topography will be tilted accordingly, making interpretation of long-term sea level variability studies difficult.

  6. Precision-guided surgical navigation system using laser guidance and 3D autostereoscopic image overlay.

    PubMed

    Liao, Hongen; Ishihara, Hirotaka; Tran, Huy Hoang; Masamune, Ken; Sakuma, Ichiro; Dohi, Takeyoshi

    2010-01-01

    This paper describes a precision-guided surgical navigation system for minimally invasive surgery. The system combines a laser guidance technique with a three-dimensional (3D) autostereoscopic image overlay technique. Images of surgical anatomic structures superimposed onto the patient are created by employing an animated imaging method called integral videography (IV), which can display geometrically accurate 3D autostereoscopic images and reproduce motion parallax without the need for special viewing or tracking devices. To improve the placement accuracy of surgical instruments, we integrated an image overlay system with a laser guidance system for alignment of the surgical instrument and better visualization of the patient's internal structures. We fabricated a laser guidance device and mounted it on an IV image overlay device. Experimental evaluations showed that the system could guide a linear surgical instrument toward a target with an average error of 2.48 mm and a standard deviation of 1.76 mm. Further improvement to the design of the laser guidance device and the patient-image registration procedure of the IV image overlay will make this system practical; its use would increase surgical accuracy and reduce invasiveness.

  7. High Precision GPS Measurements

    DTIC Science & Technology

    2010-02-28

    troposphere delays with cm-level accuracy [15]. For example, the modified Hopfield model (MHM) has been shown to accurately calculate both the...differences between two locations near Raleigh, North Carolina; RALR and NCRD, which are part of the network of Continuously Operating Reference...Fritsche, M., R. Dietrich, A. Rulke, M. Rothacher, R. Steigenberger, “Impact of higher-order ionosphere terms on GPS-derived global network solutions

  8. Reproducibility of cerebral tissue oxygen saturation measurements by near-infrared spectroscopy in newborn infants

    NASA Astrophysics Data System (ADS)

    Jenny, Carmen; Biallas, Martin; Trajkovic, Ivo; Fauchère, Jean-Claude; Bucher, Hans Ulrich; Wolf, Martin

    2011-09-01

    Early detection of cerebral hypoxemia is an important aim in neonatology. A relevant parameter to assess brain oxygenation may be the cerebral tissue oxygen saturation (StO2) measured by near-infrared spectroscopy (NIRS). So far the reproducibility of StO2 measurements was too low for clinical application, probably due to inhomogeneities. The aim of this study was to test a novel sensor geometry which reduces the influence of inhomogeneities. Thirty clinically stable newborn infants, with a gestational age of median 33.9 (range 26.9 to 41.9) weeks, birth weight of 2220 (820 to 4230) g, postnatal age of 5 (1 to 71) days were studied. At least four StO2 measurements of 1 min duration were carried out using NIRS on the lateral head. The sensor was repositioned between measurements. Reproducibility was calculated by a linear mixed effects model. The mean StO2 was 79.99 +/- 4.47% with a reproducibility of 2.76% and a between-infant variability of 4.20%. Thus, the error of measurement only accounts for 30.1% of the variability. The novel sensor geometry leads to considerably more precise measurements compared to previous studies with, e.g., ~5% reproducibility for the NIRO 300. The novel StO2 values hence have a higher clinical relevance.
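
    The "30.1% of the variability" figure follows from the mixed-effects variance decomposition: the squared within-infant reproducibility divided by the total variance (measurement error plus between-infant variability). A quick check, assuming the reported values are standard deviations on the %StO2 scale:

```python
error_sd   = 2.76  # reproducibility (within-infant measurement error), %StO2
between_sd = 4.20  # between-infant variability, %StO2

# Variance components add in quadrature; the error's share of total variance:
share = error_sd**2 / (error_sd**2 + between_sd**2)
# 7.62 / 25.26, i.e. measurement error accounts for roughly 30% of the variance
```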

  9. Highly Parallel, High-Precision Numerical Integration

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2005-04-22

    This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
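
    The scheme referred to is in the double-exponential (tanh-sinh) family of quadrature rules, whose change of variable x = tanh((pi/2)·sinh t) clusters abscissas toward the interval endpoints, taming endpoint singularities and infinite derivatives. A machine-precision-only sketch of that rule (the paper itself works at hundreds to thousands of digits with parallel arbitrary-precision arithmetic):

```python
import math

def tanh_sinh(f, n=35, h=0.1):
    """Approximate the integral of f over (-1, 1) with the tanh-sinh rule:
    substitute x = tanh((pi/2)*sinh(t)) and apply the trapezoid rule in t.
    Weights decay double-exponentially, so a modest node count suffices."""
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2  # dx/dt
        total += w * f(x)
    return h * total

# sqrt(1 - x) has an infinite derivative at x = 1, which defeats ordinary
# Gaussian rules, yet tanh-sinh converges rapidly; the exact value of
# the integral over (-1, 1) is 4*sqrt(2)/3.
approx = tanh_sinh(lambda x: math.sqrt(1.0 - x))
```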

  10. Virtual Reference Environments: a simple way to make research reproducible

    PubMed Central

    Hurley, Daniel G.; Budden, David M.

    2015-01-01

    ‘Reproducible research’ has received increasing attention over the past few years as bioinformatics and computational biology methodologies become more complex. Although reproducible research is progressing in several valuable ways, we suggest that recent increases in internet bandwidth and disk space, along with the availability of open-source and free-software licences for tools, enable another simple step to make research reproducible. In this article, we urge the creation of minimal virtual reference environments implementing all the tools necessary to reproduce a result, as a standard part of publication. We address potential problems with this approach, and show an example environment from our own work. PMID:25433467

  11. Virtual Reference Environments: a simple way to make research reproducible.

    PubMed

    Hurley, Daniel G; Budden, David M; Crampin, Edmund J

    2015-09-01

    'Reproducible research' has received increasing attention over the past few years as bioinformatics and computational biology methodologies become more complex. Although reproducible research is progressing in several valuable ways, we suggest that recent increases in internet bandwidth and disk space, along with the availability of open-source and free-software licences for tools, enable another simple step to make research reproducible. In this article, we urge the creation of minimal virtual reference environments implementing all the tools necessary to reproduce a result, as a standard part of publication. We address potential problems with this approach, and show an example environment from our own work.

  12. Precisely Patterned Growth of Ultra-Long Single-Crystalline Organic Microwire Arrays for Near-Infrared Photodetectors.

    PubMed

    Wang, Hui; Deng, Wei; Huang, Liming; Zhang, Xiujuan; Jie, Jiansheng

    2016-03-01

    Owing to their extraordinary properties, small-molecule organic micro/nanocrystals are identified as a prospective system for constructing new-generation organic electronic and optoelectronic devices. Alignment and patterning of organic micro/nanocrystals at desired locations are prerequisites for their device applications in practice. Though various methods have been developed to control their directional growth and alignment, high-throughput precise positioning and patterning of organic micro/nanocrystals at desired locations remain a challenge. Here, we report a photoresist-assisted evaporation method for large-area growth of precisely positioned ultralong methyl-squarylium (MeSq) microwire (MW) arrays. Positions as well as alignment densities of the MWs can be precisely controlled with the aid of a photoresist template fabricated by a photolithography process. This strategy enables large-scale fabrication of organic MW arrays with nearly the same accuracy, uniformity, and reliability as photolithography. Near-infrared (NIR) photodetectors based on the MeSq MW arrays show excellent photoresponse behavior and are capable of detecting 808 nm light with high stability and reproducibility. The high on/off ratio of 1600 is significantly better than that of other organic nanostructure-based optical switches. More importantly, this strategy can be readily extended to other organic molecules, revealing the great potential of the photoresist-assisted evaporation method for future high-performance organic optoelectronic devices.

  13. Anatomical Reproducibility of a Head Model Molded by a Three-dimensional Printer.

    PubMed

    Kondo, Kosuke; Nemoto, Masaaki; Masuda, Hiroyuki; Okonogi, Shinichi; Nomoto, Jun; Harada, Naoyuki; Sugo, Nobuo; Miyazaki, Chikao

    2015-01-01

    We prepared rapid prototyping models of heads with unruptured cerebral aneurysm based on image data of computed tomography angiography (CTA) using a three-dimensional (3D) printer. The objective of this study was to evaluate the anatomical reproducibility and accuracy of these models by comparison with the CTA images on a monitor. The subjects were 22 patients with unruptured cerebral aneurysm who underwent preoperative CTA. Reproducibility of the microsurgical anatomy of skull bone and arteries, the length and thickness of the main arteries, and the size of cerebral aneurysm were compared between the CTA image and rapid prototyping model. The microsurgical anatomy and arteries were favorably reproduced, apart from a few minute regions, in the rapid prototyping models. No significant difference was noted in the measured lengths of the main arteries between the CTA image and rapid prototyping model, but errors were noted in their thickness (p < 0.001). A significant difference was also noted in the longitudinal diameter of the cerebral aneurysm (p < 0.01). Regarding the CTA image as the gold standard, reproducibility of the microsurgical anatomy of skull bone and main arteries was favorable in the rapid prototyping models prepared using a 3D printer. It was concluded that these models are useful tools for neurosurgical simulation. The thickness of the main arteries and the size of the cerebral aneurysm should be judged comprehensively, together with other neuroimaging, in consideration of these errors.

  14. Anatomical Reproducibility of a Head Model Molded by a Three-dimensional Printer

    PubMed Central

    KONDO, Kosuke; NEMOTO, Masaaki; MASUDA, Hiroyuki; OKONOGI, Shinichi; NOMOTO, Jun; HARADA, Naoyuki; SUGO, Nobuo; MIYAZAKI, Chikao

    We prepared rapid prototyping models of heads with unruptured cerebral aneurysm based on image data of computed tomography angiography (CTA) using a three-dimensional (3D) printer. The objective of this study was to evaluate the anatomical reproducibility and accuracy of these models by comparison with the CTA images on a monitor. The subjects were 22 patients with unruptured cerebral aneurysm who underwent preoperative CTA. Reproducibility of the microsurgical anatomy of skull bone and arteries, the length and thickness of the main arteries, and the size of cerebral aneurysm were compared between the CTA image and rapid prototyping model. The microsurgical anatomy and arteries were favorably reproduced, apart from a few minute regions, in the rapid prototyping models. No significant difference was noted in the measured lengths of the main arteries between the CTA image and rapid prototyping model, but errors were noted in their thickness (p < 0.001). A significant difference was also noted in the longitudinal diameter of the cerebral aneurysm (p < 0.01). Regarding the CTA image as the gold standard, reproducibility of the microsurgical anatomy of skull bone and main arteries was favorable in the rapid prototyping models prepared using a 3D printer. It was concluded that these models are useful tools for neurosurgical simulation. The thickness of the main arteries and the size of the cerebral aneurysm should be judged comprehensively, together with other neuroimaging, in consideration of these errors. PMID:26119896

  15. A new paradigm for reproducing and analyzing N-body simulations of planetary systems

    NASA Astrophysics Data System (ADS)

    Rein, Hanno; Tamayo, Daniel

    2017-01-01

    The reproducibility of experiments is one of the main principles of the scientific method. However, numerical N-body experiments, especially those of planetary systems, are currently not reproducible. In the most optimistic scenario, they can only be replicated in an approximate or statistical sense. Even if authors share their full source code and initial conditions, differences in compilers, libraries, operating systems or hardware often lead to qualitatively different results. We provide a new set of easy-to-use, open-source tools that address the above issues, allowing for exact (bit-by-bit) reproducibility of N-body experiments. In addition to generating completely reproducible integrations, we show that our framework also offers novel and innovative ways to analyze these simulations. As an example, we present a high-accuracy integration of the Solar System spanning 10 Gyrs, requiring several weeks to run on a modern CPU. In our framework we can not only easily access simulation data at predefined intervals for which we save snapshots, but at any time during the integration. We achieve this by integrating an on-demand reconstructed simulation forward in time from the nearest snapshot. This allows us to extract arbitrary quantities at any point in the saved simulation exactly (bit-by-bit), and within seconds rather than weeks. We believe that the tools we present in this paper offer a new paradigm for how N-body simulations are run, analyzed, and shared across the community.
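
    The on-demand reconstruction idea can be illustrated independently of the authors' tools: if the integrator is deterministic, re-running from the nearest saved snapshot reproduces any intermediate state bit-for-bit, not merely approximately. A toy sketch with a leapfrog-integrated harmonic oscillator (illustrative only; the `step` function and snapshot interval are made up, not the paper's code):

```python
def step(x, v, dt=0.01):
    # One deterministic leapfrog (kick-drift-kick) step for dx/dt = v, dv/dt = -x.
    v += -x * (dt / 2)
    x += v * dt
    v += -x * (dt / 2)
    return x, v

# Full run: integrate 10,000 steps, saving a snapshot every 1,000 steps.
snapshots = {}
x, v = 1.0, 0.0
for n in range(10_000):
    if n % 1000 == 0:
        snapshots[n] = (x, v)
    x, v = step(x, v)
full_state = (x, v)

# Reconstruct the final state from the nearest snapshot (step 9,000) alone:
# identical floating-point operations in identical order give identical bits.
x, v = snapshots[9000]
for _ in range(1000):
    x, v = step(x, v)

assert (x, v) == full_state  # bit-for-bit equal, not just within tolerance
```

    The same reasoning is what lets a saved archive stand in for weeks of CPU time: any quantity at any intermediate time can be regenerated exactly from the nearest snapshot in seconds.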

  16. Rapid and reproducible determination of active gibberellins in citrus tissues by UPLC/ESI-MS/MS.

    PubMed

    Manzi, Matías; Gómez-Cadenas, Aurelio; Arbona, Vicent

    2015-09-01

    Phytohormone determination is crucial for explaining the physiological mechanisms underlying growth and development; rapid and precise methods are therefore needed to achieve reproducible determination of phytohormones. Among many others, gibberellins (GAs) constitute a family of complex analytes, as most of them share similar structures and chemical properties although only a few hold biological activity (namely GA1, GA3, GA4 and GA7). A method has been developed to extract GAs from plant tissues by mechanical disruption using ultrapure water as the solvent; in this way, ion suppression was reduced and sensitivity increased. Using this methodology, the four active GAs were separated and quantified by UPLC coupled to MS/MS using the isotope-labeled internal standards [(2)H2]-GA1 and [(2)H2]-GA4. In sum, the new method provides a fast and reproducible protocol to determine bioactive GAs at low concentrations, using minimal amounts of sample and reducing the use of organic solvents.
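
    Quantification against a stable-isotope-labeled internal standard reduces, in its simplest single-point form, to a response ratio scaled by the spiked amount. A hedged sketch with hypothetical peak areas (and assuming equal response factors for analyte and labeled standard, which in practice is verified by calibration):

```python
def quantify(area_analyte, area_is, amount_is_ng):
    """Single-point isotope-dilution estimate:
    analyte amount = (analyte response / internal-standard response) * spiked amount.
    Because the labeled standard co-extracts and co-ionizes with the analyte,
    matrix losses and ion suppression largely cancel in the ratio."""
    return (area_analyte / area_is) * amount_is_ng

# Hypothetical GA4 peak areas from an extract spiked with 10 ng of [2H2]-GA4:
ga4_ng = quantify(area_analyte=5400.0, area_is=18000.0, amount_is_ng=10.0)
```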

  17. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability of implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported into inspection software. The datasets were aligned to the reference dataset by a repeated best fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model, using mean volumetric deviations to assess accuracy and standard deviations to assess precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model exhibited statistical significance (p<0.001 and p=0.020, respectively). PMID:24720423

  18. Indirect monitoring shot-to-shot shock waves strength reproducibility during pump-probe experiments

    NASA Astrophysics Data System (ADS)

    Pikuz, T. A.; Faenov, A. Ya.; Ozaki, N.; Hartley, N. J.; Albertazzi, B.; Matsuoka, T.; Takahashi, K.; Habara, H.; Tange, Y.; Matsuyama, S.; Yamauchi, K.; Ochante, R.; Sueda, K.; Sakata, O.; Sekine, T.; Sato, T.; Umeda, Y.; Inubushi, Y.; Yabuuchi, T.; Togashi, T.; Katayama, T.; Yabashi, M.; Harmand, M.; Morard, G.; Koenig, M.; Zhakhovsky, V.; Inogamov, N.; Safronova, A. S.; Stafford, A.; Skobelev, I. Yu.; Pikuz, S. A.; Okuchi, T.; Seto, Y.; Tanaka, K. A.; Ishikawa, T.; Kodama, R.

    2016-07-01

    We present an indirect method of estimating the strength of a shock wave, allowing on-line monitoring of its reproducibility in each laser shot. This method is based on a shot-to-shot measurement of the X-ray emission from the ablated plasma by a high resolution, spatially resolved focusing spectrometer. An optical pump laser with an energy of 1.0 J and a pulse duration of ~660 ps was used to irradiate solid targets or foils of various thicknesses containing oxygen, aluminum, iron, and tantalum. The high sensitivity and resolving power of the X-ray spectrometer allowed spectra to be obtained on each laser shot and fluctuations of the spectral intensity emitted by different plasmas to be controlled with an accuracy of ~2%, implying an accuracy in the derived electron plasma temperature of 5%-10% in pump-probe high energy density science experiments. At nano- and sub-nanosecond laser pulse durations with relatively low laser intensities and a ratio Z/A ~ 0.5, the electron temperature follows Te ∝ I_las^(2/3). Thus, measurements of the electron plasma temperature allow indirect estimation of the laser flux on the target and control of its shot-to-shot fluctuation. Knowing the laser flux intensity and its fluctuation gives us the possibility of monitoring the shot-to-shot reproducibility of shock wave strength generation with high accuracy.
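
    The quoted scaling lets temperature fluctuations be propagated back to flux fluctuations: for Te ∝ I^(2/3), a small relative temperature error maps to a 3/2-times-larger relative intensity error. A small sketch of that propagation (treating the 5%-10% figure as a relative error, and assuming fluctuations small enough for the linearized relation to hold):

```python
def intensity_error_from_te(te_rel_err):
    """For Te proportional to I**(2/3), dTe/Te = (2/3) dI/I,
    so the inferred relative flux error is dI/I = (3/2) dTe/Te."""
    return 1.5 * te_rel_err

# A 5-10% electron temperature accuracy implies roughly 7.5-15% accuracy
# on the laser flux inferred from it.
lo, hi = intensity_error_from_te(0.05), intensity_error_from_te(0.10)
```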

  19. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
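
    The two-phase (double) sampling correction works by regressing the expensive ground truth (phase II) on the cheap photointerpreted value (phase I) over the subsample, then applying the fitted line to the mean of the full phase I sample. A minimal sketch with made-up per-cluster error proportions (not the AVRI data):

```python
def regression_estimate(phase1_all, phase1_sub, phase2_sub):
    """Double-sampling regression estimator: the phase II subsample mean,
    shifted by the regression slope times the difference between the full
    phase I mean and the subsample's phase I mean."""
    n = len(phase1_sub)
    mx = sum(phase1_sub) / n
    my = sum(phase2_sub) / n
    sxx = sum((x - mx) ** 2 for x in phase1_sub)
    sxy = sum((x - mx) * (y - my) for x, y in zip(phase1_sub, phase2_sub))
    b = sxy / sxx                        # least-squares slope
    x_bar_all = sum(phase1_all) / len(phase1_all)
    return my + b * (x_bar_all - mx)     # corrected error estimate

# Hypothetical commission-error proportions per cluster:
photo_all  = [0.40, 0.30, 0.35, 0.45, 0.30, 0.40, 0.35, 0.25]  # phase I, all
photo_sub  = [0.40, 0.30, 0.35, 0.45]                           # phase I, subsample
ground_sub = [0.38, 0.31, 0.33, 0.42]                           # phase II ground truth
est = regression_estimate(photo_all, photo_sub, ground_sub)
```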

  20. Radiocarbon dating accuracy improved

    NASA Astrophysics Data System (ADS)

    Scientists have extended the accuracy of carbon-14 (14C) dating by correlating dates older than 8,000 years with uranium-thorium dates that span from 8,000 to 30,000 years before present (ybp, present = 1950). Edouard Bard, Bruno Hamelin, Richard Fairbanks and Alan Zindler, working at Columbia University's Lamont-Doherty Geological Observatory, dated corals from reefs off Barbados using both 14C and uranium-234/thorium-230 by thermal ionization mass spectrometry techniques. They found that the two age data sets deviated in a regular way, allowing the scientists to correlate the two sets of ages. The 14C dates were consistently younger than those determined by uranium-thorium, and the discrepancy increased to about 3,500 years at 20,000 ybp.

  1. Instrument Attitude Precision Control

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2004-01-01

    A novel approach is presented in this paper to analyze attitude precision and control for an instrument gimbaled to a spacecraft subject to an internal disturbance caused by a moving component inside the instrument. Nonlinear differential equations of motion for some sample cases are derived and solved analytically to gain insight into the influence of the disturbance on the attitude pointing error. A simple control law is developed to eliminate the instrument pointing error caused by the internal disturbance. Several cases are presented to demonstrate and verify the concept presented in this paper.

  2. Precision Robotic Assembly Machine

    ScienceCinema

    None

    2016-07-12

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  3. Precision electroweak measurements

    SciTech Connect

    Demarteau, M.

    1996-11-01

    Recent electroweak precision measurements from e+e- and p-pbar colliders are presented. Some emphasis is placed on the recent developments in the heavy flavor sector. The measurements are compared to predictions from the Standard Model of electroweak interactions. All results are found to be consistent with the Standard Model. The indirect constraint on the top quark mass from all measurements is in excellent agreement with the direct m_t measurements. Using the world's electroweak data in conjunction with the current measurement of the top quark mass, the constraints on the Higgs mass are discussed.

  4. Galvanometer deflection: a precision high-speed system.

    PubMed

    Jablonowski, D P; Raamot, J

    1976-06-01

    An X-Y galvanometer deflection system capable of high precision in a random access mode of operation is described. Beam positional information in digitized form is obtained by employing a Ronchi grating with a sophisticated optical detection scheme. This information is used in a control interface to locate the beam to the required precision. The system is characterized by high accuracy at maximum speed and is designed for operation in a variable environment, with particular attention placed on thermal insensitivity.

  5. CD-SEM precision: improved procedure and analysis

    NASA Astrophysics Data System (ADS)

    Menaker, Mina

    1999-06-01

    Accurate precision assessment becomes increasingly important as we proceed along the SIA road map, into more advanced processes and smaller critical dimensions. Accurate precision is necessary in order to determine the P/T ratio, which is used to decide whether a specific CD-SEM is valid for controlling a specific process. The customer's needs, as presented by the SEMATECH Advanced Metrology Advisory Group, are to receive a detailed precision report, in the form of a full repeatability and reproducibility (RR) analysis. The 3 sigma single-tool RR of an in-line SEM is determined in the same operational modes as used in production, and should include the effects of time and process variants on SEM performance. We hereby present an RR procedure with a modular approach which enables the user to extend the evaluation according to her/his needs. It includes direct assessment of repeatability, reproducibility and stability analysis. It also allows for a study of wafer non-homogeneity, induced process variation and the effect of measured feature type on precision. The procedure is based on the standard ISO RR procedure, and includes a modification for correct compensation for bias (the so-called measurement trend). A close examination of the repeatability and reproducibility variations provides insight into the possible sources of those variations, such as S/N ratio, the SEM autofocus mechanism, automation etc. For example, poor wafer alignment might not affect the repeatability, but severely reduce reproducibility. Therefore the analysis is a key to better understanding and improving CD-SEM performance on production layers. The procedure is fully implemented on an automated CD-SEM, providing on-line precision assessment. RR < 1 nm has been demonstrated on well defined resist and etched structures. Examples of the automatic analysis results, using the new procedure, are presented.
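
    A single-tool RR figure of the kind described combines the repeatability and reproducibility variance components in quadrature and reports the result at 3 sigma, as in a standard gauge R&R summary. A simplified sketch of that combination (the numbers are illustrative, chosen only to land in the sub-nanometer range the abstract mentions):

```python
import math

def rr_3sigma(sigma_repeat_nm, sigma_reprod_nm):
    """Combine repeatability and reproducibility standard deviations in
    quadrature (they are independent variance components) and report 3*sigma."""
    return 3.0 * math.sqrt(sigma_repeat_nm**2 + sigma_reprod_nm**2)

# Illustrative component values consistent with a demonstrated RR < 1 nm:
rr = rr_3sigma(0.20, 0.25)  # roughly 0.96 nm
```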

  6. Precision Joining Center

    SciTech Connect

    Powell, J.W.; Westphal, D.A.

    1991-08-01

    A workshop to obtain input from industry on the establishment of the Precision Joining Center (PJC) was held on July 10--12, 1991. The PJC is a center for training Joining Technologists in advanced joining techniques and concepts in order to promote the competitiveness of US industry. The center will be established as part of the DOE Defense Programs Technology Commercialization Initiative, and operated by EG&G Rocky Flats in cooperation with the American Welding Society and the Colorado School of Mines Center for Welding and Joining Research. The overall objectives of the workshop were to validate the need for a Joining Technologist to fill the gap between the welding operator and the welding engineer, and to assure that the PJC will train individuals to satisfy that need. The consensus of the workshop participants was that the Joining Technologist is a necessary position in industry, and is currently used, with some variation, by many companies. It was agreed that the PJC core curriculum, as presented, would produce a Joining Technologist of value to industries that use precision joining techniques. The advantage of the PJC would be to train the Joining Technologist much more quickly and more completely. The proposed emphasis of the PJC curriculum on equipment-intensive and hands-on training was judged to be essential.

  7. Precision flyer initiator

    DOEpatents

    Frank, A.M.; Lee, R.S.

    1998-05-26

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel from predetonating the HE pellet before the flyer. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices. 10 figs.

  8. Precision flyer initiator

    DOEpatents

    Frank, Alan M.; Lee, Ronald S.

    1998-01-01

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel from predetonating the HE pellet before the flyer. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices.

  9. An Open Science and Reproducible Research Primer for Landscape Ecologists

    EPA Science Inventory

    In recent years many funding agencies, some publishers, and even the United States government have enacted policies that encourage open science and strive for reproducibility; however, the knowledge and skills to implement open science and enable reproducible research are not yet...

  10. 10 CFR 1016.35 - Authority to reproduce Restricted Data.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Authority to reproduce Restricted Data. 1016.35 Section 1016.35 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Control of... Restricted Data may be reproduced to the minimum extent necessary consistent with efficient operation...

  11. 10 CFR 1016.35 - Authority to reproduce Restricted Data.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Authority to reproduce Restricted Data. 1016.35 Section 1016.35 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA Control of... Restricted Data may be reproduced to the minimum extent necessary consistent with efficient operation...

  12. Attenuated Total Reflection (ATR) Sampling in Infrared Spectroscopy of Heterogeneous Materials Requires Reproducible Pressure Control.

    PubMed

    Lu, Zhenyu; Cassidy, Brianna M; DeJong, Stephanie A; Belliveau, Raymond G; Myrick, Michael L; Morgan, Stephen L

    2017-01-01

    Attenuated total reflection Fourier transform infrared (ATR FT-IR) spectroscopy, in which the sample is pressed against an internal reflection element, is a popular technique for rapid IR spectral collection. However, depending on the accessory design, the pressure applied to the sample is not always well controlled. While collecting data from fabrics with heterogeneous coatings, we have observed systematic pressure-dependent changes in spectra that can be eliminated by more reproducible pressure control. We also describe a pressure sensor adapted to work with an ATR tower to enable more precise control of pressure during ATR sampling.

  13. Advanced composite materials for precision segmented reflectors

    NASA Technical Reports Server (NTRS)

    Stein, Bland A.; Bowles, David E.

    1988-01-01

    The objective of the NASA Precision Segmented Reflector (PSR) project is to develop new composite material concepts for highly stable and durable reflectors with precision surfaces. The project focuses on alternate material concepts such as the development of new low coefficient of thermal expansion resins as matrices for graphite fiber reinforced composites, quartz fiber reinforced epoxies, and graphite reinforced glass. Low residual stress fabrication methods will be developed. When coupon specimens of these new material concepts have demonstrated the required surface accuracies and resistance to thermal distortion and microcracking, reflector panels will be fabricated and tested in simulated space environments. An important part of the program is the analytical modeling of environmental stability of these new composite material concepts through constitutive equation development, modeling of microdamage in the composite matrix, and prediction of long-term stability (including viscoelasticity). These analyses include both closed form and finite element solutions at the micro and macro levels.

  14. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy

    PubMed Central

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy. PMID:25628867
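
    The precision-versus-accuracy behavior described above can be illustrated with a minimal conjugate normal-normal sketch. This is not the study's tree-mortality model, and every number below is made up: an informative prior always tightens the posterior, while accuracy depends on where the prior is centered relative to the truth.

```python
import numpy as np

# Minimal sketch, assuming a normal mean with known noise sd; conjugate
# normal-normal updating. Illustrative only, not the paper's model.
def posterior(prior_mean, prior_sd, data, noise_sd):
    """Posterior mean and sd for a normal mean with known noise sd."""
    prior_prec = 1.0 / prior_sd ** 2
    data_prec = len(data) / noise_sd ** 2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * np.mean(data))
    return post_mean, np.sqrt(post_var)

rng = np.random.default_rng(0)
truth = 0.05                                      # hypothetical mortality rate
data = rng.normal(truth, 0.02, size=10)

vague = posterior(0.0, 10.0, data, 0.02)          # effectively flat prior
informative = posterior(0.04, 0.01, data, 0.02)   # hypothetical informative prior

print(informative[1] < vague[1])                  # informative posterior is tighter
```

    Whether the informative posterior is also more accurate depends on how close the prior mean (here a hypothetical 0.04) sits to the truth, mirroring the mixed accuracy results reported above.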

  15. An open investigation of the reproducibility of cancer biology research.

    PubMed

    Errington, Timothy M; Iorns, Elizabeth; Gunn, William; Tan, Fraser Elisabeth; Lomax, Joelle; Nosek, Brian A

    2014-12-10

    It is widely believed that research that builds upon previously published findings has reproduced the original work. However, it is rare for researchers to perform or publish direct replications of existing results. The Reproducibility Project: Cancer Biology is an open investigation of reproducibility in preclinical cancer biology research. We have identified 50 high impact cancer biology articles published in the period 2010-2012, and plan to replicate a subset of experimental results from each article. A Registered Report detailing the proposed experimental designs and protocols for each subset of experiments will be peer reviewed and published prior to data collection. The results of these experiments will then be published in a Replication Study. The resulting open methodology and dataset will provide evidence about the reproducibility of high-impact results, and an opportunity to identify predictors of reproducibility.

  16. An open investigation of the reproducibility of cancer biology research

    PubMed Central

    Errington, Timothy M; Iorns, Elizabeth; Gunn, William; Tan, Fraser Elisabeth; Lomax, Joelle; Nosek, Brian A

    2014-01-01

    It is widely believed that research that builds upon previously published findings has reproduced the original work. However, it is rare for researchers to perform or publish direct replications of existing results. The Reproducibility Project: Cancer Biology is an open investigation of reproducibility in preclinical cancer biology research. We have identified 50 high impact cancer biology articles published in the period 2010-2012, and plan to replicate a subset of experimental results from each article. A Registered Report detailing the proposed experimental designs and protocols for each subset of experiments will be peer reviewed and published prior to data collection. The results of these experiments will then be published in a Replication Study. The resulting open methodology and dataset will provide evidence about the reproducibility of high-impact results, and an opportunity to identify predictors of reproducibility. DOI: http://dx.doi.org/10.7554/eLife.04333.001 PMID:25490932

  17. Individual Drusen Segmentation and Repeatability and Reproducibility of Their Automated Quantification in Optical Coherence Tomography Images

    PubMed Central

    de Sisternes, Luis; Jonna, Gowtham; Greven, Margaret A.; Chen, Qiang; Leng, Theodore; Rubin, Daniel L.

    2017-01-01

    Purpose: To introduce a novel method to segment individual drusen in spectral-domain optical coherence tomography (SD-OCT), and to evaluate its accuracy and the repeatability/reproducibility of drusen quantifications extracted from the segmentation results. Methods: Our method uses a smooth interpolation of the retinal pigment epithelium (RPE) outer boundary, fitted to candidate locations in proximity to Bruch's membrane, to identify regions of substantial lifting in the inner-RPE or inner-segment boundaries, and then separates and evaluates each druse independently. The study included 192 eyes from 129 patients. Accuracy of drusen segmentations was evaluated by measuring the overlap ratio (OR) with manual markings, also comparing the results to a previously proposed method. Repeatability and reproducibility across scanning protocols of automated drusen quantifications were investigated in repeated SD-OCT volume pairs and compared with those measured by a commercial tool (Cirrus HD-OCT). Results: Our segmentation method produced higher accuracy than a previously proposed method, showing similar differences from manual markings (0.72 ± 0.09 OR) as the measured intra- and interreader variability (0.78 ± 0.09 and 0.77 ± 0.09, respectively). The automated quantifications displayed high repeatability and reproducibility, showing more stable behavior across scanning protocols in drusen area and volume measurements than the commercial software. Measurements of drusen slope and mean intensity showed significant differences across protocols. Conclusion: Automated drusen outlines produced by our method show promising, accurate results that are relatively stable in repeated scans using the same or different scanning protocols. Translational Relevance: The proposed method represents a viable tool to measure and track drusen in early or intermediate age-related macular degeneration patients. PMID:28275527
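
    As a rough illustration of the overlap scoring used in segmentation studies like this one, an overlap ratio between an automated mask and a manual marking can be computed as intersection over union. This is a sketch with synthetic masks; the paper's exact OR definition may differ.

```python
import numpy as np

# Sketch: overlap ratio between an automated drusen mask and a manual
# marking, computed as intersection over union. Masks are synthetic.
def overlap_ratio(auto_mask, manual_mask):
    auto = np.asarray(auto_mask, dtype=bool)
    manual = np.asarray(manual_mask, dtype=bool)
    inter = np.logical_and(auto, manual).sum()
    union = np.logical_or(auto, manual).sum()
    return inter / union if union else 1.0

auto = np.zeros((8, 8), dtype=bool)
auto[2:6, 2:6] = True                          # 4x4 automated outline
manual = np.zeros((8, 8), dtype=bool)
manual[3:7, 3:7] = True                        # 4x4 manual outline, offset 1 px
print(round(overlap_ratio(auto, manual), 3))   # 9 / 23 -> 0.391
```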

  18. Visual inspection reliability for precision manufactured parts

    DOE PAGES

    See, Judi E.

    2015-09-04

    Sandia National Laboratories conducted an experiment for the National Nuclear Security Administration to determine the reliability of visual inspection of precision manufactured parts used in nuclear weapons. Visual inspection has been extensively researched since the early 20th century; however, the reliability of visual inspection for nuclear weapons parts has not been addressed. In addition, the efficacy of using inspector confidence ratings to guide multiple inspections in an effort to improve overall performance accuracy is unknown. Further, the workload associated with inspection has not been documented, and newer measures of stress have not been applied.

  19. Precise and automated microfluidic sample preparation.

    SciTech Connect

    Crocker, Robert W.; Patel, Kamlesh D.; Mosier, Bruce P.; Harnett, Cindy K.

    2004-07-01

    Autonomous bio-chemical agent detectors require sample preparation involving multiplex fluid control. We have developed a portable microfluidic pump array for metering sub-microliter volumes at flow rates of 1-100 µL/min. Each pump is composed of an electrokinetic (EK) pump and a high-voltage power supply with 15-Hz feedback from flow sensors. The combination of high pump fluid impedance and active control results in precise fluid metering with nanoliter accuracy. Automated sample preparation will be demonstrated by labeling proteins with fluorescamine and subsequent injection into a capillary gel electrophoresis (CGE) chip.
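
    The active flow control described above can be caricatured as a discrete feedback loop running at the 15-Hz sensor rate. The plant model, gains, and numbers below are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of 15-Hz closed-loop metering: a discrete PI
# controller adjusts pump drive so measured flow tracks a setpoint.
# Plant model and gains are invented, not from the paper.

DT = 1.0 / 15.0        # 15 Hz feedback interval, in seconds
KP, KI = 0.8, 20.0     # illustrative PI gains
GAIN = 0.05            # toy pump response, uL/min per volt

def simulate(setpoint_ul_min, steps=300):
    flow, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint_ul_min - flow      # flow-sensor feedback
        integral += error * DT
        volts = KP * error + KI * integral  # PI control law
        flow = GAIN * volts                 # instantaneous toy plant
    return flow

final = simulate(10.0)
print(abs(final - 10.0) < 0.1)  # settles at the 10 uL/min setpoint
```

    The integral term is what lets the loop hold the setpoint despite the pump's fluid impedance; in the real device the nanoliter accuracy quoted above comes from this combination of high impedance and active control.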

  20. The GBT precision telescope control system

    NASA Astrophysics Data System (ADS)

    Prestage, Richard M.; Constantikes, Kim T.; Balser, Dana S.; Condon, James J.

    2004-10-01

    The NRAO Robert C. Byrd Green Bank Telescope (GBT) is a 100m diameter advanced single dish radio telescope designed for a wide range of astronomical projects with special emphasis on precision imaging. Open-loop adjustments of the active surface, and real-time corrections to pointing and focus on the basis of structural temperatures already allow observations at frequencies up to 50GHz. Our ultimate goal is to extend the observing frequency limit up to 115GHz; this will require a two-dimensional tracking error better than 1.3", and an rms surface accuracy better than 210μm. The Precision Telescope Control System project has two main components. One aspect is the continued deployment of appropriate metrology systems, including temperature sensors, inclinometers, laser rangefinders and other devices. An improved control system architecture will harness this measurement capability with the existing servo systems, to deliver the precision operation required. The second aspect is the execution of a series of experiments to identify, understand and correct the residual pointing and surface accuracy errors. These can have multiple causes, many of which depend on variable environmental conditions. A particularly novel approach is to solve simultaneously for gravitational, thermal and wind effects in the development of the telescope pointing and focus tracking models. Our precision temperature sensor system has already allowed us to compensate for thermal gradients in the antenna, which were previously responsible for the largest "non-repeatable" pointing and focus tracking errors. We are currently targeting the effects of wind as the next, currently uncompensated, source of error.

  1. Reproducibility of Fluorescent Expression from Engineered Biological Constructs in E. coli

    PubMed Central

    Beal, Jacob; Haddock-Angelli, Traci; Gershater, Markus; de Mora, Kim; Lizarazo, Meagan; Hollenhorst, Jim; Rettberg, Randy

    2016-01-01

    We present results of the first large-scale interlaboratory study carried out in synthetic biology, as part of the 2014 and 2015 International Genetically Engineered Machine (iGEM) competitions. Participants at 88 institutions around the world measured fluorescence from three engineered constitutive constructs in E. coli. Few participants were able to measure absolute fluorescence, so data was analyzed in terms of ratios. Precision was strongly related to fluorescent strength, ranging from 1.54-fold standard deviation for the ratio between strong promoters to 5.75-fold for the ratio between the strongest and weakest promoter, and while host strain did not affect expression ratios, choice of instrument did. This result shows that high quantitative precision and reproducibility of results is possible, while at the same time indicating areas needing improved laboratory practices. PMID:26937966
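
    An "x-fold standard deviation" for ratio data can be read as a geometric standard deviation, computed on log-ratios. The sketch below uses made-up ratio measurements; both this reading of the metric and the sample values are assumptions for illustration.

```python
import math

# Sketch: geometric standard deviation of ratio data, computed on
# log-ratios. The interpretation and the sample are assumptions.
def geometric_sd(ratios):
    logs = [math.log(r) for r in ratios]
    mean = sum(logs) / len(logs)
    var = sum((x - mean) ** 2 for x in logs) / (len(logs) - 1)
    return math.exp(math.sqrt(var))        # x-fold spread around geometric mean

sample = [1.0, 1.3, 0.8, 1.6, 0.9, 1.1]    # made-up fluorescence ratios
print(round(geometric_sd(sample), 2))      # -> 1.29
```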

  2. Reproducibility and calibration of MMC-based high-resolution gamma detectors

    SciTech Connect

    Bates, C. R.; Pies, C.; Kempf, S.; Hengstler, D.; Fleischmann, A.; Gastaldo, L.; Enss, C.; Friedrich, S.

    2016-07-15

    Here, we describe a prototype γ-ray detector based on a metallic magnetic calorimeter with an energy resolution of 46 eV at 60 keV and a reproducible response function that follows a simple second-order polynomial. The simple detector calibration allows adding high-resolution spectra from different pixels and different cool-downs without loss in energy resolution to determine γ-ray centroids with high accuracy. As an example of an application in nuclear safeguards enabled by such a γ-ray detector, we discuss the non-destructive assay of 242Pu in a mixed-isotope Pu sample.
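
    The calibration scheme described, a second-order polynomial response, amounts to fitting a quadratic that maps detector channel to energy for each pixel, so spectra from different pixels and cool-downs can be summed on a common energy axis. The coefficients and channel values below are synthetic, for illustration only.

```python
import numpy as np

# Sketch: fit E(x) = c2*x^2 + c1*x + c0 from known line positions,
# then map channels to energy. Coefficients and channels are synthetic.
true_coeff = np.array([1e-8, 9.8e-3, 0.1])       # c2, c1, c0 (made up)
channels = np.array([1400.0, 1800.0, 2650.0, 6050.0])
lines_kev = np.polyval(true_coeff, channels)     # pretend these are known lines

fit = np.polyfit(channels, lines_kev, deg=2)     # per-pixel calibration
print(np.allclose(np.polyval(fit, channels), lines_kev))
```

    With each pixel calibrated this way, channel spectra can be histogrammed in energy and added without degrading the combined resolution, which is what enables the high-accuracy centroid determination described above.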

  3. Reproducibility and calibration of MMC-based high-resolution gamma detectors

    NASA Astrophysics Data System (ADS)

    Bates, C. R.; Pies, C.; Kempf, S.; Hengstler, D.; Fleischmann, A.; Gastaldo, L.; Enss, C.; Friedrich, S.

    2016-07-01

    We describe a prototype γ-ray detector based on a metallic magnetic calorimeter with an energy resolution of 46 eV at 60 keV and a reproducible response function that follows a simple second-order polynomial. The simple detector calibration allows adding high-resolution spectra from different pixels and different cool-downs without loss in energy resolution to determine γ-ray centroids with high accuracy. As an example of an application in nuclear safeguards enabled by such a γ-ray detector, we discuss the non-destructive assay of 242Pu in a mixed-isotope Pu sample.

  4. Reproducibility and calibration of MMC-based high-resolution gamma detectors

    DOE PAGES

    Bates, C. R.; Pies, C.; Kempf, S.; ...

    2016-07-15

    Here, we describe a prototype γ-ray detector based on a metallic magnetic calorimeter with an energy resolution of 46 eV at 60 keV and a reproducible response function that follows a simple second-order polynomial. The simple detector calibration allows adding high-resolution spectra from different pixels and different cool-downs without loss in energy resolution to determine γ-ray centroids with high accuracy. As an example of an application in nuclear safeguards enabled by such a γ-ray detector, we discuss the non-destructive assay of 242Pu in a mixed-isotope Pu sample.

  5. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions miss an important point: reticence is a matter not only of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate-related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  6. Groves model accuracy study

    NASA Astrophysics Data System (ADS)

    Peterson, Matthew C.

    1991-08-01

    The United States Air Force Environmental Technical Applications Center (USAFETAC) was tasked to review the scientific literature for studies of the Groves Neutral Density Climatology Model and compare the Groves Model with others in the 30-60 km range. The tasking included a request to investigate the merits of comparing accuracy of the Groves Model to rocketsonde data. USAFETAC analysts found the Groves Model to be state of the art for middle-atmospheric climatological models. In reviewing previous comparisons with other models and with space shuttle-derived atmospheric densities, good density vs altitude agreement was found in almost all cases. A simple technique involving comparison of the model with range reference atmospheres was found to be the most economical way to compare the Groves Model with rocketsonde data; an example of this type is provided. The Groves 85 Model is used routinely in USAFETAC's Improved Point Analysis Model (IPAM). To create this model, Dr. Gerald Vann Groves produced tabulations of atmospheric density based on data derived from satellite observations and modified by rocketsonde observations. Neutral Density as presented here refers to the monthly mean density in 10-degree latitude bands as a function of altitude. The Groves 85 Model zonal mean density tabulations are given in their entirety.

  7. Precision Medicine in Cancer Treatment

    Cancer.gov

    Precision medicine helps doctors select cancer treatments that are most likely to help patients based on a genetic understanding of their disease. Learn about the promise of precision medicine and the role it plays in cancer treatment.

  8. Precision Joining Center

    NASA Technical Reports Server (NTRS)

    Powell, John W.

    1991-01-01

    The establishment of a Precision Joining Center (PJC) is proposed. The PJC will be a cooperatively operated center with participation from U.S. private industry, the Colorado School of Mines, and various government agencies, including the Department of Energy's Nuclear Weapons Complex (NWC). The PJC's primary mission will be as a training center for advanced joining technologies. This will accomplish the following objectives: (1) it will provide an effective mechanism to transfer joining technology from the NWC to private industry; (2) it will provide a center for testing new joining processes for the NWC and private industry; and (3) it will provide highly trained personnel to support advance joining processes for the NWC and private industry.

  9. Truss Assembly and Welding by Intelligent Precision Jigging Robots

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2014-01-01

    This paper describes an Intelligent Precision Jigging Robot (IPJR) prototype that enables the precise alignment and welding of titanium space telescope optical benches. The IPJR, equipped with micron-accuracy sensors and actuators, worked in tandem with a lower precision remote-controlled manipulator. The combined system assembled and welded a 2 m truss from stock titanium components. The calibration of the IPJR, and the differences between the predicted and as-built truss dimensions, identified additional sources of error that should be addressed in the next generation of IPJRs in 2D and 3D.

  10. Precision spectroscopy of hydrogen and femtosecond laser frequency combs.

    PubMed

    Hänsch, T W; Alnis, J; Fendel, P; Fischer, M; Gohle, C; Herrmann, M; Holzwarth, R; Kolachevsky, N; Udem, Th; Zimmermann, M

    2005-09-15

    Precision spectroscopy of the simple hydrogen atom has inspired dramatic advances in optical frequency metrology: femtosecond laser optical frequency comb synthesizers have revolutionized the precise measurement of optical frequencies, and they provide a reliable clock mechanism for optical atomic clocks. Precision spectroscopy of the hydrogen 1S-2S two-photon resonance has reached an accuracy of 1.4 parts in 10^14, and considerable future improvements are envisioned. Such laboratory experiments are setting new limits for possible slow variations of the fine structure constant alpha and the magnetic moment of the caesium nucleus mu_Cs in units of the Bohr magneton mu_B.

  11. Precision Spectroscopy of Tellurium

    NASA Astrophysics Data System (ADS)

    Coker, J.; Furneaux, J. E.

    2013-06-01

    Tellurium (Te_2) is widely used as a frequency reference, largely because it has an optical transition roughly every 2-3 GHz throughout a large portion of the visible spectrum. Although a standard atlas encompassing over 5200 cm^{-1} already exists [1], Doppler broadening present in that work buries a significant portion of the features [2]. More recent studies of Te_2 exist which do not exhibit Doppler broadening, such as Refs. [3-5], and each covers different parts of the spectrum. This work adds to that knowledge a few hundred transitions in the vicinity of 444 nm, measured with high precision in order to improve measurement of the spectroscopic constants of Te_2's excited states. Using a Fabry-Perot cavity in a shock-absorbing, temperature- and pressure-regulated chamber, locked to a Zeeman-stabilized HeNe laser, we measure changes in frequency of our diode laser to ~1 MHz precision. This diode laser is scanned over 1000 GHz for use in a saturated-absorption spectroscopy cell filled with Te_2 vapor. Details of the cavity and its short- and long-term stability are discussed, as well as spectroscopic properties of Te_2. References: J. Cariou and P. Luc, Atlas du spectre d'absorption de la molecule de tellure, Laboratoire Aime-Cotton (1980). J. Coker et al., J. Opt. Soc. Am. B 28, 2934 (2011). J. Verges et al., Physica Scripta 25, 338 (1982). Ph. Courteille et al., Appl. Phys. B 59, 187 (1994). T.J. Scholl et al., J. Opt. Soc. Am. B 22, 1128 (2005).

  12. Reproducibility of Frankfort Horizontal Plane on 3D Multi-Planar Reconstructed MR Images

    PubMed Central

    Daboul, Amro; Schwahn, Christian; Schaffner, Grit; Soehnel, Silvia; Samietz, Stefanie; Aljaghsi, Ahmad; Habes, Mohammad; Hegenscheid, Katrin; Puls, Ralf; Klinke, Thomas; Biffar, Reiner

    2012-01-01

    Objective: The purpose of this study was to determine the accuracy and reliability of Frankfort horizontal plane identification using displays of multi-planar reconstructed MRI images, and to propose it as a sufficiently stable and standardized reference plane for craniofacial structures. Materials and Methods: MRI images of 43 subjects were obtained from the longitudinal population-based cohort study SHIP-2 using a T1-weighted 3D sequence. Five examiners independently identified the three landmarks that form the FH plane. Intra-examiner reproducibility, inter-examiner reliability, intraclass correlation coefficients (ICC), coefficients of variability, and Bland-Altman plots were obtained for all landmark coordinates to assess reproducibility. Intra-examiner reproducibility and inter-examiner reliability in terms of location and plane angulation were also assessed. Results: Intra- and inter-examiner reliabilities for the X, Y, and Z coordinates of all three landmarks were excellent, with ICC values ranging from 0.914 to 0.998. Differences among examiners were greater in the X and Z dimensions than in Y. The Bland-Altman analysis demonstrated excellent intra- as well as inter-examiner agreement in all coordinates for all landmarks. Intra-examiner reproducibility and inter-examiner reliability of the three landmarks in terms of distance showed mean differences between 1.3 and 2.9 mm; mean differences in plane angulation were between 1.0° and 1.5° among examiners. Conclusion: This study revealed excellent intra-examiner reproducibility and inter-examiner reliability of the Frankfort horizontal plane through 3D landmark identification in MRI. This sufficiently stable landmark-based reference plane could be used for different treatments and studies. PMID:23118970

  13. Validity and reproducibility of different combinations of methods for occlusal caries detection: an in vitro comparison.

    PubMed

    Souza-Zaroni, W C; Ciccone, J C; Souza-Gabriel, A E; Ramos, R P; Corona, S A M; Palma-Dibb, R G

    2006-01-01

    This study assessed the validity and reproducibility of different combinations of occlusal caries detection methods: visual examination (VE), laser fluorescence (LF) and radiographic examination (RE). Intra- and interexaminer reproducibilities were also assessed. Forty-seven extracted human molars were used, and 121 sites, with or without suspected carious lesions, were chosen. Occlusal surfaces were examined by 8 volunteers, assigned to three groups according to their level of knowledge and clinical experience in dental practice: group I, undergraduate students; group II, postgraduate students; group III, professors. Three combinations of methods were tested: A: VE+LF, B: VE+RE, C: VE+LF+RE. The examiners scored the sites using ranking scales and chose a final score based on their clinical experience. The gold standard was determined by histological examination of the sites. In general, LF and RE alone yielded poorer results than the combinations of methods. For combination A, group III showed the highest sensitivity, while group II showed the highest specificity. For combination B, group II showed moderate sensitivity whereas groups I and III exhibited low sensitivities; all groups of examiners reached substantial specificity. For combination C, all groups exhibited moderate sensitivity and substantial specificity. Interexaminer reproducibility ranged from fair to moderate for combinations A and C, while for combination B kappa values indicated moderate interexaminer reproducibility. It may be concluded that the individual examinations performed worse than their combinations. Combination C (VE+LF+RE) resulted in the best accuracy for all groups. The knowledge background of the examiners influenced their ability to detect carious lesions and affected interexaminer reproducibility.
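
    The validity and agreement statistics used in studies like this one, sensitivity and specificity against a histological gold standard and Cohen's kappa between examiners, can be sketched with made-up binary scores:

```python
# Sketch with made-up binary caries scores: sensitivity/specificity
# against a histological gold standard, and Cohen's kappa between two
# hypothetical examiners. All data are illustrative only.

def sensitivity_specificity(truth, scores):
    tp = sum(1 for t, s in zip(truth, scores) if t and s)
    tn = sum(1 for t, s in zip(truth, scores) if not t and not s)
    fn = sum(1 for t, s in zip(truth, scores) if t and not s)
    fp = sum(1 for t, s in zip(truth, scores) if not t and s)
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a, b):
    n = len(a)
    po = sum(1 for x, y in zip(a, b) if x == y) / n   # observed agreement
    pa, pb = sum(a) / n, sum(b) / n
    pe = pa * pb + (1 - pa) * (1 - pb)                # chance agreement
    return (po - pe) / (1 - pe)

truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # histology (1 = carious)
exam1 = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
exam2 = [1, 1, 0, 0, 0, 0, 0, 1, 0, 0]

se, sp = sensitivity_specificity(truth, exam1)
print(round(se, 2), round(sp, 2))            # 0.75 0.83
print(round(cohens_kappa(exam1, exam2), 2))  # 0.78
```

    Kappa in the 0.78 range would be read as substantial agreement on the usual Landis-Koch scale, comparable to the interexaminer reproducibility categories quoted above.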

  14. Mathematics for modern precision engineering.

    PubMed

    Scott, Paul J; Forbes, Alistair B

    2012-08-28

    The aim of precision engineering is the accurate control of geometry. For this reason, mathematics has a long association with precision engineering: from the calculation and correction of angular scales used in surveying and astronomical instrumentation to statistical averaging techniques used to increase precision. This study illustrates the enabling role the mathematical sciences are playing in precision engineering: modelling physical processes, instruments and complex geometries, statistical characterization of metrology systems and error compensation.

  15. Micromechanical silicon precision scale

    NASA Astrophysics Data System (ADS)

    Oja, Aarne S.; Sillanpaa, Teuvo; Seppae, H.; Kiihamaki, Jyrki; Seppala, P.; Karttunen, Jani; Riski, Kari

    2000-04-01

    A micromachined capacitive silicon scale has been designed and fabricated. It is intended for weighing masses on the order of 1 g at a resolution of about 1 ppm and below. The device consists of a micromachined SOI chip which is anodically bonded to a glass chip. The flexible electrode is formed in the SOI device layer. The other electrode is metallized on the glass and is divided into three sections. The sections are used for detecting tilting of the top electrode due to a possible off-centering of the mass load. The measuring circuit implements electrostatic force feedback and keeps the top electrode at a constant horizontal position irrespective of its mass loading. First measurements have demonstrated stability allowing measurement of 1 g masses at an accuracy of 2-3 ppm.

  16. Precision laser automatic tracking system.

    PubMed

    Lucy, R F; Peters, C J; McGann, E J; Lang, K T

    1966-04-01

    A precision laser tracker has been constructed and tested that is capable of tracking a low-acceleration target to an accuracy of about 25 microrad root mean square. In tracking high-acceleration targets, the error is directly proportional to the angular acceleration. For an angular acceleration of 0.6 rad/sec^2, the measured tracking error was about 0.1 mrad. The basic components of this tracker, similar in configuration to a heliostat, are a laser and an image dissector, which are mounted on a stationary frame, and a servocontrolled tracking mirror. The daytime sensitivity of this system is approximately 3 x 10^-10 W/m^2; the ultimate nighttime sensitivity is approximately 3 x 10^-14 W/m^2. Experimental tests were performed to evaluate both the dynamic characteristics of this system and the system sensitivity. Dynamic performance was obtained using a small rocket covered with retroreflective material, launched at an acceleration of about 13 g at a point 204 m from the tracker. The daytime sensitivity of the system was checked using an efficient retroreflector mounted on a light aircraft; this aircraft was tracked out to a maximum range of 15 km, which confirmed the daytime sensitivity measured by other means. The system has also been used to passively track stars and the Echo I satellite, including a +7.5 magnitude star, and the signal-to-noise ratio in that experiment indicates that it should be possible to track a +12.5 magnitude star.

  17. Precision cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Fendt, William Ashton, Jr.

    2009-09-01

    Experimental efforts of the last few decades have brought a golden age to mankind's endeavor to understand the physical properties of the Universe throughout its history. Recent measurements of the cosmic microwave background (CMB) provide strong confirmation of the standard big bang paradigm, as well as introducing new mysteries still unexplained by current physical models. In the following decades, even more ambitious scientific endeavors will begin to shed light on the new physics by looking at the detailed structure of the Universe at both very early and recent times. Modern data have allowed us to begin to test inflationary models of the early Universe, and the near future will bring higher precision data and much stronger tests. Cracking the codes hidden in these cosmological observables is a difficult and computationally intensive problem. The challenges will continue to increase as future experiments bring larger and more precise data sets. Because of the complexity of the problem, we are forced to use approximate techniques and make simplifying assumptions to ease the computational workload. While this has been reasonably sufficient until now, hints of the limitations of our techniques have begun to come to light. For example, the likelihood approximation used for analysis of CMB data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite was shown to have shortfalls, leading to premature conclusions about current cosmological theories. It can also be shown that an approximate method used by all current analysis codes to describe the recombination history of the Universe will not be sufficiently accurate for future experiments. With a new CMB satellite scheduled for launch in the coming months, it is vital that we develop techniques to improve the analysis of cosmological data. 
This work develops a novel technique for both avoiding the use of approximate computational codes and allowing the application of new, more precise analysis

  18. ChronRater: A simple approach to assessing the accuracy of age models from Holocene sediment cores

    NASA Astrophysics Data System (ADS)

    Kaufman, D. S.; Balascio, N. L.; McKay, N. P.; Sundqvist, H. S.

    2013-12-01

    We have assembled a database of previously published Holocene proxy climate records from the Arctic, with the goal of reconstructing the spatial-temporal pattern of past climate changes. The focus is on well-dated, highly resolved, continuous records that extend to at least 6 ka, most of which (90%) are from sedimentary sequences sampled in cores from lakes and oceans. The database includes the original geochronological data (radiocarbon ages) for each record so that the accuracy of the underlying age models can be assessed uniformly. Determining the accuracy of age control for sedimentary sequences is difficult because it depends on many factors, some of which are hard to quantify. Nevertheless, the geochronological accuracy of each time series in the database must be assessed systematically to objectively identify those that are appropriate to address a particular level of temporal inquiry. We have therefore devised a scoring scheme to rate the accuracy of age models that focuses on the most important factors and uses just the most commonly published information to determine the overall geochronological accuracy. The algorithm, "ChronRater," is written in the open-source statistical package R. It relies on three characteristics of dated materials and their downcore trends: (1) the delineation of the downcore trend, which is quantified based on three attributes, namely: (a) the frequency of ages, (b) the regularity of their spacing, and (c) the uniformity of the sedimentation rate; (2) the quality of the dated materials, as determined by: (a) the proportion of outliers and downcore reversals, and (b) the type of materials analyzed and the extent to which their ages are verified by independent information, as judged by a five-point scale for the entire sequence of ages; and (3) the overall uncertainty in the calibrated ages, which includes the analytical precision and the associated calibrated age ranges. Although our geochronological accuracy score is
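ChronRater itself is written in R and its published weighting is not reproduced here. Purely as an illustration of how the three characteristics above might be combined into a single 0-1 score, here is a sketch in Python; every term name, weight, and threshold below is an assumption, not the actual algorithm:

```python
# Hypothetical sketch of a chronology-quality score in the spirit of ChronRater.
# All weights and targets below are illustrative assumptions.

def age_model_score(n_ages, span_ka, outlier_frac, material_quality, cal_precision_yr):
    """Combine age-frequency, outlier, material, and precision terms into a 0-1 score."""
    freq = min(n_ages / span_ka / 2.0, 1.0)         # assumed target: ~2 ages per ka
    outliers = max(1.0 - 2.0 * outlier_frac, 0.0)   # penalize outliers and reversals
    material = material_quality / 5.0               # five-point material-quality scale
    precision = min(200.0 / cal_precision_yr, 1.0)  # reward tight calibrated ranges
    return (freq + outliers + material + precision) / 4.0

# a hypothetical core: 12 ages over 8 ka, 10% outliers, good materials, +/-100 yr
score = age_model_score(n_ages=12, span_ka=8, outlier_frac=0.1,
                        material_quality=4, cal_precision_yr=100)
```

The equal weighting of the four terms is arbitrary; the real tool presumably weights the factors according to their demonstrated effect on age-model accuracy.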

  19. Reproducibility study of fetal 3-D volumetry in the first trimester: effect of fetal size and rotational angle of VOCAL software.

    PubMed

    Papastefanou, Ioannis; Kappou, Dimitra; Souka, Athena P; Pilalis, Athanasios; Chrelias, Charalambos; Siristatidis, Charalambos; Kassanos, Dimitrios

    2014-05-01

    Intra- and inter-observer reproducibility of fetal volume measurement by 3-D ultrasound scan (using VOCAL [Virtual Organ Computer-Aided Analysis] software) in 27 fetuses at 7 to 13 wk was studied. For intra-observer variability, the mean difference (MD) and 95% limits of agreement (95% LOA) at 12°, 18° and 30° were MD(12) = 0.097, 95% LOA(12) = -0.87 to +1.06; MD(18) = 0.07, 95% LOA(18) = -1.31 to +1.45; and MD(30) = -0.07, 95% LOA(30) = -1.55 to +1.41. The standard deviation of the differences (SD(DIF)) increased with crown-rump length at 12° (p = 0.0016), 18° (p = 0.0011) and 30° (p = 0.02). For inter-observer variability, MD(12) = 0.15, 95% LOA(12) = -1.65 to +1.95; MD(18) = 0.042, 95% LOA(18) = -1.79 to +1.87; and MD(30) = 0.19, 95% LOA(30) = -1.24 to +1.62. SD(DIF) increased with crown-rump length at 18° (p = 0.0084) and 30° (p = 0.0073). The accuracy of fetal volume measurement was not influenced by rotational angle or fetal size. Precision deteriorated for wider rotational angles and larger fetuses.
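The MD and 95% LOA figures above are standard Bland-Altman agreement statistics: the mean of the paired differences, plus or minus 1.96 standard deviations of those differences. A minimal sketch, using invented paired volumes rather than the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (MD) and 95% limits of agreement between paired measurements."""
    d = np.asarray(a, float) - np.asarray(b, float)
    md = d.mean()
    sd = d.std(ddof=1)  # standard deviation of the differences, SD(DIF)
    return md, md - 1.96 * sd, md + 1.96 * sd

# hypothetical fetal volumes (cm^3) measured twice by the same observer
v1 = [1.2, 3.4, 5.1, 8.0, 12.3]
v2 = [1.1, 3.6, 5.0, 8.4, 12.0]
md, lo, hi = bland_altman(v1, v2)
```

Regressing the differences (or SD(DIF)) on crown-rump length, as the study does, then tests whether precision degrades with fetal size.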

  20. High Accuracy Wavelength Calibration For A Scanning Visible Spectrometer

    SciTech Connect

    Filippo Scotti and Ronald Bell

    2010-07-29

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤ 0.2 Å. An automated calibration for a scanning spectrometer has been developed to achieve high wavelength accuracy over the visible spectrum, stable over time and environmental conditions, without the need to recalibrate after each grating movement. The method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor-controlled sine drive, accuracies of ~0.025 Å have been demonstrated. With the addition of a high-resolution (0.075 arcsec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements to ~0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.

  1. Precision Astronomy with Imperfect Deep Depletion CCDs

    NASA Astrophysics Data System (ADS)

    Stubbs, Christopher; LSST Sensor Team; PanSTARRS Team

    2014-01-01

    While thick CCDs do provide definite advantages in terms of increased quantum efficiency at wavelengths 700 nm < λ < 1.1 μm and reduced fringing from atmospheric emission lines, these devices also exhibit undesirable features that pose a challenge to precision determination of the positions, fluxes, and shapes of astronomical objects, and to the precision extraction of features in astronomical spectra. For example, the assumptions of a perfectly rectilinear pixel grid and of an intensity-independent point spread function become increasingly invalid as we push to higher precision measurements. Many of the effects seen in these devices arise from lateral electric fields within the detector that produce charge transport anomalies, which have previously been misinterpreted as quantum efficiency variations. Performing simplistic flat-fielding therefore introduces systematic errors in the image processing pipeline. One measurement challenge we face is devising a combination of calibration methods and algorithms that can distinguish genuine quantum efficiency variations from charge transport effects. These device imperfections also confront spectroscopic applications, such as line centroid determination for precision radial velocity studies. Given the scientific benefits of improving both the precision and accuracy of astronomical measurements, we need to identify, characterize, and overcome these various detector artifacts. In retrospect, many of the detector features first identified in thick CCDs also afflict measurements made with more traditional CCD detectors, albeit often at a reduced level, since the photocharge is subject to the perturbing influence of lateral electric fields for a shorter time interval. I provide a qualitative overview of the physical effects we think are responsible for the observed device properties, and offer some perspective on the work that lies ahead.

  2. EVALUATION OF METRIC PRECISION FOR A RIPARIAN FOREST SURVEY

    EPA Science Inventory

    This paper evaluates the performance of a protocol to monitor riparian forests in western Oregon based on the quality of the data obtained from a recent field survey. Precision and accuracy are the criteria used to determine the quality of 19 field metrics. The field survey con...

  3. Reproducible, high-throughput synthesis of colloidal nanocrystals for optimization in multidimensional parameter space.

    PubMed

    Chan, Emory M; Xu, Chenxu; Mao, Alvin W; Han, Gang; Owen, Jonathan S; Cohen, Bruce E; Milliron, Delia J

    2010-05-12

    While colloidal nanocrystals hold tremendous potential for both enhancing fundamental understanding of materials scaling and enabling advanced technologies, progress in both realms can be inhibited by the limited reproducibility of traditional synthetic methods and by the difficulty of optimizing syntheses over a large number of synthetic parameters. Here, we describe an automated platform for the reproducible synthesis of colloidal nanocrystals and for the high-throughput optimization of physical properties relevant to emerging applications of nanomaterials. This robotic platform enables precise control over reaction conditions while performing workflows analogous to those of traditional flask syntheses. We demonstrate control over the size, size distribution, kinetics, and concentration of reactions by synthesizing CdSe nanocrystals with 0.2% coefficient of variation in the mean diameters across an array of batch reactors and over multiple runs. Leveraging this precise control along with high-throughput optical and diffraction characterization, we effectively map multidimensional parameter space to tune the size and polydispersity of CdSe nanocrystals, to maximize the photoluminescence efficiency of CdTe nanocrystals, and to control the crystal phase and maximize the upconverted luminescence of lanthanide-doped NaYF(4) nanocrystals. On the basis of these demonstrative examples, we conclude that this automated synthesis approach will be of great utility for the development of diverse colloidal nanomaterials for electronic assemblies, luminescent biological labels, electroluminescent devices, and other emerging applications.
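The quoted 0.2% coefficient of variation in mean diameter is the sample standard deviation of the per-reactor mean diameters divided by their overall mean; a minimal sketch with invented diameters (not the study's data):

```python
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean, expressed as a percentage."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# hypothetical mean CdSe nanocrystal diameters (nm) from parallel batch reactors
diameters = [3.50, 3.51, 3.49, 3.50, 3.50]
cv = coefficient_of_variation(diameters)  # ~0.2 %
```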

  4. The complex variable reproducing kernel particle method for the analysis of Kirchhoff plates

    NASA Astrophysics Data System (ADS)

    Chen, L.; Cheng, Y. M.; Ma, H. P.

    2015-03-01

    In this paper, the complex variable reproducing kernel particle method (CVRKPM) for the bending problem of arbitrary Kirchhoff plates is presented. The advantage of the CVRKPM is that the shape function of a two-dimensional problem is obtained using a one-dimensional basis function. The CVRKPM is used to form the approximation function of the deflection of a Kirchhoff plate, the Galerkin weak form of the bending problem of Kirchhoff plates is adopted to obtain the discretized system equations, and the penalty method is employed to enforce the essential boundary conditions; the corresponding formulae of the CVRKPM for the bending problem of Kirchhoff plates are then presented in detail. Several numerical examples of Kirchhoff plates with different geometries and loads are given to demonstrate that the CVRKPM has higher computational precision and efficiency than the reproducing kernel particle method under the same node distribution. The influences of the basis function, weight function, scaling factor, node distribution and penalty factor on the computational precision of the CVRKPM are also discussed.

  5. Precise measurement of planeness.

    PubMed

    Schulz, G; Schwider, J

    1967-06-01

    Interference methods are reviewed, particularly those developed at the German Academy of Sciences in Berlin, with which the deviations of an optically flat surface from the ideal plane can be measured with a high degree of exactness. One aid to achieving this is the relative methods, which measure the differences in planeness between two surfaces. These are then used in the absolute methods, which determine the absolute planeness of a surface. This absolute determination can be effected in connection with a liquid surface, or (as done by the authors) only by suitable evaluation of relative measurements between unknown plates in various positional combinations. Experimentally, one uses two- or multiple-beam interference fringes of equal thickness(1) or of equal inclination. The fringes are observed visually, scanned, or photographed, and in part several wavelengths or curves of equal density (Aquidensiten) are employed. The survey also presents the following new methods: a relative method where, with the aid of fringes of superposition, the fringe separation is subdivided equidistantly, thus increasing the measuring precision; and an absolute method which determines the deviations of a surface from ideal planeness along arbitrary central sections, without a liquid surface, from four relative interference photographs.

  6. Prompt and Precise Prototyping

    NASA Technical Reports Server (NTRS)

    2003-01-01

    For Sanders Design International, Inc., of Wilton, New Hampshire, every passing second between the concept and realization of a product is essential to succeed in the rapid prototyping industry where amongst heavy competition, faster time-to-market means more business. To separate itself from its rivals, Sanders Design aligned with NASA's Marshall Space Flight Center to develop what it considers to be the most accurate rapid prototyping machine for fabrication of extremely precise tooling prototypes. The company's Rapid ToolMaker System has revolutionized production of high quality, small-to-medium sized prototype patterns and tooling molds with an exactness that surpasses that of computer numerically-controlled (CNC) machining devices. Created with funding and support from Marshall under a Small Business Innovation Research (SBIR) contract, the Rapid ToolMaker is a dual-use technology with applications in both commercial and military aerospace fields. The advanced technology provides cost savings in the design and manufacturing of automotive, electronic, and medical parts, as well as in other areas of consumer interest, such as jewelry and toys. For aerospace applications, the Rapid ToolMaker enables fabrication of high-quality turbine and compressor blades for jet engines on unmanned air vehicles, aircraft, and missiles.

  7. Soviet precision timekeeping research and technology

    SciTech Connect

    Vessot, R.F.C.; Allan, D.W.; Crampton, S.J.B.; Cutler, L.S.; Kern, R.H.; McCoubrey, A.O.; White, J.D.

    1991-08-01

    This report is the result of a study of Soviet progress in precision timekeeping research and timekeeping capability during the last two decades. The study was conducted by a panel of seven US scientists who have expertise in timekeeping, frequency control, time dissemination, and the direct applications of these disciplines to scientific investigation. The following topics are addressed in this report: generation of time by atomic clocks at the present level of their technology, new and emerging technologies related to atomic clocks, time and frequency transfer technology, statistical processes involving metrological applications of time and frequency, applications of precise time and frequency to scientific investigations, supporting timekeeping technology, and a comparison of Soviet research efforts with those of the United States and the West. The number of Soviet professionals working in this field is roughly 10 times that in the United States. The Soviet Union has facilities for large-scale production of frequency standards and has concentrated its efforts on developing and producing rubidium gas cell devices (relatively compact, low-cost frequency standards of modest accuracy and stability) and atomic hydrogen masers (relatively large, high-cost standards of modest accuracy and high stability). 203 refs., 45 figs., 9 tabs.

  8. Glass ceramic ZERODUR enabling nanometer precision

    NASA Astrophysics Data System (ADS)

    Jedamzik, Ralf; Kunisch, Clemens; Nieder, Johannes; Westerhoff, Thomas

    2014-03-01

    The IC lithography roadmap foresees manufacturing of devices with critical dimensions < 20 nm. Overlay specifications of single-digit nanometers demand nanometer positioning accuracy, which in turn requires sub-nanometer position measurement accuracy. The glass ceramic ZERODUR® is a well-established material in critical components of microlithography wafer steppers and is offered with an extremely low coefficient of thermal expansion (CTE), the tightest tolerance available on the market. SCHOTT is continuously improving its manufacturing processes and its methods to measure and characterize the CTE behavior of ZERODUR® to fulfill the ever tighter CTE specifications for wafer stepper components. In this paper we present the ZERODUR® lithography roadmap on CTE metrology and tolerance. Additionally, simulation calculations based on a physical model are presented that predict the long-term CTE behavior of ZERODUR® components to optimize the dimensional stability of precision positioning devices. CTE data of several low-thermal-expansion materials are compared regarding their temperature dependence between -50°C and +100°C. ZERODUR® TAILORED 22°C fulfills the tight CTE tolerance of ±10 ppb/K within the broadest temperature interval of all materials in this investigation. The data presented in this paper explicitly demonstrate the capability of ZERODUR® to enable the nanometer precision required for future generations of lithography equipment and processes.

  9. Photographic copy of reproduced photograph dated 1942. Exterior view, west ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photographic copy of reproduced photograph dated 1942. Exterior view, west elevation. Building camouflaged during World War II. - Grand Central Air Terminal, 1310 Air Way, Glendale, Los Angeles County, CA

  10. Reproducibility of computational workflows is automated using continuous analysis.

    PubMed

    Beaulieu-Jones, Brett K; Greene, Casey S

    2017-03-13

    Replication, validation and extension of experiments are crucial for scientific progress. Computational experiments are scriptable and should be easy to reproduce. However, computational analyses are designed and run in a specific computing environment, which may be difficult or impossible to match using written instructions. We report the development of continuous analysis, a workflow that enables reproducible computational analyses. Continuous analysis combines Docker, a container technology akin to virtual machines, with continuous integration, a software development technique, to automatically rerun a computational analysis whenever updates or improvements are made to source code or data. This enables researchers to reproduce results without contacting the study authors. Continuous analysis allows reviewers, editors or readers to verify reproducibility without manually downloading and rerunning code and can provide an audit trail for analyses of data that cannot be shared.

  11. Evaluation of reproducibility and reliability of 3D soft tissue analysis using 3D stereophotogrammetry.

    PubMed

    Plooij, J M; Swennen, G R J; Rangel, F A; Maal, T J J; Schutyser, F A C; Bronkhorst, E M; Kuijpers-Jagtman, A M; Bergé, S J

    2009-03-01

    In 3D photographs the bony structures are neither available nor palpable; therefore, bone-related landmarks, such as the soft tissue gonion, need to be redefined. The purpose of this study was to determine the reproducibility and reliability of 49 soft tissue landmarks, including newly defined 3D bone-related soft tissue landmarks, with the use of 3D stereophotogrammetric images. Two observers carried out soft-tissue analysis on 3D photographs twice for 20 patients. A reference frame and 49 landmarks were identified on each 3D photograph. The paired Student's t-test was used to test the reproducibility and Pearson's correlation coefficient to determine the reliability of the landmark identification. Intra- and interobserver reproducibility of the landmarks was high. The study showed a high reliability coefficient for intraobserver (0.97 (0.90 - 0.99)) and interobserver reliability (0.94 (0.69 - 0.99)). Identification of the landmarks in the midline was more precise than identification of the paired landmarks. In conclusion, the redefinition of bone-related soft tissue 3D landmarks in combination with the 3D photograph reference system resulted in an accurate and reliable 3D-photograph-based soft tissue analysis. This shows that hard tissue data are not needed to perform accurate soft tissue analysis.

  12. Repeatability and Reproducibility of Compression Strength Measurements Conducted According to ASTM E9

    NASA Technical Reports Server (NTRS)

    Luecke, William E.; Ma, Li; Graham, Stephen M.; Adler, Matthew A.

    2010-01-01

    Ten commercial laboratories participated in an interlaboratory study to establish the repeatability and reproducibility of compression strength tests conducted according to ASTM International Standard Test Method E9. The test employed a cylindrical aluminum AA2024-T351 test specimen. Participants measured elastic modulus and 0.2 % offset yield strength, YS(0.2 % offset), using an extensometer attached to the specimen. The repeatability and reproducibility of the yield strength measurement, expressed as coefficients of variation, were cv(sub r) = 0.011 and cv(sub R) = 0.020. The reproducibility of the test across the laboratories was among the best that has been reported for uniaxial tests. The reported data indicated that using diametrically opposed extensometers, instead of a single extensometer, doubled the precision of the test method. Laboratories that did not lubricate the ends of the specimen measured yield stresses and elastic moduli that were smaller than those measured in laboratories that lubricated the specimen ends. A finite element analysis of the test specimen deformation for frictionless and perfect friction could not explain the discrepancy, however. The moduli measured from stress-strain data were reanalyzed using a technique that finds the optimal fit range and applies several quality checks to the data. The error in modulus measurements from stress-strain curves generally increased as the fit range decreased to less than 40 % of the stress range.
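Repeatability and reproducibility coefficients of variation like cv(sub r) and cv(sub R) follow the usual interlaboratory variance decomposition: a within-lab (repeatability) variance plus a between-lab component, as in ASTM E691-style analyses. A sketch with invented, balanced yield-strength data (not the study's results):

```python
import numpy as np

def interlab_cv(results):
    """Repeatability (within-lab) and reproducibility (between-lab) CVs.
    `results` is a list of per-laboratory arrays of replicate measurements,
    assumed balanced (same number of replicates per lab)."""
    labs = [np.asarray(r, float) for r in results]
    n = len(labs[0])                                  # replicates per lab
    grand = np.mean([lab.mean() for lab in labs])
    s_r2 = np.mean([lab.var(ddof=1) for lab in labs])        # within-lab variance
    s_means2 = np.var([lab.mean() for lab in labs], ddof=1)  # variance of lab means
    s_L2 = max(s_means2 - s_r2 / n, 0.0)                     # between-lab component
    s_R2 = s_r2 + s_L2                                       # reproducibility variance
    return np.sqrt(s_r2) / grand, np.sqrt(s_R2) / grand

# hypothetical 0.2 % offset yield strengths (MPa), three labs x three replicates
cv_r, cv_R = interlab_cv([[325, 327, 326], [330, 331, 329], [323, 324, 325]])
```

By construction cv_R ≥ cv_r; the two coincide only when the labs agree perfectly on average.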

  13. Reproducibility of parameters of postocclusive reactive hyperemia measured by diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Vidal-Rosas, Ernesto E.; Billings, Stephen A.; Chico, Timothy; Coca, Daniel

    2016-06-01

    The application of near-infrared spectroscopy (NIRS) to assess microvascular function has shown promising results. An important limitation when using a single source-detector pair, however, is the lack of depth sensitivity. Diffuse optical tomography (DOT) overcomes this limitation using an array of sources and detectors that allow the reconstruction of volumetric hemodynamic changes. This study compares the key parameters of postocclusive reactive hyperemia measured in the forearm using standard NIRS and DOT. We show that while the mean parameter values are similar for the two techniques, DOT achieves much better reproducibility, as measured by the intraclass correlation coefficient (ICC). We show that DOT achieves high reproducibility for muscle oxygen consumption (ICC: 0.99), time to maximal HbO2 (ICC: 0.94), maximal HbO2 (ICC: 0.99), and time to maximal HbT (ICC: 0.99). Absolute reproducibility as measured by the standard error of measurement is consistently smaller and close to zero (ideal value) across all parameters measured by DOT compared to NIRS. We conclude that DOT provides a more robust characterization of the reactive hyperemic response and show how the availability of volumetric hemodynamic changes allows the identification of areas of temporal consistency, which could help characterize more precisely the microvasculature.

  14. Comment on "Estimating the reproducibility of psychological science".

    PubMed

    Gilbert, Daniel T; King, Gary; Pettigrew, Stephen; Wilson, Timothy D

    2016-03-04

    A paper from the Open Science Collaboration (Research Articles, 28 August 2015, aac4716) attempting to replicate 100 published studies suggests that the reproducibility of psychological science is surprisingly low. We show that this article contains three statistical errors and provides no support for such a conclusion. Indeed, the data are consistent with the opposite conclusion, namely, that the reproducibility of psychological science is quite high.

  15. Reproducibility with repeat CT in radiomics study for rectal cancer

    PubMed Central

    Hu, Panpan; Wang, Jiazhou; Zhong, Haoyu; Zhou, Zhen; Shen, Lijun; Hu, Weigang; Zhang, Zhen

    2016-01-01

    Purpose To evaluate the reproducibility of radiomics features by repeating computed tomographic (CT) scans in rectal cancer, and to choose stable radiomics features for rectal cancer. Results Volume-normalized features are much more reproducible than unnormalized features. The average value of all slices is the most reproducible feature type in rectal cancer. Different filters have little effect on the reproducibility of radiomics features. For the average type features, 496 out of 775 features showed high reproducibility (ICC ≥ 0.8), 225 out of 775 showed medium reproducibility (0.8 > ICC ≥ 0.5) and 54 out of 775 showed low reproducibility (ICC < 0.5). Methods 40 rectal cancer patients with stage II disease were enrolled in this study, each of whom underwent two CT scans within an average of 8.7 days. 775 radiomics features were defined in this study. For each feature, five different values (value from the largest slice, maximum value, minimum value, average value of all slices and value from the superposed intermediate matrix) were extracted. A LOG filter with different parameters was also applied to these images to find stable filter values. Concordance correlation coefficients (CCC) and inter-class correlation coefficients (ICC) of the two CT scans were calculated to assess reproducibility, based on original features and volume-normalized features. Conclusions Features are recommended to be normalized to volume in radiomics analysis. The average type radiomics features are the most stable features in rectal cancer. Further analysis of these features of rectal cancer can be warranted for treatment monitoring and prognosis prediction. PMID:27669756
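The concordance correlation coefficient used to compare the two scans is Lin's CCC, which penalizes both poor correlation and any systematic shift between scans. A minimal sketch with hypothetical feature values (not the study's data):

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two scans of one feature:
    2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

# hypothetical radiomics feature values for five patients, scan 1 vs scan 2
scan1 = [0.80, 1.10, 0.95, 1.30, 0.70]
scan2 = [0.82, 1.05, 0.98, 1.28, 0.72]
ccc = lin_ccc(scan1, scan2)
```

Unlike Pearson's r, the CCC drops below 1 even for perfectly correlated scans if one scan is systematically offset from the other, which is exactly the behavior wanted for a test-retest reproducibility metric.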

  16. Accuracy Assessment of Coastal Topography Derived from Uav Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

    To monitor coastal environments, Unmanned Aerial Vehicles (UAVs) are a low-cost and easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces Digital Surface Models (DSMs) with similar accuracy. To evaluate DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and the photogrammetry process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy was estimated by comparison with GNSS surveys. Two parameters were tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs. 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with high vertical accuracy (< 10 cm). Georeferencing the DSM requires a homogeneous distribution and a large number of GCPs. The accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm); the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank does not allow the accuracy to improve when the spatial resolution of the images is decreased.
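Vertical accuracy figures like the < 10 cm above are typically computed as a root-mean-square error of DSM elevations against independent GNSS check points; a minimal sketch with invented elevations:

```python
import math

def vertical_rmse(dsm_z, gnss_z):
    """Vertical RMSE of DSM elevations against GNSS check points (same units)."""
    diffs = [d - g for d, g in zip(dsm_z, gnss_z)]
    return math.sqrt(sum(e * e for e in diffs) / len(diffs))

# hypothetical elevations (m) at five GNSS check points
dsm = [2.05, 3.12, 1.48, 4.01, 2.77]
gnss = [2.00, 3.05, 1.52, 4.08, 2.80]
rmse = vertical_rmse(dsm, gnss)  # well under the 10 cm target here
```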

  17. On The Reproducibility of Seasonal Land-surface Climate

    SciTech Connect

    Phillips, T J

    2004-10-22

    The sensitivity of the continental seasonal climate to initial conditions is estimated from an ensemble of decadal simulations of an atmospheric general circulation model with the same specifications of radiative forcings and monthly ocean boundary conditions, but with different initial states of atmosphere and land. As measures of the ''reproducibility'' of continental climate for different initial conditions, spatio-temporal correlations are computed across paired realizations of eleven model land-surface variables in which the seasonal cycle is either included or excluded--the former case being pertinent to climate simulation, and the latter to seasonal anomaly prediction. It is found that the land-surface variables which include the seasonal cycle are impacted only marginally by changes in initial conditions; moreover, their seasonal climatologies exhibit high spatial reproducibility. In contrast, the reproducibility of a seasonal land-surface anomaly is generally low, although it is substantially higher in the Tropics; its spatial reproducibility also markedly fluctuates in tandem with warm and cold phases of the El Nino/Southern Oscillation. However, the overall degree of reproducibility depends strongly on the particular land-surface anomaly considered. It is also shown that the predictability of a land-surface anomaly implied by its reproducibility statistics is consistent with what is inferred from more conventional predictability metrics. Implications of these results for climate model intercomparison projects and for operational forecasts of seasonal continental climate also are elaborated.
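The reproducibility measure described, a correlation computed across paired realizations of a land-surface variable, can be sketched as a Pearson correlation over flattened space-time fields; the fields below are invented for illustration:

```python
import numpy as np

def reproducibility(run_a, run_b):
    """Spatio-temporal correlation across paired model realizations:
    Pearson r between flattened (grid cells x seasons) fields of one variable."""
    a = np.asarray(run_a, float).ravel()
    b = np.asarray(run_b, float).ravel()
    return np.corrcoef(a, b)[0, 1]

# hypothetical seasonal soil-moisture fields (3 grid cells x 4 seasons),
# one per ensemble member started from different initial conditions
run1 = [[0.30, 0.25, 0.20, 0.28], [0.40, 0.35, 0.30, 0.38], [0.22, 0.18, 0.15, 0.20]]
run2 = [[0.31, 0.24, 0.21, 0.27], [0.39, 0.36, 0.29, 0.39], [0.23, 0.17, 0.16, 0.21]]
r = reproducibility(run1, run2)
```

Including the seasonal cycle (as here) inflates the shared variance and hence r; removing it, as the study does for anomaly prediction, isolates the much less reproducible anomaly component.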

  18. Transparency, usability, and reproducibility: Guiding principles for improving comparative databases using primates as examples.

    PubMed

    Borries, Carola; Sandel, Aaron A; Koenig, Andreas; Fernandez-Duque, Eduardo; Kamilar, Jason M; Amoroso, Caroline R; Barton, Robert A; Bray, Joel; Di Fiore, Anthony; Gilby, Ian C; Gordon, Adam D; Mundry, Roger; Port, Markus; Powell, Lauren E; Pusey, Anne E; Spriggs, Amanda; Nunn, Charles L

    2016-09-01

    Recent decades have seen rapid development of new analytical methods to investigate patterns of interspecific variation. Yet these cutting-edge statistical analyses often rely on data of questionable origin, varying accuracy, and weak comparability, which seem to have reduced the reproducibility of studies. It is time to improve the transparency of comparative data while also making these improved data more widely available. We, the authors, met to discuss how transparency, usability, and reproducibility of comparative data can best be achieved. We propose four guiding principles: 1) data identification with explicit operational definitions and complete descriptions of methods; 2) inclusion of metadata that capture key characteristics of the data, such as sample size, geographic coordinates, and nutrient availability (for example, captive versus wild animals); 3) documentation of the original reference for each datum; and 4) facilitation of effective interactions with the data via user friendly and transparent interfaces. We urge reviewers, editors, publishers, database developers and users, funding agencies, researchers publishing their primary data, and those performing comparative analyses to embrace these standards to increase the transparency, usability, and reproducibility of comparative studies.

  19. Improving the precision of astrometry for space debris

    SciTech Connect

    Sun, Rongyu; Zhao, Changyin; Zhang, Xiaoxiang

    2014-03-01

    The data reduction method for optical space debris observations has much in common with that used for surveying near-Earth objects; however, several specific issues make image degradation particularly severe, which makes precise astrometry difficult to obtain. An automatic image reconstruction method, based on mathematical morphology operators, was developed to improve astrometric precision for space debris. Variable structural elements along multiple directions are adopted for image transformation, and the resultant images are then stacked to obtain a final result. To evaluate its efficiency, trial observations were made of Global Positioning System satellites, and the improvement in astrometric accuracy was assessed by comparison with the reference positions. The results of our experiments indicate that the influence of degradation in astrometric CCD images is reduced, and the positional accuracy of both debris objects and reference stars is improved distinctly. Our technique will contribute significantly to optical data reduction and high-precision astrometry for space debris.
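    A minimal sketch of the directional-morphology idea, assuming SciPy's grey-scale opening with line-shaped structuring elements at several orientations, stacked by a pixelwise maximum. The element length and angle set here are illustrative assumptions, not the authors' actual parameters:

    ```python
    import numpy as np
    from scipy import ndimage

    def line_footprint(length, angle_deg):
        """Binary line-shaped structuring element at a given orientation."""
        fp = np.zeros((length, length), dtype=bool)
        c = length // 2
        t = np.deg2rad(angle_deg)
        for r in range(-c, c + 1):
            i = c + int(round(r * np.sin(t)))
            j = c + int(round(r * np.cos(t)))
            fp[i, j] = True
        return fp

    def directional_reconstruction(image, length=7, angles=(0, 45, 90, 135)):
        """Grey-scale opening with line elements at several orientations,
        stacked by taking the pixelwise maximum of the resultant images."""
        opened = [ndimage.grey_opening(image, footprint=line_footprint(length, a))
                  for a in angles]
        return np.max(opened, axis=0)
    ```

    An opening with a line element suppresses features narrower than the line in that direction; taking the maximum over orientations retains elongated structures (trails, star images smeared by tracking) while rejecting isolated noise.
    
    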

  20. Precise Truss Assembly using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2013-01-01

    We describe an Intelligent Precision Jigging Robot (IPJR), which allows high precision assembly of commodity parts with low-precision bonding. We present preliminary experiments in 2D that are motivated by the problem of assembling a space telescope optical bench on orbit using inexpensive, stock hardware and low-precision welding. An IPJR is a robot that acts as the precise "jigging", holding parts of a local assembly site in place while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (in this case, a human using only low precision tools), to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. We report the challenges of designing the IPJR hardware and software, analyze the error in assembly, document the test results over several experiments including a large-scale ring structure, and describe future work to implement the IPJR in 3D and with micron precision.

  1. Precise Truss Assembly Using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, W. R.; Correll, Nikolaus

    2014-01-01

    Hardware and software design and system integration for an intelligent precision jigging robot (IPJR), which allows high precision assembly using commodity parts and low-precision bonding, is described. Preliminary 2D experiments that are motivated by the problem of assembling space telescope optical benches and very large manipulators on orbit using inexpensive, stock hardware and low-precision welding are also described. An IPJR is a robot that acts as the precise "jigging", holding parts of a local structure assembly site in place, while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (for this prototype, a human using only low precision tools) to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. The analysis of the assembly error and the results of building a square structure and a ring structure are discussed. Options for future work, extending the IPJR paradigm to building 3D structures at micron precision, are also summarized.

  2. [Precision nutrition in the era of precision medicine].

    PubMed

    Chen, P Z; Wang, H

    2016-12-06

    Precision medicine has been increasingly incorporated into clinical practice and is enabling a new era for disease prevention and treatment. As an important constituent of precision medicine, precision nutrition has also been drawing more attention during physical examinations. The main aim of precision nutrition is to provide safe and efficient intervention methods for disease treatment and management, through fully considering the genetics, lifestyle (dietary, exercise and lifestyle choices), metabolic status, gut microbiota and physiological status (nutrient level and disease status) of individuals. Three major components should be considered in precision nutrition: individual criteria for sufficient nutritional status, biomarker monitoring or techniques for nutrient detection, and the applicable therapeutic or intervention methods. It is suggested that, in clinical practice, many inherited and chronic metabolic diseases might be prevented or managed through precision nutritional intervention. For generally healthy populations, because lifestyles, dietary factors, genetic factors and environmental exposures vary among individuals, precision nutrition is warranted to improve their physical activity and reduce disease risks. In summary, research and practice are leading toward precision nutrition becoming an integral constituent of clinical nutrition and disease prevention in the era of precision medicine.

  3. Design of high-precision ranging system for laser fuze

    NASA Astrophysics Data System (ADS)

    Chen, Shanshan; Zhang, He; Xu, Xiaobin

    2016-10-01

    To address the problem of high-precision ranging in the circumferential scanning probe laser proximity fuze, a new type of pulsed laser ranging system has been designed. The laser transmitting module, laser receiving module, and ranging processing module are each described. The factors affecting ranging accuracy are discussed, and methods of improving it are studied. The high-precision ranging system is built around a general-purpose high-performance C8051FXXX microprocessor, with the time interval measurement chip TDC-GP21 used to implement the timing. A PCB was fabricated to carry out the experiments, and the results prove that centimeter-level ranging accuracy has been achieved. This work can serve as a reference for the ranging system design of circumferential scanning probe laser proximity fuzes.
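    The centimeter-level figure follows directly from time-of-flight arithmetic. The ~90 ps timing resolution assumed below is a datasheet-order value for TDC chips of this class, not one reported in the abstract:

    ```python
    # Back-of-envelope sketch of pulsed time-of-flight ranging:
    # the laser pulse travels to the target and back, so distance = c * t / 2.
    C = 299_792_458.0            # speed of light, m/s

    def tof_range(t_seconds):
        """Round-trip time of flight -> one-way distance in meters."""
        return C * t_seconds / 2

    tdc_resolution = 90e-12                  # s, assumed timing resolution
    print(tof_range(66.7e-9))                # ~10 m target
    print(tof_range(tdc_resolution) * 100)   # range quantization: ~1.3 cm
    ```

    A single timing step of ~90 ps thus maps to ~1.3 cm of range, consistent with the centimeter-level accuracy reported.
    
    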

  4. Precision respiratory medicine and the microbiome.

    PubMed

    Rogers, Geraint B; Wesselingh, Steve

    2016-01-01

    A decade of rapid technological advances has provided an exciting opportunity to incorporate information relating to a range of potentially important disease determinants in the clinical decision-making process. Access to highly detailed data will enable respiratory medicine to evolve from one-size-fits-all models of care, which are associated with variable clinical effectiveness and high rates of side-effects, to precision approaches, where treatment is tailored to individual patients. The human microbiome has increasingly been recognised as playing an important part in determining disease course and response to treatment. Its inclusion in precision models of respiratory medicine, therefore, is essential. Analysis of the microbiome provides an opportunity to develop novel prognostic markers for airways disease, improve definition of clinical phenotypes, develop additional guidance to aid treatment selection, and increase the accuracy of indicators of treatment effect. In this Review we propose that collaboration between researchers and clinicians is needed if respiratory medicine is to replicate the successes of precision medicine seen in other clinical specialties.

  5. Personalized Proteomics: The Future of Precision Medicine.

    PubMed

    Duarte, Trevor T; Spencer, Charles T

    2016-01-01

    Medical diagnostics and treatment have advanced from a one-size-fits-all science to treatment of the patient as a unique individual. Currently, this is limited solely to genetic analysis. However, epigenetic, transcriptional, proteomic, posttranslational, metabolic, and environmental factors influence a patient's response to disease and treatment. As more analytical and diagnostic techniques are incorporated into medical practice, the personalized medicine initiative transitions to precision medicine, giving a holistic view of the patient's condition. The high accuracy and sensitivity of mass spectrometric analysis of proteomes is well suited for the incorporation of proteomics into precision medicine. This review begins with an overview of the advance to precision medicine and the current state of the art in technology and instrumentation for mass spectrometry analysis. Thereafter, it focuses on the benefits and potential uses of personalized proteomic analysis in the diagnosis and treatment of individual patients. In conclusion, it calls for a synthesis among basic-science researchers, clinical researchers, and practicing clinicians to design proteomic studies that generate meaningful and applicable translational medicine. As clinical proteomics is just coming out of its infancy, this overview is provided for the new initiate.

  6. Personalized Proteomics: The Future of Precision Medicine

    PubMed Central

    Duarte, Trevor T.; Spencer, Charles T.

    2016-01-01

    Medical diagnostics and treatment have advanced from a one-size-fits-all science to treatment of the patient as a unique individual. Currently, this is limited solely to genetic analysis. However, epigenetic, transcriptional, proteomic, posttranslational, metabolic, and environmental factors influence a patient’s response to disease and treatment. As more analytical and diagnostic techniques are incorporated into medical practice, the personalized medicine initiative transitions to precision medicine, giving a holistic view of the patient’s condition. The high accuracy and sensitivity of mass spectrometric analysis of proteomes is well suited for the incorporation of proteomics into precision medicine. This review begins with an overview of the advance to precision medicine and the current state of the art in technology and instrumentation for mass spectrometry analysis. Thereafter, it focuses on the benefits and potential uses of personalized proteomic analysis in the diagnosis and treatment of individual patients. In conclusion, it calls for a synthesis among basic-science researchers, clinical researchers, and practicing clinicians to design proteomic studies that generate meaningful and applicable translational medicine. As clinical proteomics is just coming out of its infancy, this overview is provided for the new initiate. PMID:27882306

  7. Precision mass measurements of highly charged ions

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, A. A.; Bale, J. C.; Brunner, T.; Chaudhuri, A.; Chowdhury, U.; Ettenauer, S.; Frekers, D.; Gallant, A. T.; Grossheim, A.; Lennarz, A.; Mane, E.; MacDonald, T. D.; Schultz, B. E.; Simon, M. C.; Simon, V. V.; Dilling, J.

    2012-10-01

    The reputation of Penning trap mass spectrometry for accuracy and precision was established with singly charged ions (SCI); however, the achievable precision and resolving power can be extended by using highly charged ions (HCI). The TITAN facility has demonstrated these enhancements for long-lived (T1/2>=50 ms) isobars and low-lying isomers, including ^71Ge^21+, ^74Rb^8+, ^78Rb^8+, and ^98Rb^15+. The Q-value of ^71Ge enters into the neutrino cross section, and the use of HCI reduced the resolving power required to distinguish the isobars from 3 x 10^5 to 20. The precision achieved in the measurement of ^74Rb^8+, a superallowed β-emitter and candidate to test the CVC hypothesis, rivaled earlier measurements with SCI in a fraction of the time. The 111.19(22) keV isomeric state in ^78Rb was resolved from the ground state. Mass measurements of neutron-rich Rb and Sr isotopes near A = 100 aid in determining the r-process pathway. Advanced ion manipulation techniques and recent results will be presented.
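    The precision gain from highly charged ions follows from the cyclotron relation nu_c = qB/(2*pi*m): the measured frequency, and hence the achievable relative precision, scales with the charge state q. The 3.7 T field below is of the order of TITAN's trap field but is used here only for illustration:

    ```python
    import math

    E_CHARGE = 1.602176634e-19   # elementary charge, C
    AMU = 1.66053906660e-27      # atomic mass unit, kg

    def cyclotron_frequency(q, mass_amu, b_tesla=3.7):
        """nu_c = q B / (2 pi m), in Hz."""
        return q * E_CHARGE * b_tesla / (2 * math.pi * mass_amu * AMU)

    f_sci = cyclotron_frequency(1, 71)    # 71Ge as a singly charged ion, ~0.8 MHz
    f_hci = cyclotron_frequency(21, 71)   # 71Ge^21+
    assert abs(f_hci / f_sci - 21) < 1e-9  # frequency (and precision) gain ~ q
    ```
    
    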

  8. Reproducibility of residual cancer burden for prognostic assessment of breast cancer after neoadjuvant chemotherapy.

    PubMed

    Peintinger, Florentia; Sinn, Bruno; Hatzis, Christos; Albarracin, Constance; Downs-Kelly, Erinn; Morkowski, Jerzy; Gould, Rebekah; Symmans, W Fraser

    2015-07-01

    The residual cancer burden index was developed as a method to quantify residual disease ranging from pathological complete response to extensive residual disease. The aim of this study was to evaluate the inter-pathologist reproducibility of the residual cancer burden index score and category, and of their long-term prognostic utility. Pathology slides and pathology reports of 100 cases from patients treated in a randomized neoadjuvant trial were reviewed independently by five pathologists. The size of tumor bed, average percent overall tumor cellularity, average percent of the in situ cancer within the tumor bed, size of largest axillary metastasis, and number of involved nodes were assessed separately by each pathologist, and residual cancer burden categories were assigned to each case following calculation of the numerical residual cancer burden index score. Inter-pathologist agreement in the assessment of the continuous residual cancer burden score and its components, and agreement in the residual cancer burden category assignments, were analyzed. The overall concordance correlation coefficient for the agreement in residual cancer burden score among pathologists was 0.931 (95% confidence interval (CI) 0.908-0.949). Overall accuracy of the residual cancer burden score determination was 0.989. The kappa coefficient for overall agreement in the residual cancer burden category assignments was 0.583 (95% CI 0.539-0.626). The metastatic component of the residual cancer burden index showed stronger concordance between pathologists (overall concordance correlation coefficient=0.980; 95% CI 0.954-0.992) than the primary component (overall concordance correlation coefficient=0.795; 95% CI 0.716-0.853). At a median follow-up of 12 years, residual cancer burden determined by each of the pathologists had the same prognostic accuracy for distant recurrence-free survival and overall survival (overall concordance correlation coefficient=0.995; 95% CI 0.989-0.998). Residual cancer burden
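    The agreement statistic used throughout this study is Lin's concordance correlation coefficient, which penalizes both poor correlation and systematic offset between raters. The two raters' scores below are invented for illustration:

    ```python
    import numpy as np

    def lin_ccc(x, y):
        """Lin's CCC = 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        cov = np.mean((x - x.mean()) * (y - y.mean()))
        return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

    rater_a = [1.2, 2.4, 0.0, 3.1, 1.8]   # hypothetical RCB-like scores
    rater_b = [1.0, 2.6, 0.1, 3.0, 2.0]
    print(round(lin_ccc(rater_a, rater_b), 3))
    ```

    Unlike Pearson's r, the CCC equals 1 only when the two raters agree exactly, which is why it is the preferred reproducibility measure here.
    
    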

  9. Accuracy Evaluation of Electron-Probe Microanalysis as Applied to Semiconductors and Silicates

    NASA Technical Reports Server (NTRS)

    Carpenter, Paul; Armstrong, John

    2003-01-01

    An evaluation of precision and accuracy will be presented for representative semiconductor and silicate compositions. The accuracy of electron-probe analysis depends on high precision measurements and instrumental calibration, as well as correction algorithms and fundamental parameter data sets. A critical assessment of correction algorithms and mass absorption coefficient data sets can be made using the alpha factor technique. Alpha factor analysis can be used to identify systematic errors in data sets and in the microprobe standards used for calibration.

  10. Apparatus Makes Precisely Saturated Solutions

    NASA Technical Reports Server (NTRS)

    Pusey, Marc L.

    1989-01-01

    Simple laboratory apparatus establishes equilibrium conditions of temperature and concentration in solutions for use in precise measurements of saturation conditions. With this equipment, a typical measurement of the saturation concentration of a protein in solution is established and completed within about 24 hours. A precisely saturated solution is made by passing solvent or solution slowly through a column packed with solute held at a precisely controlled temperature. If necessary, the flow is stopped for an experimentally determined interval to allow equilibrium to be established in the column.

  11. Test Expectancy Affects Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  12. Centroid precision and orientation precision of planar localization microscopy.

    PubMed

    McGray, C; Copeland, C R; Stavis, S M; Geist, J

    2016-09-01

    The concept of localization precision, which is essential to localization microscopy, is formally extended from optical point sources to microscopic rigid bodies. Measurement functions are presented to calculate the planar pose and motion of microscopic rigid bodies from localization microscopy data. Physical lower bounds on the associated uncertainties - termed centroid precision and orientation precision - are derived analytically in terms of the characteristics of the optical measurement system and validated numerically by Monte Carlo simulations. The practical utility of these expressions is demonstrated experimentally by an analysis of the motion of a microelectromechanical goniometer indicated by a sparse constellation of fluorescent nanoparticles. Centroid precision and orientation precision, as developed here, are useful concepts due to the generality of the expressions and the widespread interest in localization microscopy for super-resolution imaging and particle tracking.
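    The centroid-precision idea can be validated with a small Monte Carlo run, much as the paper does: localizing N points of a rigid constellation averages down the per-point localization noise, so the centroid uncertainty approaches sigma/sqrt(N). All parameter values below are made up for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    sigma, n_points, n_trials = 10.0, 16, 20000    # nm, constellation size, trials
    constellation = rng.uniform(-500, 500, size=(n_points, 2))

    # Each trial localizes every point with independent Gaussian error sigma.
    noisy = constellation + sigma * rng.standard_normal((n_trials, n_points, 2))
    centroids = noisy.mean(axis=1)                 # per-trial centroid estimate

    measured = centroids.std(axis=0).mean()        # empirical centroid precision
    predicted = sigma / np.sqrt(n_points)          # sigma / sqrt(N) lower bound
    assert abs(measured - predicted) / predicted < 0.05
    ```
    
    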

  13. What do we mean by accuracy in geomagnetic measurements?

    USGS Publications Warehouse

    Green, A.W.

    1990-01-01

    High accuracy is what distinguishes measurements made at the world's magnetic observatories from other types of geomagnetic measurements. High accuracy in determining the absolute values of the components of the Earth's magnetic field is essential to studying geomagnetic secular variation and processes at the core mantle boundary, as well as some magnetospheric processes. In some applications of geomagnetic data, precision (or resolution) of measurements may also be important. In addition to accuracy and resolution in the amplitude domain, it is necessary to consider these same quantities in the frequency and space domains. New developments in geomagnetic instruments and communications make real-time, high accuracy, global geomagnetic observatory data sets a real possibility. There is a growing realization in the scientific community of the unique relevance of geomagnetic observatory data to the principal contemporary problems in solid Earth and space physics. Together, these factors provide the promise of a 'renaissance' of the world's geomagnetic observatory system. © 1990.

  14. Using prediction markets to estimate the reproducibility of scientific research

    PubMed Central

    Dreber, Anna; Pfeiffer, Thomas; Almenberg, Johan; Isaksson, Siri; Wilson, Brad; Chen, Yiling; Nosek, Brian A.; Johannesson, Magnus

    2015-01-01

    Concerns about a lack of reproducibility of statistically significant results have recently been raised in many fields, and it has been argued that this lack comes at substantial economic costs. We here report the results from prediction markets set up to quantify the reproducibility of 44 studies published in prominent psychology journals and replicated in the Reproducibility Project: Psychology. The prediction markets predict the outcomes of the replications well and outperform a survey of market participants’ individual forecasts. This shows that prediction markets are a promising tool for assessing the reproducibility of published scientific results. The prediction markets also allow us to estimate probabilities for the hypotheses being true at different testing stages, which provides valuable information regarding the temporal dynamics of scientific discovery. We find that the hypotheses being tested in psychology typically have low prior probabilities of being true (median, 9%) and that a “statistically significant” finding needs to be confirmed in a well-powered replication to have a high probability of being true. We argue that prediction markets could be used to obtain speedy information about reproducibility at low cost and could potentially even be used to determine which studies to replicate to optimally allocate limited resources into replications. PMID:26553988
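    The staged-testing arithmetic behind "a significant finding with a low prior needs replication" can be made concrete with Bayes' rule. The power and significance level below are conventional illustrative values, not figures from the study:

    ```python
    def posterior(prior, power=0.8, alpha=0.05):
        """P(hypothesis true | significant result), by Bayes' rule."""
        return power * prior / (power * prior + alpha * (1 - prior))

    p1 = posterior(0.09)   # after one significant finding with the 9% median prior
    p2 = posterior(p1)     # after a second, successful well-powered replication
    assert p1 < 0.7 < 0.9 < p2
    ```

    Starting from the 9% prior reported in the abstract, a single significant result leaves the hypothesis well short of certainty; a confirmed replication pushes it above 90%.
    
    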

  15. Validation and reproducibility of an Australian caffeine food frequency questionnaire.

    PubMed

    Watson, E J; Kohler, M; Banks, S; Coates, A M

    2017-01-05

    The aim of this study was to measure validity and reproducibility of a caffeine food frequency questionnaire (C-FFQ) developed for the Australian population. The C-FFQ was designed to assess average daily caffeine consumption using four categories of food and beverages including; energy drinks; soft drinks/soda; coffee and tea and chocolate (food and drink). Participants completed a seven-day food diary immediately followed by the C-FFQ on two consecutive days. The questionnaire was first piloted in 20 adults, and then, a validity/reproducibility study was conducted (n = 90 adults). The C-FFQ showed moderate correlations (r = .60), fair agreement (mean difference 63 mg) and reasonable quintile rankings indicating fair to moderate agreement with the seven-day food diary. To test reproducibility, the C-FFQ was compared to itself and showed strong correlations (r = .90), good quintile rankings and strong kappa values (κ = 0.65), indicating strong reproducibility. The C-FFQ shows adequate validity and reproducibility and will aid researchers in Australia to quantify caffeine consumption.
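    The kappa statistic reported for the C-FFQ's test-retest agreement is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The two administrations' category assignments below are made up:

    ```python
    import numpy as np

    def cohens_kappa(a, b):
        """Chance-corrected agreement between two categorical ratings."""
        a, b = np.asarray(a), np.asarray(b)
        cats = np.union1d(a, b)
        p_obs = np.mean(a == b)                                   # observed agreement
        p_exp = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # chance agreement
        return (p_obs - p_exp) / (1 - p_exp)

    admin_1 = [1, 2, 2, 3, 1, 3, 2, 1]   # category assigned on first administration
    admin_2 = [1, 2, 3, 3, 1, 3, 2, 2]   # category on second administration
    print(round(cohens_kappa(admin_1, admin_2), 2))  # → 0.63
    ```

    A kappa of 0.65, as reported, indicates substantial agreement beyond chance between the two administrations.
    
    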

  16. Using prediction markets to estimate the reproducibility of scientific research.

    PubMed

    Dreber, Anna; Pfeiffer, Thomas; Almenberg, Johan; Isaksson, Siri; Wilson, Brad; Chen, Yiling; Nosek, Brian A; Johannesson, Magnus

    2015-12-15

    Concerns about a lack of reproducibility of statistically significant results have recently been raised in many fields, and it has been argued that this lack comes at substantial economic costs. We here report the results from prediction markets set up to quantify the reproducibility of 44 studies published in prominent psychology journals and replicated in the Reproducibility Project: Psychology. The prediction markets predict the outcomes of the replications well and outperform a survey of market participants' individual forecasts. This shows that prediction markets are a promising tool for assessing the reproducibility of published scientific results. The prediction markets also allow us to estimate probabilities for the hypotheses being true at different testing stages, which provides valuable information regarding the temporal dynamics of scientific discovery. We find that the hypotheses being tested in psychology typically have low prior probabilities of being true (median, 9%) and that a "statistically significant" finding needs to be confirmed in a well-powered replication to have a high probability of being true. We argue that prediction markets could be used to obtain speedy information about reproducibility at low cost and could potentially even be used to determine which studies to replicate to optimally allocate limited resources into replications.

  17. Reproducibility of radiomics for deciphering tumor phenotype with imaging

    PubMed Central

    Zhao, Binsheng; Tan, Yongqiang; Tsai, Wei-Yann; Qi, Jing; Xie, Chuanmiao; Lu, Lin; Schwartz, Lawrence H.

    2016-01-01

    Radiomics (radiogenomics) characterizes tumor phenotypes based on quantitative image features derived from routine radiologic imaging to improve cancer diagnosis, prognosis, prediction and response to therapy. Although radiomic features must be reproducible to qualify as biomarkers for clinical care, little is known about how routine imaging acquisition techniques/parameters affect reproducibility. To begin to fill this knowledge gap, we assessed the reproducibility of a comprehensive, commonly-used set of radiomic features using a unique, same-day repeat computed tomography data set from lung cancer patients. Each scan was reconstructed at 6 imaging settings, varying slice thicknesses (1.25 mm, 2.5 mm and 5 mm) and reconstruction algorithms (sharp, smooth). Reproducibility was assessed using the repeat scans reconstructed at identical imaging setting (6 settings in total). In separate analyses, we explored differences in radiomic features due to different imaging parameters by assessing the agreement of these radiomic features extracted from the repeat scans reconstructed at the same slice thickness but different algorithms (3 settings in total). Our data suggest that radiomic features are reproducible over a wide range of imaging settings. However, smooth and sharp reconstruction algorithms should not be used interchangeably. These findings will raise awareness of the importance of properly setting imaging acquisition parameters in radiomics/radiogenomics research. PMID:27009765

  18. Reproducibility of radiomics for deciphering tumor phenotype with imaging.

    PubMed

    Zhao, Binsheng; Tan, Yongqiang; Tsai, Wei-Yann; Qi, Jing; Xie, Chuanmiao; Lu, Lin; Schwartz, Lawrence H

    2016-03-24

    Radiomics (radiogenomics) characterizes tumor phenotypes based on quantitative image features derived from routine radiologic imaging to improve cancer diagnosis, prognosis, prediction and response to therapy. Although radiomic features must be reproducible to qualify as biomarkers for clinical care, little is known about how routine imaging acquisition techniques/parameters affect reproducibility. To begin to fill this knowledge gap, we assessed the reproducibility of a comprehensive, commonly-used set of radiomic features using a unique, same-day repeat computed tomography data set from lung cancer patients. Each scan was reconstructed at 6 imaging settings, varying slice thicknesses (1.25 mm, 2.5 mm and 5 mm) and reconstruction algorithms (sharp, smooth). Reproducibility was assessed using the repeat scans reconstructed at identical imaging setting (6 settings in total). In separate analyses, we explored differences in radiomic features due to different imaging parameters by assessing the agreement of these radiomic features extracted from the repeat scans reconstructed at the same slice thickness but different algorithms (3 settings in total). Our data suggest that radiomic features are reproducible over a wide range of imaging settings. However, smooth and sharp reconstruction algorithms should not be used interchangeably. These findings will raise awareness of the importance of properly setting imaging acquisition parameters in radiomics/radiogenomics research.

  19. Reproducibility of radiomics for deciphering tumor phenotype with imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Binsheng; Tan, Yongqiang; Tsai, Wei-Yann; Qi, Jing; Xie, Chuanmiao; Lu, Lin; Schwartz, Lawrence H.

    2016-03-01

    Radiomics (radiogenomics) characterizes tumor phenotypes based on quantitative i